Dedicated Server


This page has information about the dedicated server hardware that is powering this wiki and other services from my self-hosting project.


Hosting

The server is currently hosted by green.ch.


Hardware

TODO


Networking

Server name and IP address

  • Server name = pelargir.herzbube.ch
  • Server IP address = 82.195.228.21
  • Gateway = 82.195.228.17
  • Subnet mask = 255.255.255.248 (a /29)


Subnet

The /29 subnet comprises 8 IP addresses, but green.ch allows me to use only 3 of them. TODO: Which are the other two IP addresses?
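
For the record, the subnet boundaries follow from the netmask alone. A quick sketch, assuming the ipcalc package is installed:

# 255.255.255.248 is a /29, i.e. a block of 8 addresses; the block that
# contains 82.195.228.21 spans 82.195.228.16 (network) to .23 (broadcast).
ipcalc 82.195.228.21 255.255.255.248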


Reverse DNS

To be able to send out mail from the server, it needs a reverse DNS entry for its IP address. This is not a technical requirement of SMTP, but an established convention that helps receiving mail servers combat spam.

green.ch configures reverse DNS entries upon request via support ticket.

Currently 82.195.228.21 points back to pelargir.herzbube.ch.

root@pelargir:~# dig -x 82.195.228.21

; <<>> DiG 9.9.5-9+deb8u6-Debian <<>> -x 82.195.228.21
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 27612
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 4, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;21.228.195.82.in-addr.arpa.	IN	PTR

;; ANSWER SECTION:
21.228.195.82.in-addr.arpa. 310	IN	PTR	pelargir.herzbube.ch.

;; AUTHORITY SECTION:
228.195.82.in-addr.arpa. 310	IN	NS	ns4.genotec.ch.
228.195.82.in-addr.arpa. 310	IN	NS	ns3.genotec.ch.
228.195.82.in-addr.arpa. 310	IN	NS	ns1.genotec.ch.
228.195.82.in-addr.arpa. 310	IN	NS	ns2.genotec.ch.

;; Query time: 771 msec
;; SERVER: 146.228.101.28#53(146.228.101.28)
;; WHEN: Wed Jun 29 17:24:22 CEST 2016
;; MSG SIZE  rcvd: 169
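
A quick way to verify that forward and reverse DNS agree (forward-confirmed reverse DNS, which many receiving mail servers check):

# Both lookups should agree: the PTR record resolves to the host name,
# and the host name resolves back to the same address.
dig +short -x 82.195.228.21        # expect: pelargir.herzbube.ch.
dig +short pelargir.herzbube.ch    # expect: 82.195.228.21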


External services

iLO

iLO (Integrated Lights-Out) is HP's proprietary technology for remotely administering servers manufactured by HP. Its main purpose here is to let me reboot the server remotely if I can no longer connect via SSH.


iLO is reachable under its own dedicated IP address.
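
Since SSH access to iLO is disabled (see the configuration changes below), a remote reboot would go through the iLO web interface or, if IPMI-over-LAN is enabled in the iLO settings, through ipmitool. A hypothetical sketch; the address and credentials are placeholders, not the real values:

# Power-cycle the server via iLO's IPMI interface
ipmitool -I lanplus -H <ilo-ip-address> -U <ilo-user> -P <ilo-password> chassis power cycle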


Configuration changes

  • Administration > User Administration > Change user password
  • Administration > Access Settings > SSH Access = Disabled


Backup space

TODO


Disk

The system has two physical disks of 500 GB each. They are joined into a software RAID level 1 (a mirror).

Here is the fdisk output that documents the partitioning:

root@pelargir:~# fdisk /dev/sda

Command (m for help): p
Disk /dev/sda: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x3ba6d316

Device     Boot    Start       End   Sectors   Size Id Type
/dev/sda1  *        2048   3905535   3903488   1.9G fd Linux raid autodetect
/dev/sda2        3907582 976771071 972863490 463.9G  5 Extended
/dev/sda5        3907584  11718655   7811072   3.7G fd Linux raid autodetect
/dev/sda6       11720704 976771071 965050368 460.2G fd Linux raid autodetect



root@pelargir:~# fdisk /dev/sdb

Command (m for help): p
Disk /dev/sdb: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00071581

Device     Boot    Start       End   Sectors   Size Id Type
/dev/sdb1  *        2048   3905535   3903488   1.9G fd Linux raid autodetect
/dev/sdb2        3905536 976773167 972867632 463.9G  5 Extended
/dev/sdb5        3907584  11718655   7811072   3.7G fd Linux raid autodetect
/dev/sdb6       11720704 976771071 965050368 460.2G fd Linux raid autodetect
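
If one of the disks ever needs replacing, the partition table can be copied from the surviving disk to the new one before re-adding the RAID members. A minimal sketch, assuming /dev/sda survives and the replacement becomes /dev/sdb:

# Replicate the partition table from sda to the (empty) replacement disk,
# then re-add the partitions to their arrays so the mirrors resync.
sfdisk -d /dev/sda | sfdisk /dev/sdb
mdadm /dev/md0 --add /dev/sdb1
mdadm /dev/md1 --add /dev/sdb5
mdadm /dev/md2 --add /dev/sdb6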


Here's the software RAID 1 configuration:

root@pelargir:~# cat /proc/mdstat
Personalities : [raid1] 
md2 : active raid1 sdb6[2] sda6[1]
      482394112 blocks super 1.2 [2/2] [UU]
      bitmap: 2/4 pages [8KB], 65536KB chunk

md1 : active raid1 sdb5[2] sda5[1]
      3903488 blocks super 1.2 [2/2] [UU]
      
md0 : active raid1 sdb1[2] sda1[1]
      1950720 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>
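
For a more detailed view of a single array (state, sync progress, member devices), mdadm can be queried directly:

# Inspect one array in detail
mdadm --detail /dev/md2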


root@pelargir:~# cat /etc/mdadm/mdadm.conf
[...]
ARRAY /dev/md/0  metadata=1.2 UUID=343cebdf:6a730922:a65adb56:bcf8f0b6 name=GDP-LIN-228:0
ARRAY /dev/md/1  metadata=1.2 UUID=66771b06:e2572534:917232bc:2149095f name=GDP-LIN-228:1
ARRAY /dev/md/2  metadata=1.2 UUID=ab9e3d36:c8fd57b7:99789424:8e585f00 name=GDP-LIN-228:2
[...]
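
Should the ARRAY lines ever need to be regenerated (for example after replacing a disk), mdadm can emit them from the running arrays:

# Print ARRAY lines matching the currently assembled arrays; the output
# can be compared against (or merged into) /etc/mdadm/mdadm.conf
mdadm --detail --scan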


Here are the system's block device UUIDs:

root@pelargir:~# blkid
/dev/sda1: UUID="343cebdf-6a73-0922-a65a-db56bcf8f0b6" UUID_SUB="611cbb22-d9e2-945e-33d6-39b8ed2362d1" LABEL="GDP-LIN-228:0" TYPE="linux_raid_member" PARTUUID="3ba6d316-01"
/dev/sda5: UUID="66771b06-e257-2534-9172-32bc2149095f" UUID_SUB="dc23693f-b641-e812-20f1-153e15dc67f2" LABEL="GDP-LIN-228:1" TYPE="linux_raid_member" PARTUUID="3ba6d316-05"
/dev/sda6: UUID="ab9e3d36-c8fd-57b7-9978-94248e585f00" UUID_SUB="802807cc-7c25-44ba-69ea-5fe22890e2d4" LABEL="GDP-LIN-228:2" TYPE="linux_raid_member" PARTUUID="3ba6d316-06"
/dev/sdb1: UUID="343cebdf-6a73-0922-a65a-db56bcf8f0b6" UUID_SUB="493871a7-0be6-0a65-e454-cda98273e2e0" LABEL="GDP-LIN-228:0" TYPE="linux_raid_member" PARTUUID="00071581-01"
/dev/sdb5: UUID="66771b06-e257-2534-9172-32bc2149095f" UUID_SUB="ef471e39-e213-27a2-ba5b-6bd76067e1df" LABEL="GDP-LIN-228:1" TYPE="linux_raid_member" PARTUUID="00071581-05"
/dev/sdb6: UUID="ab9e3d36-c8fd-57b7-9978-94248e585f00" UUID_SUB="e20d6805-7115-3bbc-b346-3498af347507" LABEL="GDP-LIN-228:2" TYPE="linux_raid_member" PARTUUID="00071581-06"
/dev/md1: UUID="8b2e427d-92c8-472a-b07a-018993d9dfe9" TYPE="swap"
/dev/md0: UUID="e8b161be-6679-4476-95ab-4857e4a99074" TYPE="ext3"
/dev/md2: UUID="19348a31-6f2c-4f6a-88d5-3bc2a33aaf9e" TYPE="ext4"
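
The filesystem UUIDs (the /dev/md* lines) are what /etc/fstab would typically reference, so the mounts are independent of device naming. A minimal sketch of matching fstab entries derived from the blkid output above; the actual mount options on the server may differ:

UUID=19348a31-6f2c-4f6a-88d5-3bc2a33aaf9e  /      ext4  errors=remount-ro  0  1
UUID=e8b161be-6679-4476-95ab-4857e4a99074  /boot  ext3  defaults           0  2
UUID=8b2e427d-92c8-472a-b07a-018993d9dfe9  none   swap  sw                 0  0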


And here's the filesystem setup:

root@pelargir:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             10M     0   10M   0% /dev
tmpfs           785M   81M  704M  11% /run
/dev/md2        453G   65G  366G  15% /
tmpfs           2.0G  4.0K  2.0G   1% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/md0        1.8G   57M  1.7G   4% /boot


I'm not entirely sure how the GRUB boot process works, but the following snippet from /boot/grub/grub.cfg seems to indicate that the root filesystem mounted during boot is /dev/md2:

set root='mduuid/ab9e3d36c8fd57b7997894248e585f00'
if [ x$feature_platform_search_hint = xy ]; then
  search --no-floppy --fs-uuid --set=root --hint='mduuid/ab9e3d36c8fd57b7997894248e585f00'  19348a31-6f2c-4f6a-88d5-3bc2a33aaf9e
else
  search --no-floppy --fs-uuid --set=root 19348a31-6f2c-4f6a-88d5-3bc2a33aaf9e
fi
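
The filesystem UUID in the snippet can be cross-checked against the blkid output above; asking blkid which device carries that UUID should point at /dev/md2:

# Resolve a filesystem UUID to its block device
blkid -U 19348a31-6f2c-4f6a-88d5-3bc2a33aaf9e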

Initial system configuration

Initially the system was configured like this:

  • Barebones Debian with only minimal packages installed
  • Debian distribution = jessie (the stable release at the time)
  • Two system users: root, user01
  • SSH password login disabled for root, enabled for user01 (see the sketch after this list)
  • Default editor is nano
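
A minimal sketch of what the "disabled for root" part looks like in /etc/ssh/sshd_config, assuming the stock OpenSSH server shipped with jessie (the actual file was set up by green.ch and may differ):

# Root may log in with a key only; ordinary users may still use passwords
PermitRootLogin without-password
PasswordAuthentication yes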


A few basic changes I made to that configuration (a rough sketch of the corresponding commands follows the list):

  • Disable the CD-ROM line in /etc/apt/sources.list which the green.ch admins had left there for my amusement
  • Rename "user01" to "patrick"
  • Add public key to root's ~/.ssh/authorized_keys to allow SSH public key login
  • Install vim and make it the default editor
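
The list above translates roughly into the following commands. This is a reconstruction, not a recorded session, so the details may have differed:

# Comment out the cdrom entry in the APT sources
sed -i '/^deb cdrom:/s/^/#/' /etc/apt/sources.list

# Rename the login, its home directory and its primary group
usermod -l patrick -m -d /home/patrick user01
groupmod -n patrick user01

# Install vim and make it the system-wide default editor
apt-get install vim
update-alternatives --set editor /usr/bin/vim.basic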


... and then the real fun began 🙂