R Consulting Appliances are pre-built, customized nanobsd system images.
They can be installed on a 4 GB or larger SATA DOM, USB drive, SSD,
or hard-disk drive. A GPT layout divides the storage device into four
partitions: two root file systems, a persistent configuration partition,
and a persistent data partition. The root file system is read-only at
runtime, with the /etc and /var directories kept in RAM.
As such, any changes made in /etc must be explicitly saved
or they will be lost upon system shutdown.
R Consulting Appliance is distributed as two files: a compressed disk image and this document. To install the disk image, simply boot a live Unix OS on the target machine and execute:
live# gunzip -c path/to/diskimage.gz | dd bs=131072 of=/dev/disk
On Linux, disks are commonly named sda, sdb,
etc.; information on which device corresponds to which disk can be found
in the boot messages (dmesg). On FreeBSD, the
camcontrol devlist and geom disk list commands
will list the attached storage devices.
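The gunzip | dd pipeline can be sanity-checked on a scratch file before touching a real disk. Everything below uses temporary files under /tmp; no appliance image or real device is involved:

```shell
# Round-trip check of the install pipeline on a scratch file (no real disk touched).
printf 'pretend-disk-image-contents' > /tmp/diskimage
gzip -f /tmp/diskimage                                    # creates /tmp/diskimage.gz
gunzip -c /tmp/diskimage.gz | dd bs=131072 of=/tmp/restored 2>/dev/null
gunzip -c /tmp/diskimage.gz | cmp - /tmp/restored && echo install-pipeline-ok
```

When writing to a real device, of= must point at the whole-disk node (e.g. /dev/sdb on Linux or /dev/da0 on FreeBSD), and all data on that device will be destroyed.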
The cfg utility exists to manage /etc
changes. Executing cfg with no arguments will list the modified files.
cfg save will list the modified files and ask whether to
keep each change. cfg save -y will save all changes
unconditionally. For more information, read the cfg utility's man
page: man cfg.
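In practice the workflow might look like the sketch below; the edited file name is an example, and the transcript is illustrative, not literal cfg output:

```shell
# Edit something under /etc -- the change lives only in RAM until saved.
vi /etc/ntp.conf

cfg             # list files modified since the last save
cfg save        # review each changed file and choose whether to keep it
cfg save -y     # or: save all pending changes unconditionally
```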
Upgrades are shipped as compressed root file system images. The upgrade file can be downloaded to a remote Unix system, and then the following command should be executed:
remote# cat path/to/rootimage.gz | ssh appliance img update -z
Upgrade can also be applied from the appliance itself:
appliance# img update -z https://dropbox.rconsulting.bg/nanobsd/...
The img utility will write the OS upgrade into the alternate root file
system partition and instruct the bootloader to boot from it on the next
system startup. After the system reboots, if the upgraded OS image works
as intended, the upgraded partition can be made persistently active using
img commit. Otherwise, rebooting the system will revert to the
previous root file system image.
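Put together, a full upgrade cycle looks like the sketch below; the URL is the same truncated placeholder as in the example above, and verifying the new image is up to the operator:

```shell
# On the appliance: write the new image to the alternate root partition.
img update -z https://dropbox.rconsulting.bg/nanobsd/...

# Boot into the new image once (the bootloader change is one-shot).
reboot

# After verifying the upgraded system works as intended, make it permanent:
img commit
# If it misbehaves instead, just reboot again to fall back to the old image.
```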
The appliance contains a script to migrate/import an existing configuration; for convenience, it is also available at https://dropbox.rconsulting.bg/nanobsd/migrate-cfg. The script supports three ways of collecting the existing configuration: by running it on

  1. the target host before migration,
  2. the target host after migration,
  3. a local FreeBSD system, accessing the target host via ssh.
Let's look at each case in more detail.

To prepare things in advance, while the OS to be replaced is still running, do the following:

  a. Attach the storage device, where the nanobsd appliance software is to be installed, to the target host;
  b. Using the script, test whether the existing configuration can be imported, like this: /path/to/migrate-cfg -t;
  c. Mount its configuration partition (use gpart show -lp to find the right one; it is usually addressable as /dev/ufs/cdncfg or /dev/gpt/cdncfg; /mnt is a "spare" place to do temporary mounts; an example command would be mount /dev/gpt/cdncfg /mnt);
  d. Ask the script to import the existing configuration, like this: /path/to/migrate-cfg -d /mnt/etc (pay attention to /etc following /mnt);
  e. Unmount the configuration partition.
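Condensed into one session on the legacy host, the steps above might look like this (the device node and script path are examples):

```shell
# a. the appliance storage device is assumed to be already attached
/path/to/migrate-cfg -t               # b. dry-run: can the config be imported?
gpart show -lp                        # c. locate the appliance cfg partition
mount /dev/gpt/cdncfg /mnt            # c. mount it on a spare mountpoint
/path/to/migrate-cfg -d /mnt/etc      # d. import into <cfg partition>/etc
umount /mnt                           # e. done
```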
To import an existing configuration from a running appliance, do the following:

  a. Mount the legacy root file system of the OS to be replaced (see step c above for pointers on how to locate it); an example command could be mount /dev/da0p2 /mnt;
  b. Using the script, test whether the existing configuration can be imported, like this: /path/to/migrate-cfg -s /mnt/etc -t;
  c. Ask the script to import the existing configuration, like this: /path/to/migrate-cfg -s /mnt/etc -d /etc;
  d. Unmount the legacy root file system.
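As one session on the running appliance (the device node /dev/da0p2 is an example):

```shell
mount /dev/da0p2 /mnt                        # a. legacy root file system
/path/to/migrate-cfg -s /mnt/etc -t          # b. dry-run
/path/to/migrate-cfg -s /mnt/etc -d /etc     # c. import into the live /etc
umount /mnt                                  # d. done
```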
To import the configuration from a remote OS via
ssh, follow the same steps as in the first case, but append an
-s remotehost: parameter to migrate-cfg (pay
attention to the colon following remotehost). Be aware
that ssh has to be configured to use pre-shared keys, or
the password will have to be typed multiple times.
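With the -s remotehost: parameter appended, the commands of the first case become (remotehost stands for the target host's address):

```shell
/path/to/migrate-cfg -s remotehost: -t             # dry-run over ssh
mount /dev/gpt/cdncfg /mnt
/path/to/migrate-cfg -s remotehost: -d /mnt/etc    # import over ssh
umount /mnt
```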
The password for the root account is initially blank.
sshd (the secure shell daemon) is configured to allow only
pre-shared key authentication; password-based login is disabled by
default.
The management IP address is used for configuration and monitoring, and
is part of the management subnet: an isolated, secured, private
network from which each machine is assigned an IP address (for
example, 192.168.123.0/24). Since not all servers should be
visible from the Internet (transcoders export no public services),
remote administration can be provided using port forwarding. However, it
is important that this network be isolated and secured,
since many un-firewalled services are accessible within it.
An R Consulting Appliance requires the following mandatory configuration steps before it is fully active:

  1. Assign a valid name: add hostname="sysname.example.com" to /etc/rc.conf.d/hostname (or /etc/rc.conf) and execute service hostname restart;
  2. Assign a valid management IP address: add ifconfig_<netif>="inet 192.168.11.22/24 description 'mgmt'" to /etc/rc.conf.d/network (or /etc/rc.conf) and execute service netif restart; consult man rc.conf for more information;
  3. (Optionally) assign a valid gateway: add defaultrouter="192.168.11.1" to /etc/rc.conf.d/routing (or /etc/rc.conf) and execute service routing restart; consult man rc.conf for more information;
  4. Configure DNS: populate /etc/resolv.conf;
  5. Synchronize the clock: execute ntpdate pool.ntp.org and start the time server with service ntpd start;
  6. Set the local timezone by running tzsetup;
  7. Add host name aliases to /etc/hosts: mgmt to resolve to the system's management IP address, loghost to resolve to the site's syslog server, and monhub to resolve to the site's R Consulting Monitoring server;
  8. Fix the GPT with gpart recover <disk>;
  9. Save the initial configuration with cfg save;
  10. Refer to the Initial configuration checklist for the respective role.
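Assuming a FreeBSD shell on the appliance, the mandatory steps might be scripted as below; the interface name em0, the disk name ada0, all addresses, and the alias targets are examples only and must be adapted to the site:

```shell
# 1. host name
echo 'hostname="sysname.example.com"' > /etc/rc.conf.d/hostname
service hostname restart

# 2. management IP address (em0 is an example interface name)
echo "ifconfig_em0=\"inet 192.168.11.22/24 description 'mgmt'\"" > /etc/rc.conf.d/network
service netif restart

# 3. default gateway (optional)
echo 'defaultrouter="192.168.11.1"' > /etc/rc.conf.d/routing
service routing restart

# 4. DNS, 5. clock, 6. timezone
echo 'nameserver 192.168.11.1' > /etc/resolv.conf
ntpdate pool.ntp.org && service ntpd start
tzsetup

# 7. host name aliases (example addresses)
printf '192.168.11.22 mgmt\n192.168.11.2 loghost\n192.168.11.3 monhub\n' >> /etc/hosts

# 8. fix GPT, 9. save everything to the cfg partition
gpart recover ada0
cfg save
```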
The cfg utility is able to verify the initial configuration, to
an extent, with cfg test.
For a general introduction to the system, read man intro.