
Wednesday, December 23, 2015

Update on using Graphite with FreeNAS

A while back, I posted on using Graphite with FreeNAS. Well, there have been some changes with the recent versions, and this makes integrating Graphite with FreeNAS even easier, so it's time for an update. This applies to FreeNAS-9.3-STABLE.

FreeNAS collects metrics on itself using collectd, a nice program which does nothing but gather metrics, and gather them well. FreeNAS gathers basic metrics on itself - CPU, disk performance, disk space, network interfaces, memory, processes, swap, uptime, and ZFS stats - and logs them to RRD databases which can be accessed via the Reporting tab. However, as nice as that is, I much prefer the Graphite TSDB (time-series database) for storing and displaying metrics.

Previously, I was editing the collectd.conf directly, but since the collectd.conf is dynamically generated, I'd have to re-add the same block of code every time it was regenerated. So I decided to move my additions into files stored on my zpool, and use an Include directive added to the end of the native collectd.conf to pull those files in. At this point, all I add to the native collectd.conf is this line:

Include "/mnt/sto/config/collectd/*.conf"

This makes my edits really easy, and allows me to create a script to check for it and fix it if necessary - more on that later.

In the /mnt/sto/config/collectd/ directory, I have several files - graphite.conf, hostname.conf, ntpd.conf, and ping.conf.

The graphite.conf loads and defines the write_graphite plugin:

LoadPlugin write_graphite
<Plugin "write_graphite">
  <Node "graphite">
    Host "graphite.example.net"
    Port "2003"
    Protocol "tcp"
    LogSendErrors true
    Prefix "servers."
    Postfix ""
    StoreRates true
    AlwaysAppendDS false
    EscapeCharacter "_"
  </Node>
</Plugin>

It's worth mentioning that some of the other TSDBs out there accept Graphite's native plain-text protocol, so this could be used with them just as well. Or, if you had another collectd host, you could use collectd's "network" plugin to send the metrics there instead.
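
If you went the "network" plugin route, the block would look something like this sketch - the receiving hostname here is made up, and 25826 is collectd's default network port:

LoadPlugin network
<Plugin "network">
  # Forward all collected metrics to another collectd instance (hypothetical host)
  Server "collector.example.net" "25826"
</Plugin>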

The hostname.conf redefines the hostname. The native collectd.conf uses "localhost", and that does no good when logging to a graphite server which is receiving metrics from many hosts, so I force it to the hostname of my FreeNAS system:

Hostname "nas"

In order for this not to break the Reporting tab in FreeNAS (not that I use that anymore with the metrics in Graphite), I first need to move the local RRD databases to my zpool by checking "Reporting Database" under "System Dataset" in the "System" tab:



I then go to the RRD directory, move "localhost" to "nas", and then symlink nas to localhost:

lrwxr-xr-x   1 root  wheel       3 May 19  2015 localhost -> nas
drwxr-xr-x  83 root  wheel      83 Dec 20 10:23 nas

This way, redefining the hostname in collectd causes the RRD data to be written to the "nas" directory, but when the GUI looks for the "localhost" directory, it still finds what it's looking for and displays the metrics properly.
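
For reference, the shuffle is just a few commands run from the RRD directory - I'm assuming the usual /var/db/collectd/rrd location here, which may itself point at the system dataset once that checkbox is set:

service collectd stop
cd /var/db/collectd/rrd
mv localhost nas
ln -s nas localhost
service collectd start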

The ntpd.conf enables ntpd logging, which I use to monitor the time offsets on my FreeNAS box on my Icinga2 monitoring host:

LoadPlugin ntpd
<Plugin "ntpd">
        Host "localhost"
        Port 123
        ReverseLookups false
</Plugin>


Finally, ping.conf calls the Exec plugin to echo a value of "1" all the time:

LoadPlugin "exec"
<Plugin "ntpd">
  Exec "nobody:nobody" "/bin/echo" "PUTVAL nas/collectd/ping N:1"
</Plugin "ntpd">


I use this on my Icinga2 server to check the health of the collectd data, and have a dependency on this check for all the other Graphite-based checks. This way, if collectd breaks, I get alerted on collectd being broken - the actual problem. This prevents a flurry of alerts on all the things I'm checking from Graphite, which makes deciphering the actual problem more difficult.
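
As a rough illustration, the dependency looks something like this on the Icinga2 side - the "collectd-ping" service name and the graphite_check custom variable are placeholders for however your own checks are named:

apply Dependency "needs-collectd" to Service {
  # If the collectd heartbeat check is failing, suppress the Graphite-based checks
  parent_host_name = "nas"
  parent_service_name = "collectd-ping"
  disable_checks = true
  disable_notifications = true
  assign where service.vars.graphite_check == true
}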

So, I define the Graphite writer, I change the hostname so the metrics show up on the Graphite host with the proper servers.nas.* path, and I add two more groups of metrics to the default configuration. These configuration files are stored on my zpool, so even if my FreeNAS boot drive craps out (which actually happened last week) and I have to reload the OS from scratch, I don't lose these files.

Since I'm only adding one line to the bottom of the collectd.conf file, it becomes very easy to check for my addition and, if necessary, re-add it. I have a short script which I run via cron (the "Tasks" tab in the FreeNAS GUI):

#!/bin/bash

# Set the file path and the line I want to add
conf=/etc/local/collectd.conf
inc='Include "/mnt/sto/config/collectd/*.conf"'

# Fail if I'm not running as root
if (( EUID ))
then
  echo "ERROR: Must be run as root. Exiting." >&2
  exit 1
fi

# Check to see if the line is in the config file
if grep -q Include $conf
then
    : All good, exit quietly.
else
    : Missing the include line! Add it!
    echo "$inc" >> $conf
    service collectd restart
    logger -p user.warn -t "collectd" \
         "Added Include line to collectd.conf and restarted."

    echo "Added include to collectd.conf" | \
         mail -s "Collectd fixed on NAS" mymyselfandi@example.com
fi


If I reboot my FreeNAS system, the collectd.conf gets reverted. That's not a huge problem - at worst I wait 30 minutes for my cron job to run - but in 9.3, I can do even better. I can call the script at boot time as a postinit script from the Init/Shutdown Scripts section of "Tasks":

 

This way, when I boot the system, it runs the check script, which sees the missing Include line, adds it automatically, and restarts collectd so it resumes logging to my Graphite server.

This setup has proven to be wonderfully reliable, and unless/until native Graphite support is added to FreeNAS, should keep on working.

Monday, August 31, 2015

Creating an access jail "jump box" on FreeNAS

If you wish to have external access to your network through SSH, it's a very good idea to have a very limited-purpose "jump box" as the only externally accessible host, and then tightly limit who can log into it and what they can do when they get there. Here is what I've developed using a jail on a FreeNAS system.

I've stolen some ideas from DrKK's Definitive Guide to Installing OwnCloud in FreeNAS (or FreeBSD).

  1. Start with the latest version of FreeNAS. I'll leave it up to you to figure that part out.
  2. Create a standard jail, choose Advanced mode, make sure the IP is valid, and uncheck "VIMAGE"
  3. Log into the jail via "jls" and "jexec"
    jls
    sudo jexec access csh
  4. Remove all installed packages that aren't the pkg command:
    pkg info | awk '$1 !~ /^pkg-/ {print $1}' | xargs pkg remove -y
  5. Update installed files using the pkg command:
    pkg update
    pkg upgrade -y
    pkg will likely update itself.
  6. Install bash and openssh-portable via the pkg command:
    pkg install -y bash openssh-portable
     
  7. Move the old /etc/ssh directory to a safe place and create a symlink to /usr/local/etc/ssh
    mv /etc/ssh /etc/oldssh
    ln -s /usr/local/etc/ssh /etc/ssh
    NOTE: this step is purely for convenience and is not necessary but may avoid confusion since the native ssh files won't be used.
  8. Make sure your /usr/local/etc/ssh/sshd_config contains at least the following:
    Port 22
    AllowGroups user
    AddressFamily inet
    PermitRootLogin no
    PasswordAuthentication no
    PermitEmptyPasswords no
    PermitUserEnvironment yes
  9. Enable the openssh sshd and start it:
    echo openssh_enable=YES >> /etc/rc.conf
    service openssh start
  10. Verify that openssh is listening on port 22:
    sockstat -l4 | grep 22
  11. Create the users' restricted bin directory:
    mkdir -m 555 /home
    mkdir -m 0711 /home/bin
    chown root:wheel /home/bin

    This creates the directory owned by root and without read permission for the users.
  12. You can create symlinks in here for commands that the users will be allowed to run in their restricted shell. I prefer to take this a step further - since it's only a jump box, its only purpose is to ssh in and then ssh on to another system, so I restrict this even more by creating a shell script wrapper around the ssh command which limits the hosts that the user can log into from the jump box (a sketch of such a wrapper follows this list).

    If you have half a clue, you'll wonder how this prevents them from ssh'ing onward once they've reached a host they're allowed access to, and the answer is: if they have the permissions on that host - it doesn't. So it's not a fantastic level of security, but I wanted to see if I could do it. You'll also notice that you need to create a file /home/bin/sshauth.cfg, with lines of the form "username ALL" or "username host1 host2 ...", which dictates access.
  13. Symlink in the "logger" command to the /home/bin directory:
    ln -s /usr/bin/logger /home/bin
  14. Create the user group "user" (as called out in the sshd_config above) so the users can log in:
    pw groupadd user
  15. Create the users with each home directory under /home, with the shell /usr/local/bin/rbash, no password based authentication, and the group created in the previous step.
    adduser
  16. Change to the user's home directory and remove all the dot files
    cd /home/user
    rm .??*
  17. Create the following .bash_profile in the user's home directory:
    export PATH=/home/bin
    FROM=${SSH_CLIENT%% *}
    logger -p user.warn -t USER_LOGIN "User $LOGNAME logged in from $FROM"
    export HISTFILE=/dev/null
    [[ $TERM == xterm* ]] && echo -ne "\033]0;JAIL-$HOSTNAME\007"
    PS1="\!-$HOSTNAME\$ "
  18. The file permissions should be set, but confirm:
    chmod 644 .bash_profile
    chown root:wheel .bash_profile
  19. Create the .ssh directory and give it to the user:
    mkdir -m 700 .ssh
    chown user:user .ssh
  20. Install the user's authorized_keys file in the .ssh directory, and make sure the permissions are right:
    chown user:user .ssh/authorized_keys
    chmod 600 .ssh/authorized_keys
  21. Your user should be able to login at this point, and do nothing beyond what you've given them access to in the /home/bin directory.
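
For what it's worth, here is a minimal sketch of the kind of ssh wrapper I'm describing. Treat it as a starting point, not a finished product - the paths, the log tags, and the exact sshauth.cfg handling are my own choices:

#!/bin/sh
# /home/bin/ssh - restricted ssh wrapper for the jump box (sketch)
# Allowed destinations come from /home/bin/sshauth.cfg, one line per user:
#   "username ALL" or "username host1 host2 ..."

PATH=/bin:/usr/bin:/usr/local/bin
CFG=/home/bin/sshauth.cfg
TARGET="$1"

if [ -z "$TARGET" ]; then
    echo "usage: ssh <host>" >&2
    exit 1
fi

# Pull the list of hosts this user may reach from the config file
ALLOWED=$(awk -v u="$LOGNAME" '$1 == u { $1 = ""; print }' "$CFG")

for h in $ALLOWED; do
    if [ "$h" = "ALL" ] || [ "$h" = "$TARGET" ]; then
        logger -p user.warn -t JUMPBOX "$LOGNAME ssh to $TARGET"
        # Hand off to the real ssh; extra options are deliberately ignored
        exec /usr/local/bin/ssh "$TARGET"
    fi
done

logger -p user.warn -t JUMPBOX "$LOGNAME DENIED ssh to $TARGET"
echo "Access to $TARGET is not permitted." >&2
exit 1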

Thursday, April 23, 2015

Logging FreeNAS performance data into Graphite

Update 12/23/2015 - I now have an updated post which supersedes this post.

Update 12/2/2015 - This information is dated, and there's a really good way to handle FreeNAS logging to Graphite with FreeNAS 9.3 that I need to document. I'll update this post with a link once I get that post done. In the meantime, this is just a placeholder.

FreeNAS is a great NAS appliance that I have been known to install just about anywhere I can. One of the things that makes it so cool is the native support for RRD graphs which you can view in the "Reporting" tab. What would be cooler, though, is if it could log its data to the seriously awesome metrics collection software Graphite. I've been working on getting Graphite installed at work, and have always wanted metrics collection at my house (because: nerd) and so once I got a Graphite server up and running, one of the first things I did was modify my FreeNAS system to point to the Graphite server.

Here are the steps I followed on FreeNAS 9.2.1.8.  I'm overdue for FreeNAS 9.3 and once I've done the upgrade, I'll update these instructions as necessary.


  1. Install a Graphite server.  Four little words which sound so easy, but mask the thrill and heartbreak that can come with trying to accomplish this task. I tried several guides and was a little daunted when I saw most of them mentioning how much of a pain in the ass installing Graphite can be, but then I managed to find this nice, simple guide that used the EPEL packages on a CentOS box. Following those instructions, I managed to get two CentOS 6 boxes up and running in pretty short order, and then with some slight modifications, I set up a CentOS 7 graphite server at home.
  2. Make sure port 2003 on your Graphite box is visible to your FreeNAS box. This usually involves opening some firewall rules.
  3. SSH into your FreeNAS box. (If you don't know what this means, you probably never got to this step, as "Install a Graphite server" would have entirely broken your will to live.) You will also need to log into your FreeNAS box as either root, or a user that has sudo permission.
  4. Edit the collectd config file:
    1. sudo vi /etc/local/collectd.conf
    2. At the top of the file, change "Hostname" from "localhost" to your hostname for the NAS. Otherwise, your NAS will report to the Graphite host as "localhost", and that's less than useful.

      Hostname "nastard"
      ...
    3. There is a block of lines all starting with "LoadPlugin". At the bottom of this block, add "LoadPlugin write_graphite":

      ...
      LoadPlugin processes
      LoadPlugin rrdtool
      LoadPlugin swap
      LoadPlugin uptime
      LoadPlugin syslog
      LoadPlugin write_graphite

      ...
    4. At the bottom of the file, add the following block, substituting the hostname for your graphite hostname:

      ...

      <Plugin "write_graphite">
        <Node "graphite">
          Host "graphite.example.net"
          Port "2003"
          Protocol "tcp"
          LogSendErrors true
          Prefix "servers."
          Postfix ""
          StoreRates true
          AlwaysAppendDS false
          EscapeCharacter "_"
        </Node>
      </Plugin>
  5. Change to the directory "/var/db/collectd/rrd" - this is where FreeNAS logs the RRD metrics that are visible in the GUI. If we just restart collectd, it's going to start logging under the new hostname (since we changed that above), and that'll break the UI's RRD graphs. While we'd still have the data in Graphite, we can have our Graphite cake and RRD eat it too by doing the following steps.
  6. Shut down collectd:

    sudo service collectd stop
  7. Move the "localhost" directory (under /var/db/collectd/rrd) to whatever you set the hostname to in the collectd.conf above:

    sudo mv localhost nastard
  8. Symlink the directory back to "localhost":

    sudo ln -s nastard localhost
  9. Restart collectd:

    sudo service collectd start
  10. That's it! At this point, you can reload the FreeNAS GUI and see that you still have your RRD data, but more importantly, if you go to your Graphite GUI, you'll see that you should now be getting metrics.

Protips:

  • Collectd writes data every 10 seconds by default. If you write all your collectd data with the "servers." prefix as I've shown above, you can make sure your whisper files are configured for this interval with the following block in your /etc/carbon/storage-schemas.conf:

    [collectd]
    pattern = ^servers\.
    retentions = 10s:90d,1m:1y

    This will retain your full 10s metrics for 90 days, and 1-minute interval metrics for a year. That config results in each whisper file being 15MB, and with my NAS config (with 6 running jails) I have 220 whisper files for a total disk space of 1.2G. Considering disk space is pretty cheap, you could easily adjust these numbers up to retain more data for longer.  You should also read up on Graphite's aggregation settings, which control how the data is rolled up when it's saved at the coarser intervals (a sample storage-aggregation.conf follows below).

    Thanks to Ben K for pointing out that more than one or two aggregations will greatly increase the amount of disk access. Initially I had a four stage aggregation, but that would require a crapload of access happening with each write. Since Graphite is very IO intensive to begin with, that's not a good idea.
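
    For completeness, here's roughly what a matching entry in /etc/carbon/storage-aggregation.conf might look like - the xFilesFactor and aggregationMethod values here are just sane defaults, not anything FreeNAS-specific:

    [collectd]
    pattern = ^servers\.
    xFilesFactor = 0.1
    aggregationMethod = average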

Monday, February 24, 2014

Move large files with a progress bar

Sometimes I want to move around large files, and instead of just sitting there and wondering how they're going, I like to see a progress bar along the lines of an scp or rsync with the --progress option.  If you have the "pv" utility installed, this is quite easy.  The following command works with bash, ksh, or zsh:

for file in *; do echo "$file"; pv "$file" > "/path/to/destination/$file" && rm "$file"; done

Formatted for use in a script:

for file in *
do
    echo "$file"
    pv "$file" > "/path/to/destination/$file" && rm "$file"
done

Sunday, February 23, 2014

Triple mirroring on FreeNAS


I just set up a triple mirror on my home NAS:

NAME              STATE     READ WRITE CKSUM
sto               ONLINE       0     0     0
 mirror-0         ONLINE       0     0     0
   gptid/de1e...  ONLINE       0     0     0
   gptid/de63...  ONLINE       0     0     0
   gptid/c8a3...  ONLINE       0     0     0  (resilvering)



Triple mirror!  Three drives serving up the same exact one drive's worth of data, what is this insanity?  Paranoid much?  No, not really, it's step 1 of an upgrade.  I've been over the recommended 80% usage on my primary zpool for a couple weeks now, but with no "day" job, didn't think purchasing new drives was the best idea.  Well, I got some good news the other day, and I promptly celebrated by buying the drives I needed.  They arrived over the weekend, so now it's time to start the upgrade.

My original configuration was (2) 4TB drives in a simple mirror.  Many other folks set up their zpools as a RAIDZ (ZFS equivalent of a RAID-5) or a RAIDZ2 (ZFS version of RAID-6 with two parity disks) in order to get the most out of their storage, but I decided to keep my configuration simple and go with mirroring for a couple reasons:

  1. Performance.  Mirroring generally performs better than parity-based RAID setups as there's no math involved.  Does a home NAS need that much performance?  Probably not, but it's a nice perk.
  2. Ease of upgrades.  This is the real driving concern.  One of the limitations of ZFS is that you can't grow an existing RAIDZ vdev by adding disks - you expand a pool by adding another whole vdev (ideally one matching what you already have), or you start completely from scratch.  With a mirror, you have the simplest form of a vdev, with two disks, so each time I want to upgrade (and you know upgrades always happen) I can upgrade two disks at a time.  If I went with a basic 3-drive RAIDZ, I would have to buy 3 more drives to add on another 3-drive RAIDZ to the zpool.  If, like some of my friends, I ponied up for a 5 or 6 drive setup -- my next upgrade would be 5-6 drives at a time, and suddenly I'm looking for a case that can handle that many.  So, sticking with 2 drives in a mirror allows me to add another two drives each time I'm ready for an upgrade.  Yes, mirroring is the most "expensive" in terms of the amount of space that you get for the amount of disks you invest, but let's be honest, at this point, even WD Blacks are really pretty inexpensive for the amount of storage space that you get.

So, what's with the triple mirror?

Well, when I bought my first two drives, I got them at the same time.  It's entirely possible that they are from the same batch - which means that if there was some sort of defect in the batch, one drive failing might mean that the second drive will fail soon after.  That's the Achilles' heel of mirroring - if both drives in one mirror fail, you lose data.  RAIDZ2 (or RAID-6) can lose two drives anywhere in the array and be fine, but if the wrong two drives in your mirrored pool fail, then you're sunk.  It's back to backups - and you do have backups, right?

So, with two new drives coming in, what we have here is another pair that might also be from the same batch. Whatever to do? That's where the triple mirror comes in.
  1. Add one new drive to the system, reboot, partition the drive.
    To keep track of which drive is which, the following commands are useful:
      gpart list ada0
      camcontrol identify ada0
  2. Add that drive to the zpool via the "zpool attach" command, creating a triple mirror.
      zpool attach {pool} {existing disk} {new disk}
  3. Wait for the resilvering (ZFS-speak for rebuilding a mirror, get it?) to complete.
      zpool status {pool}
    For my 4TB drive, I saw an initial prediction of 8 hours  - and it wound up taking only about 6. That's NOT bad, and one of the reasons that mirroring beats RAIDZ* setups.  Another nice perk of ZFS is that since the RAID is aware of the filesystem, replacing a huge disk with a small amount of data written to it will only require that the data is rewritten.  With a conventional RAID controller, it has no idea what data has been written, so has to rewrite the entire disk.
  4. Remove one of the original drives from the mirror with zpool detach.
      zpool status {pool}
      zpool detach {pool} {disk}
  5. Blank the ZFS config on that drive
    As it turns out, this step is not necessary.  Once you zpool detach the old drive, it's clear enough that FreeNAS doesn't complain when you add it back in.
  6. Reinstall the old drive that was removed with the second new drive.
    Here's a chance to physically rearrange the drives if desired.  I put the first old/new pair in slots 1 and 2 in my NAS, giving them ada0 and ada1, so the next pair was ada2/3.  This isn't necessary, and since FreeNAS uses GPTID, the pools are unaffected.
  7. Extend the zpool with those two drives in a second mirror.
    I did this using the FreeNAS GUI.
  8. Profit!  Start filling up the now larger zpool.
Note that what I wind up with is a two-mirror zpool in which each mirror has one new drive and one old drive. Therefore, if there is a problem with either batch of drives, I'm less likely to lose both of the drives in the same mirror.
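
For illustration, the pool ends up looking something like this (GPT IDs elided, annotations mine):

NAME               STATE     READ WRITE CKSUM
sto                ONLINE       0     0     0
  mirror-0         ONLINE       0     0     0
    gptid/...      ONLINE       0     0     0   <- old drive A
    gptid/...      ONLINE       0     0     0   <- new drive 1
  mirror-1         ONLINE       0     0     0
    gptid/...      ONLINE       0     0     0   <- old drive B
    gptid/...      ONLINE       0     0     0   <- new drive 2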

Wednesday, February 19, 2014

Choosing the best OS for a simple Raspberry Pi server

I'm a big fan of FreeNAS.  I was first exposed to it while evaluating NAS (network attached storage) options at work.  I spent quite a bit of time with it, and the more I used it, the more I liked it.  While it's just a storage OS, the fact that it's based on FreeBSD gives us the ability to run fully functional FreeBSD jails, and this led me to consider fully replacing my old Linux-based home server with my FreeNAS server, using jails for the services outside of the FreeNAS functions, such as DNS, DHCP, mail (postfix) and web services.  This was all well and good until I had my DNS/DHCP server moved to a jail.  When I rebooted the FreeNAS system, I discovered that the system was coming up and trying to resolve addresses (the NTP servers, for example) before the jail had started.  While this wasn't a huge issue, it led me to look at a really lightweight server outside the FreeNAS system for critical services such as DNS and DHCP.  I almost immediately settled on the Raspberry Pi, a simple system on a chip (SoC) which was inexpensive and used very little power.  Not only would it make a good little server, but it's a cool system to play with, so I wound up buying two - my "prod" DNS/DHCP server, and my "dev" server where I could try new stuff.

To be clear, yes, there are many other ways that I could have solved the problem, but here I'm more interested in trying something new and getting my learn on, and the Pi looked like it could be fun to play with.  It's a home network, where you should be free to try cool new stuff just because.

In my selection process for a server OS, I should also mention that I have the most experience with the RedHat derived OS's such as CentOS and Fedora, so I'm most comfortable working in them.  It's not a 100% requirement, but it's nice to have.

The distros I tried:
  • Raspbian
    This is the most common Pi OS, and as such is the best supported, so this is an excellent choice for a server OS as it'll be easy to keep the software up to date, and it's got a wide selection of software to choose from.
    • pros: the most common, the most well known and tested, the most well supported.  Hard to argue with that, so it's where I started.
    • cons: it's a full desktop OS and as such contains tons of unneeded software, and it's not RedHat based.
  • Pidora
    This is Fedora ported to the ARM architecture.  With my professional linux sysadmin background being heavily based in RedHat and Fedora, I was eager to give this one a try.  Reading reviews of it online, however, it seems that this isn't as well supported.
    • pros: Based on Fedora with which I'm more comfortable
    • cons: not as well supported, not reviewed well.  I downloaded and installed it, and it looked like it was based on F18 when F20 is current in "mainstream" Fedora.
  • RedSleeve
    This one really caught my attention.  I've been running systems with RedHat for years, so I know it well and I like it.  RedSleeve is a port of RHEL to the ARM architecture, so would be perfect for a minimal server OS.
    • pros: an actual server OS, and a truly minimal install from the get-go.  Based on my favorite server OS, so I was immediately comfortable and had it singing happy tunes almost immediately.
    • cons: hand ported by a small group of folks, the latest version is 6.1 (CentOS and RHEL are quite a bit farther ahead) and the primary maintainer said that updates to 6.x will probably be less active as he'll be working on the 7.x release.  While I love the concept, I want something a bit closer to current.
  • FreeBSD
    Another promising selection since I'm getting more and more familiar with it from my experience with FreeNAS.  The Pi was originally unofficially supported, but is now part of the newly released FreeBSD 10.  I was able to get an img file and put it onto an SD card, but when I booted it up, I discovered that it didn't have the pkgng package management kit installed.  It offered to install it, but couldn't find it.  I then had to go with ports in order to install anything, and downloading and extracting the ports tree took several hours.  I was running on a Class 6 SD card, so that might contribute to the slowness, but it just seems to me that until pkgng is fully supported, FreeBSD really isn't ready for a proper server _quite_ yet - but hopefully it will be soon.
    • pros: another real server OS with a minimal install that looks like it's getting real attention.  
    • cons: not fully official with pkgng support yet - but as it gets more attention it's going to be a real option, I think.
  • MINImal raspBIAN
    Based on the raspbian, but built from the ground up as a minimal install.
    • pros: minimal install from the ground up, latest packages available.
    • cons: hand built so might not be the latest kernel/etc
  • Raspbian Server Edition
    Starts with the latest Raspbian, but runs a script to strip out all the unneeded packages
    • pros: based on the most common OS, starts with the latest version.
    • cons: assuming the maintainer of the scripts has the right packages to remove.
As excited as I was when I found RedSleeve, I've gotta say it doesn't look like it's getting the level of attention I'd want for a real server OS, where you want to stay current.  Pidora looks like it's aiming more for the desktop market, so that doesn't look like much of an option, either.  So right off the bat my desire for something RedHat derived looks to be unfulfilled.  No big worries, there's really nothing horribly different about the other OS's, and running something Debian based will give me more opportunity to whine about the aptitude tools, which I don't think are quite as good as yum.  Minibian seems really good, but since it's custom built, it might not be as up to date.  I haven't had a chance to try Raspbian Server Edition yet, and it looks promising as a server-oriented minimal install, but what really has my attention is FreeBSD 10.  I'll keep coming back to that because I have a soft spot in my heart (or my head) for running minimal servers on *BSD stuff, harking back to my very first home servers running OpenBSD.  If they get the pkgng issues sorted out, that'll be my choice, but until then, it looks like Raspbian really is the best choice for a little RasPi based home server.