Friday, January 22, 2016

New blogging platform

I've been using Blogger for a long time, and it's okay, but it's really, really annoying when I try to post code snippets. Manually formatting the text as Courier and wishing I could put a proper block around it... it's just too much work. I want to just write, not write HTML or install plugins that don't work or anything like that. So, I looked at a bunch of different blogging platforms:

  • Wordpress
    Several folks suggested this, several others suggested against it because their security is... not great. I tried it, and found it to be really annoying, and wound up ragequitting before I even made a post.
  • Codrspace
    This looked really good, handles code blocks really easily, but it still looks kinda early. I couldn't find any table of contents for my posts, and didn't see any way to make comments. Close, but not quite there.
  • Medium
    Brought to you by one of the folks behind Blogger and Twitter. Tried it out, and it looks very social media-y, but it hits all the things I want - simple UI, code blocks, drag-and-drop for images, comments, all the stuff. It's maybe not perfect, but it's perfect enough.
So, it looks like any new posts will be going over here:

Monday, January 4, 2016

Adding code blocks to posts

One of the things I hate about this blogging platform is there's no easy way to do a code block, and support for entering code is piss poor. Tried the instructions on this blog post and it seems to work, although I don't like that I have to dick around with the HTML.

# Take the LED away from its default trigger so we can control it
echo none > /sys/class/leds/led0/trigger
# Turn the LED on for one second, then back off
echo 1 > /sys/class/leds/led0/brightness
sleep 1
echo 0 > /sys/class/leds/led0/brightness

It's a pain in the fucking ass - you have to edit the HTML and add exactly the right "pre" tags around your code - but at least there's something. I'm still looking for a blogging platform which provides:

  • An easy GUI editor
  • Hosting - I don't have to host anything myself
  • The ability to easily handle code
  • The ability to include images without having to copy file locations around
This seems like a pretty trivial list, but I'm not finding anything that really wins me over.

Thursday, December 24, 2015

Getting started with data logging on the Raspberry Pi

I've got a friend who just got a Raspberry Pi and wants to try doing some projects. One of the first things that he wants to do is track temperature and humidity in his house, which is a really good place to start because it's not TOO difficult. Great place to get your feet wet playing with "physical computing".

So, he got the Pi, got an OS on it, got it booted up, and then said "I have no idea what to do with it." So I thought a little, and realized, if the goal is temp logging, there's a really, really easy place to start.

I sent him this:

Open a terminal and copy this into a file called "log-cpu-temp": (do you know vi? if not, use nano)

#!/bin/bash

# Grab the epoch time and a YYMMDD date stamp in a single date call
eval $(date +'now=%s date=%y%m%d')

# Append the reading to today's CSV - note ">>" so every run adds a line
# instead of overwriting the file
echo "cpu.temp,$(< /sys/class/thermal/thermal_zone0/temp),$now" >> \
    $HOME/cputemp-$date.csv


Then, make it executable:

chmod +x log-cpu-temp

Test it by running it:

./log-cpu-temp

It should create a file in your home directory named something like cputemp-151223.csv, containing a line with the current CPU temperature in millidegrees C - the middle field will be something along the lines of "35780", which means 35.780 deg C.

Once it's running, then add it to your crontab:

crontab -e

and add to the bottom:

* * * * * /home/pi/log-cpu-temp

Once you save and exit, it'll log the CPU temperature to the CSV file every minute.
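
If you want to watch it work, tail today's file (the date stamp in the filename comes from the script above):

tail -f $HOME/cputemp-$(date +%y%m%d).csv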

Welcome to data logging!

Wednesday, December 23, 2015

Update on using Graphite with FreeNAS

A while back, I posted on using Graphite with FreeNAS. Well, there have been some changes with the recent versions, and this makes integrating Graphite with FreeNAS even easier, so it's time for an update. This applies to FreeNAS-9.3-STABLE.

FreeNAS collects metrics on itself using collectd. This is a nice program which does nothing but gather metrics, and gather them well. FreeNAS gathers basic metrics on itself - cpu, disk performance, disk space, network interfaces, memory, processes, swap, uptime, and ZFS stats - and logs them to RRD databases which can be accessed via the Reporting tab. However, as nice as that is, I much prefer the Graphite TSDB (time-series database) for storing and displaying metrics.

Previously, I was editing the collectd.conf directly, but since the collectd.conf is dynamically generated, I'd have to re-add the same block of code every time that happened. So I moved my additions into files stored on my zpool, and use a single Include directive at the end of the native collectd.conf to pull those files in. At this point, all I add to the native collectd.conf is this one line:

Include "/mnt/sto/config/collectd/*.conf"

This makes my edits really easy, and allows me to create a script to check for it and fix it if necessary - more on that later.

In the /mnt/sto/config/collectd/ directory, I have several files - graphite.conf, hostname.conf, ntpd.conf, and ping.conf.

The graphite.conf loads and defines the write_graphite plugin:

LoadPlugin write_graphite
<Plugin "write_graphite">
  <Node "graphite">
    Host "graphite.example.net"
    Port "2003"
    Protocol "tcp"
    LogSendErrors true
    Prefix "servers."
    Postfix ""
    StoreRates true
    AlwaysAppendDS false
    EscapeCharacter "_"
  </Node>
</Plugin>

It's worth mentioning that some of the other TSDBs out there accept Graphite's native plain-text format, so this could be used with them just as well. Or, if you had another collectd host, you could use collectd's "network" plugin to send to those.
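
For the collectd-to-collectd case, the sending side only needs the network plugin pointed at the other host - a minimal sketch (the hostname here is a placeholder, and 25826 is the plugin's default port):

LoadPlugin network
<Plugin "network">
  Server "collector.example.net" "25826"
</Plugin>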

The hostname.conf redefines the hostname. The native collectd.conf uses "localhost", and that does no good when logging to a graphite server which is receiving metrics from many hosts, so I force it to the hostname of my FreeNAS system:

Hostname "nas"

In order for this to not break the Reporting tab in FreeNAS (not that I use that anymore with the metrics in Graphite), I first need to move the local RRD databases to my zpool by checking "Reporting Database" under "System Dataset" in the "System" tab.

I then go to the RRD directory, move "localhost" to "nas", and create a "localhost" symlink pointing at "nas":

lrwxr-xr-x   1 root  wheel       3 May 19  2015 localhost -> nas
drwxr-xr-x  83 root  wheel      83 Dec 20 10:23 nas
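
Getting there looks something like this, assuming the RRDs live under /var/db/collectd/rrd (adjust to wherever your system dataset puts them):

cd /var/db/collectd/rrd
mv localhost nas
ln -s nas localhost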

This way, redefining the hostname in collectd causes the RRD data to be written to the "nas" directory, but when the GUI looks for the "localhost" directory, it still finds what it's looking for and displays the metrics properly.

The ntpd.conf enables ntpd logging, which I use to monitor the time offsets on my FreeNAS box on my Icinga2 monitoring host:

LoadPlugin ntpd
<Plugin "ntpd">
        Host "localhost"
        Port 123
        ReverseLookups false
</Plugin>


Finally, ping.conf calls the Exec plugin to echo a value of "1" all the time:

LoadPlugin "exec"
<Plugin "exec">
  Exec "nobody:nobody" "/bin/echo" "PUTVAL nas/collectd/ping N:1"
</Plugin>


I use this on my Icinga2 server to check the health of the collectd data, and I have a dependency on this check for all the other Graphite-based checks. This way, if collectd breaks, I get alerted on collectd being broken - the actual problem - instead of getting a flurry of alerts from everything I check via Graphite, which would only make it harder to find the real issue.

So, I define the Graphite writer, I change the hostname so the metrics show up on the Graphite host with the proper servers.nas.* path, and I add two more groups of metrics to the default configuration. These configuration files are stored on my zpool, so even if my FreeNAS boot drive craps out (which actually happened last week) and I have to reload the OS from scratch, I don't lose these files.

Since I'm only adding one line to the bottom of the collectd.conf file, it becomes very easy to check for my additions, and if necessary, add them. I have a short script which I run via cron: (the "Tasks" tab in the FreeNAS GUI)

#!/bin/bash

# Set the file path and the line I want to add
conf=/etc/local/collectd.conf
inc='Include "/mnt/sto/config/collectd/*.conf"'

# Fail if I'm not running as root
if (( EUID ))
then
  echo "ERROR: Must be run as root. Exiting." >&2
  exit 1
fi

# Check to see if the line is in the config file
if grep -q Include $conf
then
    : All good, exit quietly.
else
    : Missing the include line! Add it!
    echo "$inc" >> $conf
    service collectd restart
    logger -p user.warn -t "collectd" \
         "Added Include line to collectd.conf and restarted."

    echo "Added include to collectd.conf" | \
         mail -s "Collectd fixed on NAS" mymyselfandi@example.com
fi
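
Expressed as a plain crontab line, the half-hourly schedule I set up in the GUI would look something like this (the script path is just an example):

*/30 * * * * /mnt/sto/config/scripts/collectd-check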


If I reboot my FreeNAS system, the collectd.conf gets reverted. That's not a huge problem - at worst I wait 30 minutes for my cron job to run - but in 9.3 I can do even better: I can call the script at boot time as a postinit script from the Init/Shutdown Scripts section of "Tasks".


This way, when I boot the system, it runs the check script, which sees the missing Include line, adds it automatically, and restarts collectd so it resumes logging to my Graphite server.

This setup has proven to be wonderfully reliable, and unless/until native Graphite support is added to FreeNAS, it should keep on working.

Wednesday, November 18, 2015

How to use tip tinner

So a buddy who knows things about how to solder told me I had to get tip tinner. Perfect - I got R&R Lotion tip tinner. (Fun side note: when you get a notification on your phone that your "R&R Lotion..." shipped, it might not be what first comes to mind.) It arrived with no instructions. Looks pretty simple, but hey, I don't know what I'm doing, and I tend not to just guess, especially when I've just shelled out good money for a nice adjustable soldering station. So I searched around and found this, the best and simplest guide to using tip tinner I've seen. It really is pretty easy.

Tuesday, October 20, 2015

Logging output from a DHT22 temp/humidity sensor to Graphite via collectd on the Raspberry Pi

Update: 11/29/2015 - Since I initially wrote this, I've decided that this belongs in the "just because you can, doesn't mean you should" category. I'm finding it much easier to query the sensor and submit the metrics to Graphite directly using the plaintext protocol. You can do it with collectd, but it introduces far too many complications to be worth it.

This is a bit of a fringe case, but if I don't write down what I just learned, I'll totally forget it. I got a DHT22 temperature and humidity sensor, and unlike the 1-wire DS18B20 temperature sensor, there isn't a convenient kernel module so I can't just read from the /sys filesystem. Thankfully, Adafruit has a nice guide to using the DHT22 with the Raspberry Pi, and they've got a GitHub repository with the code needed to query the sensor.

Of course, due to my love of Graphite, I need to immediately get my DHT22 not only working, but logging to Graphite, because METRICS. (funny how that word used to annoy me) I could simply modify the Adafruit code to output in Graphite plaintext format, but since I use collectd for gathering my host-based metrics anyway, let's have it do the work and submit everything to graphite together.

I could modify the python script to output in the collectd format, and call that with the Exec plugin, but since the python code needs to be run as root, I decided to keep it pretty minimal, and write a shell wrapper around it, because I know shell. ^_^

The biggest problem I ran into was getting my data to log even though the script was working, and I discovered this limitation: with the exec data format, the identifier - the path of the metric being collected - has to follow a very specific format: hostname/exec-instance/type-instance. "hostname" is pretty obvious, and is defined by COLLECTD_HOSTNAME (as documented in the Exec plugin docs). "exec-instance" just has to be a unique instance name, and with this being the only exec plugin I'm running, uniqueness is easy. In the last entry, "type-instance", the "type" has to be a valid type as defined in /usr/share/collectd/types.db, and "instance" again is any unique name. Once I changed my metric path identifier to match this standard, my stuff started logging.
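
To make that concrete, here's a minimal sketch of the shape of such a wrapper - not my actual script (that's linked below), and the helper path and instance names are made up:

#!/bin/bash

# Sketch of a collectd Exec wrapper for the DHT22. Assumes a helper script
# that prints "temperature humidity" on a single line. The Exec plugin
# sets COLLECTD_HOSTNAME and COLLECTD_INTERVAL for us.
host=${COLLECTD_HOSTNAME:-localhost}
interval=${COLLECTD_INTERVAL:-60}

while true
do
    # Needs a sudoers entry so the unprivileged exec user can run this
    read -r temp humid < <(sudo python /home/pi/dht.py)
    # Identifier format: hostname/exec-instance/type-instance, where
    # "temperature" and "humidity" are valid types from types.db
    echo "PUTVAL \"$host/exec-dht22/temperature-dht22\" interval=$interval N:$temp"
    echo "PUTVAL \"$host/exec-dht22/humidity-dht22\" interval=$interval N:$humid"
    sleep "$interval"
done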

Here's my modified Adafruit python script to gather the data from the DHT22:
https://github.com/ChrisHeerschap/lakehouse/blob/master/dht.py

Here's the shell script wrapper called by collectd:
https://github.com/ChrisHeerschap/lakehouse/blob/master/dht-collectd

And, here's what it looks like when the data gets into Graphite, referencing the DHT22 against a DS18B20 1-Wire sensor:

[graph: the DHT22 plotted against a DS18B20 1-Wire sensor in Graphite]

Monday, August 31, 2015

Creating an access jail "jump box" on FreeNAS

If you wish to have external access to your network through SSH, it's a very good idea to use a very limited-purpose "jump box" as the only externally accessible system, tightly limiting who can log into it and what they can do when they get there. Here is what I've developed using a jail on a FreeNAS system.

I've stolen some ideas from DrKK's Definitive Guide to Installing OwnCloud in FreeNAS (or FreeBSD).

  1. Start with the latest version of FreeNAS. I'll leave it up to you to figure that part out.
  2. Create a standard jail, choose Advanced mode, make sure the IP is valid, and uncheck "VIMAGE"
  3. Log into the jail via "jls" and "jexec":
    jls
    sudo jexec access csh
  4. Remove all installed packages that aren't the pkg command:
    pkg info | awk '$1 !~ /^pkg-/ {print $1}' | xargs pkg remove -y
  5. Update installed files using the pkg command:
    pkg update
    pkg upgrade -y
    pkg will likely update itself.
  6. Install bash and openssh-portable via the pkg command:
    pkg install -y bash openssh-portable
  7. Move the old /etc/ssh directory to a safe place and create a symlink to /usr/local/etc/ssh:
    mv /etc/ssh /etc/oldssh
    ln -s /usr/local/etc/ssh /etc/ssh
    NOTE: this step is purely for convenience and is not necessary but may avoid confusion since the native ssh files won't be used.
  8. Make sure your /usr/local/etc/ssh/sshd_config contains at least the following:
    Port 22
    AllowGroups user
    AddressFamily inet
    PermitRootLogin no
    PasswordAuthentication no
    PermitEmptyPasswords no
    PermitUserEnvironment yes
  9. Enable the openssh sshd and start it:
    echo openssh_enable=YES >> /etc/rc.conf
    service openssh start
  10. Verify that openssh is listening on port 22:
    sockstat -l4 | grep 22
  11. Create the users' restricted bin directory:
    mkdir -m 555 /home
    mkdir -m 0711 /home/bin
    chown root:wheel /home/bin

    This creates the directory owned by root and without read permission for the users.
  12. You can create symlinks in here for commands that the users will be allowed to run in their restricted shell. I prefer to take this a step farther - since it's only a jump box, its only purpose is to ssh in and then ssh on to another system. So I further restrict things with a shell script wrapper around the ssh command which limits the hosts that the user can log into from the jump box. (There's a sketch of what that wrapper could look like at the end of this list.)

    If you have half a clue, you'll wonder how this prevents them from ssh'ing onward once they reach a host they're allowed on - and the answer is: if they have the permissions on that host, it doesn't. So it's not a fantastic level of security, but I wanted to see if I could do it. You'll also notice that you need to create a file /home/bin/sshauth.cfg, with lines of the form "username ALL" or "username host1 host2 ...", which dictates access.
  13. Symlink in the "logger" command to the /home/bin directory:
    ln -s /usr/bin/logger /home/bin
  14. Create the user group "user" (as called out in the sshd_config above) so the users can log in:
    pw groupadd user
  15. Create the users with each home directory under /home, with the shell /usr/local/bin/rbash, no password based authentication, and the group created in the previous step.
    adduser
  16. Change to the user's home directory and remove all the dot files:
    cd /home/user
    rm .??*
  17. Create the following .bash_profile in the user's home directory:
    # Restrict the PATH to the restricted bin directory
    export PATH=/home/bin
    # Log where the user logged in from
    FROM=${SSH_CLIENT%% *}
    logger -p user.warn -t USER_LOGIN "User $LOGNAME logged in from $FROM"
    # Don't keep shell history
    export HISTFILE=/dev/null
    # Set the xterm title and a minimal prompt
    [[ $TERM == xterm* ]] && echo -ne "\033]0;JAIL-$HOSTNAME\007"
    PS1="\!-$HOSTNAME\$ "
  18. The file permissions should be set, but confirm:
    chmod 644 .bash_profile
    chown root:wheel .bash_profile
  19. Create the .ssh directory and give it to the user:
    mkdir -m 700 .ssh
    chown user:user .ssh
  20. Install the user's authorized_keys file in the .ssh directory, and make sure the permissions are right:
    chown user:user .ssh/authorized_keys
    chmod 600 .ssh/authorized_keys
  21. Your user should be able to login at this point, and do nothing beyond what you've given them access to in the /home/bin directory.
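
As promised in step 12, here's a sketch of what that restricting ssh wrapper could look like. It's a minimal illustration rather than my exact script - the config path, log tag, and ssh path are assumptions:

#!/bin/bash

# Restricted ssh wrapper. Allowed hosts come from sshauth.cfg, which has
# lines of the form "username ALL" or "username host1 host2 ..."
cfg=/home/bin/sshauth.cfg
target=$1

# Pull the list of allowed hosts for the current user
allowed=$(awk -v u="$LOGNAME" '$1 == u { $1 = ""; print }' "$cfg")

if [[ -z $target || -z $allowed ]]
then
    echo "Usage: ssh <host> (and you need an entry in sshauth.cfg)" >&2
    exit 1
fi

for host in $allowed
do
    if [[ $host == ALL || $host == "$target" ]]
    then
        /home/bin/logger -p user.warn -t SSH_WRAP "User $LOGNAME ssh to $target"
        # Adjust the ssh path to match where your jail's ssh client lives
        exec /usr/bin/ssh "$target"
    fi
done

echo "You don't have access to $target." >&2
exit 1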