Friday, June 19, 2015

Initial Hammerhead One review

Warning! This review is long. You've been warned.

I finally received my Hammerhead One a week or two ago. This thing's been a long time in the making, starting as a kickstarter (or one of those crowdfunding sites, I'm too lazy to go back and look it up) that I was really excited about. If you're not familiar with a Hammerhead One, it's a bicycle navigation device - here's the manufacturer's own video:




Hammerhead One demo video

It's a really, really cool concept, so I was more than happy to back them.

The process took a while. I knew this from the start, but it certainly seemed to drag on for quite some time. They were good with updates to the process, but I quickly learned that "we think we'll be shipping it by {insert some point in the near future}" statements were not terribly reliable. Eventually, I decided "okay, it'll happen at some point, and it'll be a pleasant surprise when it does". Finally, about a month ago I received an update message that said they'd be shipping out soon. It showed up on my porch one day, and I was pretty stoked!


Initial thoughts


  • It's a clean design
    The final shipped design is awfully close to the initial designs, and is really clean. Non-cycling friends can (and do) make jokes about the new sex toy, but aside from that it's a pretty cool-looking gadget.

  • No documentation at all in the box?
    The box looks pretty good, but one thing I noticed right away is there's no documentation either in the box or on it. If you're going to ship a device with no form of instructions included, you'd better make damn sure it's super intuitive to use and has no gotchas. Turns out, this isn't the case.

    There was, however, a nice thank-you card which lists the names of all the initial supporters. Yep, I'm on there, along with two friends that I know supported it. Cool.

    The only instructions that I received with the unit came in the shipping email:

    "Once you have unboxed your Hammerhead we suggest you connect it to the app and update the Firmware immediately: This video will show you how to update the firmware, and here is an initial turn instruction guide."

    As this was the shipping email, I initially missed it, and as we'll see, that turned out to be not quite enough.

  • Reusing the Garmin mount is a clever approach
    The mount on the Hammerhead uses a standard Garmin bike-computer half-turn mount. It's simple, it works, and it opens up the ability to use a bunch of other mounts.

  • Sparse documentation online
    After some initial setup issues, I hit the "Help" link in the app which leads to their online FAQ. They've bolstered it quite a bit since I got mine, but at the time, it was really sparse. The only other thing I managed to find were the videos which we've already discussed.

  • Setup video says iOS, no separate video for Android
    Although the app looks almost exactly the same on the two platforms, the setup video they linked in the shipping email clearly says "iOS" in the title. So I looked for the Android version - and couldn't find one. Good thing I know Android and iOS well enough to translate what they're trying to do in the video. It's not hugely different (beyond it actually working in iOS - but more on that in the next section), but it's still different.

  • No GPX import?
    Playing around with the app, it looks like a fairly basic navigation app. I get the best results planning rides with a computer and web-based tools rather than an app on the phone - and then importing that route into the phone app when it comes time to follow it. That doesn't appear to be an option here: there's no way to plan in another tool and import the path. I understand that this could lead to issues - if I give it a route that's not on its map, how will it know where to turn, and if I go off-route, how will it figure out how to re-route me? It's probably not a trivial problem, but I sure hope they figure it out.
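For what it's worth, a GPX route is just a small XML file, so the format itself isn't exotic. Here's a minimal sketch (the coordinates and names are made up, not from any real tool) showing what one looks like and how little data is actually in it:

```shell
# A GPX route is plain XML; the route points are <rtept> elements.
# This file is a made-up example for illustration only.
cat > route.gpx <<'EOF'
<gpx version="1.1"><rte><name>Commute</name>
  <rtept lat="40.0215" lon="-75.6380"><name>Start</name></rtept>
  <rtept lat="40.0420" lon="-75.5960"><name>Turn onto 202</name></rtept>
</rte></gpx>
EOF

# Pull out the lat/lon pairs - a real importer would use an XML parser,
# but grep is enough to see the structure.
grep -o 'lat="[^"]*" lon="[^"]*"' route.gpx
```

A proper import feature would still need to snap those points onto the device's own map for turn detection, which is presumably where the hard part lives.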

Android problems

I have two phones - my own Android (first-gen Moto X) and an iPhone 6 from work. This turns out to be a good thing, because about two weeks into having my Hammerhead, I still haven't gotten it working with the Android phone. I've discovered a bunch of fun things and have been seriously frustrated up to this point, but so far their customer support has been pretty good. Here's a rundown of the problems I've discovered and not yet surmounted:


  • You don't pair it with your phone like a normal Bluetooth device.
    If there were instructions, I might have known that before I started using it. Having missed the one paragraph of instructions in the shipping email, I figured "Okay, it's bluetooth. Let's get it paired up and see what this thing can do." So, I went to the phone's Bluetooth setup menu, just like I've done for all the other Bluetooth things that I have, and tried to set it up. This failed with an "invalid PIN" error. I checked the instructions I received for what to do in that situation and oh wait - I didn't get any. Hah.

    Turns out you have to do the pairing in the app. Why? Fucked if I know. I've got theories.

  • If you manage to lock it up, the only way to reset it is to leave it alone and let the battery run out.
    In the process of trying to figure out how to get it to talk to the phone (remember, I missed the one paragraph of instructions, with links to videos for a mobile OS I wasn't using), I managed to lock it up. The device has one button which is ringed by light when it's on, and at some point this went solid greenish-blue and the device stopped responding at all. The phone couldn't see it anymore - it was locked up.

    With no troubleshooting instructions and nothing of use in the FAQ (which was much less populated than it is now and had almost nothing about connectivity issues), I was left to my own troubleshooting steps. I'm a computer guy - I've got decent troubleshooting skills. The first thing I tried was to press and hold the button, as many electronic devices honor that as a "turn off regardless" command. Holding the button down for well over a minute made me think that wasn't the case here. Tapping the button did nothing. There are screws on the back of the unit and I considered opening it up - they're just T-6 Torx, which I have - but the thing is brand new and I decided I didn't really want to do that.

    Ultimately, I left it alone and let the battery die off. Once that happened, I could start trying to use it again - until I locked it up again and had to let it die all over again. Speaking with support, I was told that I could reset the unit by "Plug in the USB cable and hold the main button for 6 seconds. Then remove the USB cable and hold the main button for 3 seconds." This procedure kinda sucks since I need to have a USB cable on hand, and it leaves a bunch of questions unanswered:
    • Do I keep holding the button when I unplug the USB or do I let it go?
    • Does the USB cable have to be plugged into the wall, or would a cable by itself do the trick?
    • Most importantly, why didn't it work with any variation of the procedure that I could think of?

  • Removing the charging port cover isn't intuitive or documented
    The charging port cover is easy to find, at the bottom of the unit on the back. Getting it off is another matter. It looks like it should just slide off away from the bottom of the unit, but trying to do that by hand without forcing it wasn't getting me anywhere. There's a tab on the back of the device that looked like it might need to be depressed to let the cover slide off, so I tried that. Turns out that's exactly what you don't want to do: the tab is what keeps the cover in place, and pressing on it increases the lock, not decreases it. I see the FAQ entry has been updated to state this now - but that wasn't there the other day when I tried it. Also, the FAQ entry had one single photo with an arrow pointing to the charging port. The problem is that the photo looks like a quick shot with a cell phone camera in crappy lighting, so the arrow points to an area you could have found on your own (I did), and the relevant spot on the unit is a featureless blob of black - you really don't get any information from the photo which isn't already obvious. See what I mean?
    Crappy white balance and exposure make this photo useless.

    With a T-shirt providing a dark background so it doesn't throw off the auto exposure and sunlight from the window by my desk, I managed to get this photo:
    Less than a minute of work and a much more useful photo.
    I didn't include the cover because I took that off and just left it off.

    I eventually managed to get the cover off by sticking the point of a screwdriver in the space around the tab that holds it in place and pushing it away in the direction I guessed was right.

  • The charging light is hard to see
    When I first let it die off because it was locked up, and then plugged it in, no lights came on. Considering the thing is covered with LEDs, I was kinda surprised about that. No form of indication if it's charging or done? Fail.

    Well, turns out I was wrong. There's a small green LED on the side of the unit which shows when it's plugged in. I had initially missed it. Let's see why:
    The charging light is on. Can you see it?
    See it now? The viewing angle is quite shallow.
    I also don't think the light changes based on the charge level - it's just an indication that it's plugged in. So, is it fully charged? Well, the app can tell you that, but a simple scheme - flashing while charging, solid when fully charged - would remove the need for the app.

  • The firmware process isn't well documented or intuitive
    The only instructions for upgrading the firmware are on the iOS video. It just shows the procedure happening once and leaves out pretty important details. I managed to assemble the proper procedure from the video, the light sequence demo video, and other information I managed to find online.

  • Lack of information on firmware versions
    Since the only place you can update the firmware is through the phone app, you never have to manually download it, and that's nice. However, there's no way to see what the current released firmware version is - no changelogs, nothing like that. As a computer guy, I'm used to having that information. For the general user it's not a huge problem, but they recently uploaded a video mentioning firmware 1.5, and I'm on 1.3.x - I have to assume 1.5 hasn't been released, because "Update firmware" is greyed out in the app. Or maybe something's broken. I can't tell.

  • The Android app won't actually talk to my phone
    After finally managing to update the firmware on my device, I tried to use it with my Android phone. I can create route instructions, but when I try to ride it, I get the following error:
    "wait for some time." ?? Dafug?
    Despite "waiting for some time" - it never sees my Hammerhead.

  • The Android app crashes repeatedly
    Related to the above point, when I click "Skip" on the "Connecting to Hammerhead" prompt above, the app closes and I get this:
    Yay, crashed!


  • All of this works on my iPhone
    I guess we know where they spent all their development effort and QA testing.

Test drive on my commute to work

So yes, I know that my first test with this thing should be on a bike, not driving in a car, but time constraints combined with curiosity/impatience meant my first test was my driving commute to work. It's actually not an awful test, because I know the roads between home and work pretty well, so I should be able to follow pretty close to the route the app suggests, and it'd be interesting to see how it handles things. The key with anything that gives you directions is to get familiar with it on roads and paths that you _know_, so you learn its quirks - you don't want to discover them while blindly following it somewhere you have no idea where you _should_ be going.

For this test, I had the Hammerhead propped up in front of my speedometer so I could see the lights out of my peripheral vision, and the iPhone was in my phone holder so I could see the route it had planned out for me.

  • It really prefers bike paths to streets
    This isn't too surprising, since it's designed first and foremost for giving directions to road bikers. Still, the roads around my house are quite nice and cyclist friendly. Here's the way it suggested I start my ride:
    It's pretty, but a circuitous 2.8 miles.

    Perfectly valid route that's a full mile shorter.
    While the route it suggested is pretty and almost completely avoids any roads, the way I usually go is perfectly fine for cyclists and takes 1 mile off the ride before I've even gotten to Route 202.

  • It prefers a shared-use path to a marked bike lane.
    Similar to the above, Route 202 has a shared-use path that runs alongside the road, separated by a fence. It's a nice way to go - but 202 also has marked bike lanes on the road. If you choose to ride those (admittedly, few cyclists do, but I have), it confuses the app, which frequently tells you to turn around and head back whenever the shared-use path wanders too far from the road.
    Route 202 has bike lanes and a separate shared-use path. gmaps

    If you don't already know where you're going, the indication for a U-turn while you're heading the right way on a marked bike lane could be confusing as hell.

    If you do already know where you're going, then you don't really need a blinky thing on your handlebars telling you where to go, do you?

  • The routing was pretty good for a cyclist in the area
    Ignoring wanting to use bike paths over roads and insisting on a U-turn when you don't follow it exactly, the routing was pretty good. There are some tricky areas for bikes on my commute, and it managed to avoid those pretty effectively.

  • It tried to direct me over a bridge that has been closed for a month or two
    I use the Waze app for driving. It's a community-supported app that allows you to report problems on the road, including road closures. One of the routes I take to work has a bridge which is currently closed for repairs. I was able to tag the bridge as unavailable in the Waze app so that other folks knew to not go that way.

    Unfortunately, Hammerhead's map doesn't know this, and tried to direct me over the same bridge. When I took my usual detour around the closed bridge, the Hammerhead was diligently instructing me to make a U-turn for well over a half mile before it finally figured out the way I was actually going.

  • The amount of pre-turn warning that you get seems to be based on distance from the turn
    Okay, this is kind of a bullshit observation since I was in a car, going far faster than the app was designed for, but the turn notifications I was getting happened *very* close to the turn. At bike speed, that'd be just fine - but adjusting the amount of warning based on current speed doesn't seem like it'd be terribly difficult. More of an observation than a real issue.

  • Several indicated turns that weren't actual turns.
    Driving down a section of road where I had well over a mile to the next turn, it kept indicating left turns. Sometimes those turns seemed to be nothing more than a bend in the road. Sometimes it indicated a left turn on a straight stretch of road where there was nothing beyond a driveway on the left. If I didn't know where I was going and didn't have the map up, that would have been confusing as HELL. Problem is, not knowing where I'm going and not having a map up is exactly the targeted use case for this device.

    I should note that these were indicated left turns - not the slight turns shown in the video. I did see one slight right on my route, which was accurate.
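Back on the speed-based warning point: it really is just arithmetic - warning distance is speed times however many seconds of lead time you want. A quick sketch (the speeds and the 8-second lead are numbers I picked for illustration, not anything from Hammerhead):

```shell
# warn_distance = speed * lead_time
lead_s=8                     # give the rider 8 seconds of warning (my pick)
for speed_ms in 6 13; do     # ~6 m/s on a bike, ~13 m/s in a slow car
    echo "at ${speed_ms} m/s, warn $(( speed_ms * lead_s )) m before the turn"
done
```

At bike speed that's about 50 m of warning; in a car it roughly doubles, which matches what I saw - a fixed-distance warning that feels fine on a bike shows up way too late in a car.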

Conclusion - so far

I still think the Hammerhead is a great idea and a slick design. It shows real promise. However, I can't help but wonder what the hell they did with 20 months of development time. I can't imagine they offered these to many folks for beta testing... or is that us, the initial backers? I also can't imagine how they managed to go 20 months of development without writing any kind of useful documentation, or taking more than a single crappy cell phone photo when a simple lightbox setup with a good SLR would have been so much more effective. It's just been a really frustrating first experience with the device.

Given some time, I think that it still shows huge promise, and I'm not giving up on it yet, but I certainly don't think it's ready for the big time, and I sure won't be using it alone to figure out how to get somewhere. It needs a bunch of work before I'll be ready to trust it that far.

Wednesday, June 3, 2015

VirtualBox autostart on boot

I run a number of VMs inside VirtualBox on a server - while I've got VMware, I'm more familiar with VirtualBox from working with it for Windows VMs on my mac, as well as development using Vagrant and so forth... so I use VirtualBox for some small test systems. Also, I'm told VMware has specific hardware requirements, and although I've got a stack of enterprise-level servers, they're noisy and generate a surprising amount of heat. In order to avoid having to set up the air conditioning in my computer room, I run my VMs on a decent desktop-class system which is quiet and generates much less heat. Decent trade-off.

After rebooting I discovered my VMs weren't running, so I took a look online to figure out how to do this, and many places pointed to the same blog post:

http://lifeofageekadmin.com/how-to-set-your-virtualbox-vm-to-automatically-startup/

While the procedure worked, it wasn't perfect, so I'm posting my modified version of the procedure here, mostly for my own reference. (run these commands as the user who owns the VBox VMs which will be autostarted.)
  1. Create the file /etc/default/virtualbox with the following contents:

    sudo bash -c 'cat << EOF > /etc/default/virtualbox
    # virtualbox defaults file
    VBOXAUTOSTART_DB=/etc/vbox
    VBOXAUTOSTART_CONFIG=/etc/vbox/autostart.cfg
    EOF'

    *NOTE:
    Don't use vbox.cfg as found in the above link. Apparently some VirtualBox scripts look for that file for other purposes.

  2. Make sure your user is a member of the vboxusers group:

    if ! groups | grep -w vboxusers; then sudo usermod -aG vboxusers $LOGNAME; echo Log out; fi


    If you see "Log out" as the output of the above command, log out and back in so your group membership is updated. If you're already a member, you'll just see the output of the groups command.

  3. Create the /etc/vbox directory and the autostart.cfg file:

    sudo mkdir -p -m 1775 /etc/vbox
    sudo chgrp vboxusers /etc/vbox
    sudo bash -c "cat << EOF > /etc/vbox/autostart.cfg
    # Default policy is to deny starting a VM, the other option is "allow".
    default_policy = deny
    # Create an entry for each user allowed to run autostart
    $LOGNAME = {
      allow = true
    }
    EOF"


    You set the sticky bit (the "1" in "1775") to keep users in the vboxusers group from deleting files other than their own. This is how the /tmp directory works - it's mode 1777, and users can only delete their own files.

  4. Set the autostart property via VBoxManage

    vboxmanage setproperty autostartdbpath /etc/vbox


  5. Enable autostart on the appropriate VM (mine is named Graphite):

    vboxmanage modifyvm Graphite --autostart-enabled=on


  6. Restart the vboxautostart-service:

    systemctl restart vboxautostart-service

  7. Restart your system and confirm that the VMs have started:

    vboxmanage list runningvms
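As a quick aside on the sticky bit from step 3 - it's easy to see in action on a scratch directory (nothing VirtualBox-specific here, just a demo):

```shell
tmp=$(mktemp -d)
chmod 1777 "$tmp"                      # same trick as /tmp
perms=$(ls -ld "$tmp" | cut -c1-10)
echo "$perms"                          # the trailing "t" is the sticky bit
rm -rf "$tmp"
```

With the sticky bit set, anyone in the group can create files, but only a file's owner (or root) can delete it.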

Thursday, April 23, 2015

Logging FreeNAS performance data into Graphite

Update 12/23/2015 - I now have an updated post which supersedes this post.

Update 12/2/2015 - This information is dated, and there's a really good way to handle FreeNAS logging to Graphite with FreeNAS 9.3 that I need to document. I'll update this post with a link once I get that post done. In the meantime, this is just a placeholder.

FreeNAS is a great NAS appliance that I have been known to install just about anywhere I can. One of the things that makes it so cool is the native support for RRD graphs which you can view in the "Reporting" tab. What would be cooler, though, is if it could log its data to the seriously awesome metrics collection software Graphite. I've been working on getting Graphite installed at work, and have always wanted metrics collection at my house (because: nerd) and so once I got a Graphite server up and running, one of the first things I did was modify my FreeNAS system to point to the Graphite server.

Here are the steps I followed on FreeNAS 9.2.1.8.  I'm overdue for FreeNAS 9.3 and once I've done the upgrade, I'll update these instructions as necessary.


  1. Install a Graphite server.  Four little words which sound so easy, but mask the thrill and heartbreak that can come with trying to accomplish this task. I tried several guides and was a little daunted when I saw most of them mentioning how much of a pain in the ass installing Graphite can be, but then I managed to find this nice, simple guide that used the EPEL packages on a CentOS box. Following those instructions, I managed to get two CentOS 6 boxes up and running in pretty short order, and then with some slight modifications, I set up a CentOS 7 graphite server at home.
  2. Make sure port 2003 on your Graphite box is visible to your FreeNAS box. This usually involves opening some firewall rules.
  3. SSH into your FreeNAS box. (If you don't know what this means, you probably never got to this step, as "Install a Graphite server" would have entirely broken your will to live.) You'll need to log in as either root or a user with sudo permission.
  4. Edit the collectd config file:
    1. sudo vi /etc/local/collectd.conf
    2. At the top of the file, change "Hostname" from "localhost" to your hostname for the NAS. Otherwise, your NAS will report to the Graphite host as "localhost", and that's less than useful.

      Hostname "nastard"
      ...
    3. There is a block of lines all starting with "LoadPlugin". At the bottom of this block, add "LoadPlugin write_graphite":

      ...
      LoadPlugin processes
      LoadPlugin rrdtool
      LoadPlugin swap
      LoadPlugin uptime
      LoadPlugin syslog
      LoadPlugin write_graphite

      ...
    4. At the bottom of the file, add the following block, substituting your Graphite server's hostname:

      ...
      <Plugin write_graphite>
        <Node "graphite">
          Host "graphite.example.net"
          Port "2003"
          Protocol "tcp"
          LogSendErrors true
          Prefix "servers."
          Postfix ""
          StoreRates true
          AlwaysAppendDS false
          EscapeCharacter "_"
        </Node>
      </Plugin>
  5. Change to the directory "/var/db/collectd/rrd" - this is where FreeNAS logs the RRD metrics that are visible in the GUI. If we just restart collectd, it's going to start logging under the new hostname (since we changed that above), and that'll break the GUI's RRD graphs. While we'd still have the data in Graphite, we can have our Graphite cake and eat our RRD too by doing the following steps.
  6. Shut down collectd:

    sudo service collectd stop
  7. Move the "localhost" directory (under /var/db/collectd/rrd) to whatever you set the hostname to in the collectd.conf above:

    sudo mv localhost nastard
  8. Symlink the directory back to "localhost":

    sudo ln -s nastard localhost
  9. Restart collectd:

    sudo service collectd start
  10. That's it! At this point, you can reload the FreeNAS GUI and see that you still have your RRD data, but more importantly, if you go to your Graphite GUI, you'll see that you should now be getting metrics.
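If metrics don't show up, a handy thing to know is that carbon's port 2003 speaks a dead-simple plaintext protocol - one metric per line: path, value, epoch timestamp. You can hand-feed it a test metric to prove the path is open (the hostname below is the example one from the config; swap in your own):

```shell
# Build one line of carbon's plaintext protocol: "path value timestamp"
line="test.freenas.ping 1 $(date +%s)"
echo "$line"

# Send it to the carbon port (uncomment once your graphite host is reachable):
# echo "$line" | nc graphite.example.net 2003
```

If the metric appears under "test" in the Graphite tree, the network path is fine and any remaining problem is in the collectd config.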

Protips:

  • Collectd writes data every 10 seconds by default. If you write all your collectd data with the "servers." prefix as I've shown above, you can make sure your whisper files are configured for this interval with the following block in your /etc/carbon/storage-schemas.conf:

    [collectd]
    pattern = ^servers\.
    retentions = 10s:90d,1m:1y

    This will retain your full 10s metrics for 90 days, and 1-minute interval metrics for a year. That config results in each whisper file being 15MB, and with my NAS config (with 6 running jails) I have 220 whisper files for a total disk space of 1.2G. Considering disk space is pretty cheap, you could easily adjust these numbers up to retain more data for longer. You should also read up on the Graphite aggregator, which controls how the data is rolled up when it's saved at the coarser intervals.

    Thanks to Ben K for pointing out that more than one or two aggregations will greatly increase the amount of disk access. Initially I had a four stage aggregation, but that would require a crapload of access happening with each write. Since Graphite is very IO intensive to begin with, that's not a good idea.

Saturday, January 31, 2015

The meaning of the size of directories in long ls listing

One thing folks new to Linux run into is the size associated with a directory in a long listing. Many assume it represents the amount of space taken up by the files in the directory - but it quickly becomes apparent that's not the case. So, to explain the directory size as it shows up in ls output, let's have an example:

24-rocket:~> mkdir derp
25-rocket:~> ls -l derp
total 0
26-rocket:~> ls -ld derp
drwxr-xr-x 2 cmh cmh 4096 Jan 30 17:44 derp

I've created a directory called "derp" and we can see that it's 4096 bytes - 4KB.

27-rocket:~> for x in {1..10000}; do touch derp/$x; done
28-rocket:~> ls derp | wc -l
10000

I now created 10,000 empty files in derp - none of them contain anything.

29-rocket:~> ls -ld derp
drwxr-xr-x 2 cmh cmh 155648 Jan 30 17:44 derp

Notice the size of derp is now larger - because derp contains the info for those files, such as filenames, inodes, etc. Interestingly enough, zero-byte files don't take up any disk space except their entry in the directory itself.

30-rocket:~> rm derp/????
31-rocket:~> ls derp | wc -l
1000

I just removed all of the four-digit files from derp, so that would be 1000-9999. We now have only 1000 files in derp.

32-rocket:~> ls -ld derp
drwxr-xr-x 2 cmh cmh 155648 Jan 30 17:45 derp

However, even though we've gotten rid of most of the files in the directory, it's the same size. Directories don't shrink. (At least not that I know of.)

One way to think of directories is that they're just files - special files that contain info about other files. That's why a zero-byte file takes no space on the disk - it's just an entry in the directory file. Once you put even a single byte into a file, then there's disk space being used.

In order to get the amount of disk space used in a directory, you'll want to use the "du" command. To see the size of a single directory, run this command:

du -hs {directory}

In order to show the size of all subdirectories and files in the current directory, sorted by size, use:

du -ms * | sort -n

Replacing the -h (human-readable output) with -m (output in megabytes) makes all the numbers consistent so things sort properly. Otherwise, 900KB looks like more than 4TB - and that's not quite the case!
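One more option: if your coreutils are new enough, sort has a -h flag that understands the human-readable suffixes, so you can keep du's -h output and still sort correctly. A scratch-directory demo (the file sizes are arbitrary):

```shell
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/small" bs=1024 count=10   2>/dev/null
dd if=/dev/zero of="$tmp/big"   bs=1024 count=2048 2>/dev/null

# sort -h compares "12K" vs "2.0M" correctly, smallest first
order=$(du -hs "$tmp"/* | sort -h | awk -F/ '{print $NF}' | paste -sd' ' -)
echo "$order"
rm -rf "$tmp"
```

In day-to-day use that's just `du -hs * | sort -h` - readable sizes, correct order.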


Thursday, January 15, 2015

Remote sudo with reasonably secure supplying of password

I've recently been exposed to some scripts where the approach to getting info requiring root access on a remote box was to run ssh via an expect script, then run a sudo su -, supply the password, and run commands. No checking of the output of commands was run, the expect script just waited for the very last character of the prompt to shovel in more shit.

It kinda made me feel like this.
Did it work? Sure... but I still didn't think this was a great idea for about as many reasons as I listed steps. Probably most of you would agree. Plus, the output was then everything you'd get from an interactive session, so they had to use a pipeline of greps and awks to get the data they wanted.

It hurt my soul to look at it. Not only was it sloppy and dangerous, it was just inefficient and ugly as shit. Even when the output was a single line of exactly what they wanted, they actually had to add a tag to that line so their grep could find it - and then strip the tag.


So I was curious - what's the best way to supply a password to a remote system when you have ssh access and sudo - but not NOPASSWD sudo?  Also, as a bonus, I want to keep the password out of plain visible text (especially in ps output) as much as possible. I never want my password in a command line, I never want it in a file, and I never want it appearing anywhere where it could possibly be read. (but I'm funny like that)

Here's what I came up with which might be useful for you.

user=cmh
host=myhost.example.com
read -s -p "Enter the password for $user@$host: " MYPASS
sn=$(echo -E "$MYPASS" | \
    ssh $user@$host 'sudo -Sp "" /usr/sbin/dmidecode -s system-serial-number')

First we set the username and remote hostname in vars. Then, the third line is a read command which suppresses echoing the input (-s) and specifies a nice prompt with the -p option, putting the input into a var called "MYPASS".

The last line assigns the output of a command substitution (the "$(...)" ) to a variable "sn". The substituted command echoes the value of MYPASS with escape chars disabled. (-E) It then pipes that password to the ssh which connects to the remote host, running the sudo command with the option to take the password via STDIN (-S) and to nullify the prompt (-p "") and running the command "/usr/sbin/dmidecode -s system-serial-number".

Obviously getting the remote system's serial number is just one thing, but it's a good example.

One sudo, one command run, no interactive shells, and it returns exactly what you want. Easy peasy, clean, and elegant.

Notes/Limitations/Dire warnings:
  • If you had to supply STDIN to the remote command, you might be out of luck, since you're supplying the password to sudo. Interestingly, though - this works:
    echo -e "$MYPASS\nderp" | \
        ssh $user@$host 'sudo -Sp "" /bin/bash -c "cat > /tmp/derpina"'

    My password goes to sudo - and then the remainder of the STDIN, in this case the word "derp" - goes to the bash command, which creates a file /tmp/derpina containing that text. This is starting to border on ugly, though, and if sudo doesn't want a password (your account has NOPASSWD set for the command, for example) that password goes right on through to the command... and that's horribad.
  • Lemme just reiterate what's said above - if sudo doesn't look for a password - even though you told it to use the -S option, it ignores it and passes that password right on through to the remote command. That could be suboptimal.
  • Careful with the quoting. In the example I used single quotes because I wasn't expanding any vars, but if I had to - supplying options, for instance - I'd have to use double quotes, at which point I'd have to change the sudo's "-p" option to use single quotes - or backslash escape the double quotes. Shell quoting isn't tricky if you understand it, but can be a beast if you don't. (learn your shell quoting rules)
  • Obviously the ssh works best if you have public key authentication set up. However, even if you don't, the pipeline doesn't interrupt standard SSH password input. I've tested that this still works if the remote system asks for a password. Yes, you then need to enter the password twice. (so, use public key auth) You might be able to use SSH_ASKPASS to supply the password, but that's another topic altogether.
  • You're storing your password in a shell variable. On the upside, even if you were to export the variable, it wouldn't show up in the /proc/{pid}/env for your shell - but it would for subshells. You'd need root or be the same user to read those anyway.
  • As bad as the script was - I did learn about dmidecode's -s option. Previously I'd been using dmidecode -t 1 and processing the output. This illustrates two things:
    • You can learn something useful from other folks, even if it's cleverly hidden amongst shitty work.
    • Manpages are your friend. Read them, and go back to them often, especially when you're scripting with those commands. Especially for commands you "know" really well.
  • You could skip the echo (and any issues with options) by using a here string:
       
    ssh $user@$remote .... <<< "$MYPASS"
    This would probably avoid potential problems with backslashes or other odd characters.
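The "first line goes to sudo, the rest goes to the command" behavior is easy to simulate locally without any ssh or sudo at all - here, `read` stands in for sudo's -S consuming the first line of STDIN (just a local sketch to show the mechanics):

```shell
# read consumes the first STDIN line (like sudo -S taking the password);
# everything after it reaches the inner command (here, cat)
printf '%s\n' "sekrit" "derp" | { read -r pass; cat; }
# prints: derp
```

If you swap `cat` for something that doesn't read STDIN, the "derp" line simply gets discarded - same as what happens on the remote side.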
Other limitations? Any better ways that I'm missing?

Monday, February 24, 2014

Move large files with a progress bar

Sometimes I want to move around large files, and instead of just sitting there and wondering how it's going, I like to see a progress bar along the lines of scp or rsync with the --progress option.  If you have the "pv" utility installed, this is quite easy.  The following command works with bash, ksh, or zsh:

for file in *; do echo "$file"; pv "$file" > "/path/to/destination/$file" && rm "$file"; done

Formatted for use in a script:

for file in *
do
    echo "$file"
    pv "$file" > "/path/to/destination/$file" && rm "$file"
done
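If you'd rather see one overall progress bar for the whole batch instead of one per file, pv's -s option pairs nicely with a tar pipeline.  A sketch, assuming GNU tar, GNU du, and pv are installed - move_with_progress is just my name for it:

```shell
# copy a whole tree with a single progress bar, then remove the
# source files only if the pipeline succeeded
move_with_progress() {
    src=$1 dst=$2
    total=$(du -sb "$src" | cut -f1)    # total bytes, so pv can show % and ETA
    tar -C "$src" -cf - . | pv -s "$total" | tar -C "$dst" -xf - &&
        rm -rf "${src:?}"/*
}
```

Usage would be something like move_with_progress /big/staging /path/to/destination.  The "${src:?}" guard makes the rm bail out loudly rather than expanding to "/*" if src is ever empty.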

Sunday, February 23, 2014

Triple mirroring on FreeNAS


I just set up a triple mirror on my home NAS:

NAME              STATE     READ WRITE CKSUM
sto               ONLINE       0     0     0
 mirror-0         ONLINE       0     0     0
   gptid/de1e...  ONLINE       0     0     0
   gptid/de63...  ONLINE       0     0     0
   gptid/c8a3...  ONLINE       0     0     0  (resilvering)



Triple mirror!  Three drives serving up the same exact one drive's worth of data, what is this insanity?  Paranoid much?  No, not really, it's step 1 of an upgrade.  I've been over the recommended 80% usage on my primary zpool for a couple weeks now, but with no "day" job, didn't think purchasing new drives was the best idea.  Well, I got some good news the other day, and I promptly celebrated by buying the drives I needed.  They arrived over the weekend, so now it's time to start the upgrade.

My original configuration was (2) 4TB drives in a simple mirror.  Many other folks set up their zpools as a RAIDZ (ZFS equivalent of a RAID-5) or a RAIDZ2 (ZFS version of RAID-6 with two parity disks) in order to get the most out of their storage, but I decided to keep my configuration simple and go with mirroring for a couple reasons:

  1. Performance.  Mirroring generally performs better than parity-based RAID setups as there's no math involved.  Does a home NAS need that much performance?  Probably not, but it's a nice perk.
  2. Ease of upgrades.  This is the real driving concern.  One of the limitations of ZFS is that you can expand, but you can only expand with a vdev similar to what you had before - or you can start completely from scratch.  With a mirror, you have the simplest form of a vdev, with two disks, so each time I want to upgrade (and you know upgrades always happen) I can upgrade two disks at a time.  If I went with a basic 3-drive RAIDZ, I would have to buy 3 more drives to add on another 3-drive RAIDZ to the zpool.  If, like some of my friends, I ponied up for a 5 or 6 drive setup -- my next upgrade would be 5-6 drives at a time, and suddenly I'm looking for a case that can handle that many.  So, sticking with 2 drives in a mirror allows me to add another two drives each time I'm ready for an upgrade.  Yes, mirroring is the most "expensive" in terms of the amount of space that you get for the amount of disks you invest, but let's be honest, at this point, even WD Blacks are really pretty inexpensive for the amount of storage space that you get.

So, what's with the triple mirror?

Well, when I bought my first two drives, I got them at the same time.  It's entirely possible that they are from the same batch - which means that if there was some sort of defect in the batch, one drive failing might mean the second drive will fail soon too.  That's the Achilles' heel of mirroring - if both drives in one mirror fail, you lose data.  RAIDZ2 (or RAID-6) can lose two drives anywhere in the array and be fine, but if the wrong two drives in your mirrored pool fail, then you're sunk.  It's back to backups - and you do have backups, right?

So, with two new drives coming in, I have another pair that might be from the same batch.  What to do?  That's where the triple mirror comes in.
  1. Add one new drive to the system, reboot, partition the drive.
    To keep track of which drive is which, the following commands are useful:
      gpart list ada0
      camcontrol identify ada0
  2. Add that drive to the zpool via the "zpool attach" command, creating a triple mirror.
      zpool attach {pool} {existing disk} {new disk}
  3. Wait for the resilvering (ZFS-speak for rebuilding a mirror, get it?) to complete.
      zpool status {pool}
    For my 4TB drive, I saw an initial prediction of 8 hours - and it wound up taking only about 6.  That's NOT bad, and one of the reasons that mirroring beats RAIDZ* setups.  Another nice perk is that since ZFS combines the filesystem and RAID layers, resilvering a huge disk with only a little data on it only requires rewriting that data.  A conventional RAID controller has no idea what data has been written, so it has to rewrite the entire disk.
  4. Remove one of the original drives from the mirror with zpool detach.
      zpool status {pool}
      zpool detach {pool} {disk}
  5. Blank the ZFS config on that drive
    As it turns out, this step is not necessary.  Once you zpool detach the old drive, it's clear enough that FreeNAS doesn't complain when you add it back in.
  6. Reinstall the old drive that was removed with the second new drive.
    Here's a chance to physically rearrange the drives if desired.  I put the first old/new pair in slots 1 and 2 in my NAS, giving them ada0 and ada1, so the next pair was ada2/3.  This isn't necessary, and since FreeNAS uses GPTID, the pools are unaffected.
  7. Extend the zpool with those two drives in a second mirror.
    I did this using the FreeNAS GUI.
  8. Profit!  Start filling up the now larger zpool.
Note that what I wind up with is a two-mirror zpool in which each mirror pairs one new drive with one old drive.  Therefore, if there is a problem with either batch of drives, it's much less likely to take out both drives of the same mirror.
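For reference, steps 2-4 and 7 above boil down to a short command sequence.  This is a sketch using my pool name "sto"; the gptid device names are truncated as in the status output above, gptid/NEW1 and gptid/NEW2 are placeholders for the second pair, and step 7 can be done from the command line instead of the GUI:

```shell
zpool attach sto gptid/de1e... gptid/c8a3...   # step 2: third drive joins mirror-0
zpool status sto                               # step 3: watch the resilver
zpool detach sto gptid/de63...                 # step 4: drop one original drive
zpool add sto mirror gptid/NEW1 gptid/NEW2     # step 7: extend with a second mirror
```

Be careful with zpool add - unlike attach, it permanently grows the pool with a new vdev, and there's no shrinking it back out afterward.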