Oneliners: making binary flashcards

I'm cramming for my CCNA test, so I need to know things like binary exponents and subnet masks down cold. I'm using Anki to drill and memorize. Here's a quick one-liner to generate the powers of 2, forwards and backwards, as a shuffled list of semicolon-separated cards:

(1..25).map{|n| ["#{2**n}=2^?; #{n}\n", "2^#{n}=?; #{2**n}\n"]}.flatten.sort_by{ rand }.each{|n| print n}
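
Each printed line is one card, with the front and back split on the semicolon for Anki's text import. A few lines of the (shuffled) output look like this:

128=2^?; 7
2^13=?; 8192
2^3=?; 8
16384=2^?; 14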

Round I, FIGHT!

I have a ReadyNAS box with ~600GB in it. It's full, and because of some quirks in the firmware upgrade process I have to start from scratch if I want more than ~1.5TB in a single volume. So I picked up a couple of 750GB hdds and planned to do the obvious copy/format/copy loop with the newest firmware and a fresh disk volume. Problem: the old firmware had some bugginess with network shares. It would occasionally hiccup and freeze the stream. This isn't a problem when using the device normally; the hiccups were small enough to be absorbed by the buffer. But it turns out that rsync loads the network far, far more heavily and causes the mount point to lock up on the client machine. So I wrote a script to watch for an idle network interface (i.e. a stalled transfer) and kill rsync, while rsync itself runs in a loop that remounts the share and retries whenever it dies or fails:

until umount skorgu || umount skorgu -fl
  mount //machine/share /mnt/point -o username=username,password=password -t smbfs &&
  rsync /mnt/point to/path -avP --timeout=10 --inplace
do
  echo retrying
  sleep 10
done

And then this in another terminal (or subshell if you're feeling adventurous):

while ~/watch_for_idle_nic.sh eth1 2 10000 && killall rsync; do
  echo "killing";
  sleep 10;
done
And here's watch_for_idle_nic.sh itself (which could probably be simplified to a one-liner, but I wasn't in the mood for golfing):

#!/bin/bash
# Usage: watch_for_idle_nic.sh <interface> <interval_seconds> <min_bytes>
# Exits once the interface receives no more than <min_bytes> in one interval.
LAST=$(ifconfig $1 | perl -ne 'print $1 if m/RX bytes:(\d+)/')
sleep $2
THIS=$(ifconfig $1 | perl -ne 'print $1 if m/RX bytes:(\d+)/')
DELTA=$(( THIS - LAST ))
echo "$DELTA"
while [[ $DELTA -gt $3 ]]
do
  LAST=$THIS
  sleep $2
  THIS=$(ifconfig $1 | perl -ne 'print $1 if m/RX bytes:(\d+)/')
  DELTA=$(( THIS - LAST ))
  echo "$DELTA"
done

The echos aren't needed; they're just nice to look at.

So I think I need a new pair of pants

I've been following CouchDB's progress on and off for a bit now, and finally shaved enough corners off my pile of square tuits to actually fiddle with it. Set up an instance, got it running, made an ebuild for the current stable, etc. Very cool, and I was about to start coding a blog-type-thing (for here, actually) in Ruby. And then I read that not only was the XML output going to be changed to JSON (far saner), but that the entire query language was going away. Didn't make much sense to code to a dead interface, so I let the code REST (get it?).

Except now the excellent Jan has blown my tiny little mind. In a nutshell, CouchDB now returns JSON. Yawn. Oh, and the query language has been re-coded in javascript. Yawn again. Did I mention that the querying method is a MapReduce? You pass CouchDB a javascript function and it maps and reduces its dataset with that function. Yeah, if you yawn here you're clinically dead.
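
To make that concrete, here's a rough sketch of the idea in Ruby. The database name, the document fields, and the _temp_view endpoint are all guesses at a typical setup rather than anything lifted from the Couch docs, so treat it as illustration:

#!/usr/bin/ruby
require 'net/http'

# The "query" is just a javascript function shipped over HTTP as text.
# The 'blog' database, doc fields, and endpoint are hypothetical.
body = '{"map": "function(doc) { if (doc.type == \"post\") emit(doc.posted_at, doc.title); }"}'

Net::HTTP.start('localhost', 5984) do |http|
  res = http.post('/blog/_temp_view', body, 'Content-Type' => 'application/json')
  puts res.body # one JSON row per emit()
end

Every emit() in the map function comes back as one row in the result, so the "query language" is whatever you can say in a function over documents.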

CouchDB is missing two things now: proper access control and the clustering niftiness that Erlang makes possible. Both are on the roadmap, I understand, so I doubt I'll be waiting long. Once they're in place, the foundation is there for applications written in javascript that use Couch as an integrally-versioned and collision-free (well, sort of) datastore, without involving a server-side scripting language. Hell, you don't even need to involve HTML; just load up an empty HTML doc and modify the DOM to your liking. And maintain state without worrying about colliding with your own changes. Hell, you don't even need a webserver except to serve up the javascript.

I'm sick of javascript being treated as a second-class, also-ran idiot language for making sparkly cursor trails. It's nearly a fully fledged functional language, and with Tamarin/SpiderMonkey coming down the pipe on the client side and Couch handling storage and querying, we can change the way javascript is used. It's not the assembly language of the internet; it's the LISP of the internet.

If, and this is a big if, clustering and mapreducing come together in the obvious (but decidedly nontrivial) way, I really will wet myself.

Step into my office

Things to ask Lenovo about before buying a new laptop:

  • Are RAM and HDD upgrades warranty-voiding?
  • Is the 802.11n option Draft 2.0 compliant?
  • Is the 802.11n option able to operate on 802.11a networks?

Who needs Haskell anyway.

I switched to rtorrent from Azureus a while back; there's no reason for a BitTorrent client to take half a gig of RAM. Unfortunately, that meant I had to find a replacement for the RSS feed reader plugin I had been using. There are a metric ass-ton of these, but they seem to fall into two categories: big Windows GUI apps for the masses, and tiny scripts written by hackers. The first is useless to me, and if I'm going to parse obscure hacker code, I'll damn well write one myself.

So I did.

Because I'm strange, I tried to write it as Haskell in Ruby, i.e. with purely side-effect-free code. I mostly succeeded. I would have liked to get rid of the multiple references to the shows array, but alas I lost interest, so it's not quite a proper one-liner.

It's pretty cool if I do say so myself. It grabs the enclosure URL for matching shows, only grabs any single match once (preferring the version that matches the most optional flags: HDTV, 720p, etc.), works on an arbitrary number of feeds, shows, and flags, and spits out a wget line at the end so you can pipe it to sh as a cron job.

Of course, this assumes you have a BitTorrent client that can monitor a directory for new torrent additions, like rtorrent.
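
For rtorrent, that watch-directory bit is one line in .rtorrent.rc; the path below is just wherever the script drops its .torrent files (mine is the save_location from the code):

# check the directory every 5 seconds and start anything new
schedule = watch_directory,5,5,load_start=/home/skorgu/data/torrents/*.torrent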

Here's the code:

#!/usr/bin/ruby
require 'rubygems'
require 'hpricot'
require 'open-uri'
require 'pp'

feeds = %w( http://tvrss.net/feed/unique/ )
#season=/(\d{1,2})x(\d{1,2})/ #Doesn't do anything yet
shows = [ /top gear/i, /chaser/i, /weeds/i, /top gear/i ].insert(0, nil)
flags = [ /720p/i, /HDTV/ ]
save_location = "/home/skorgu/data/torrents"

feeds.map{|f|
  Hpricot(open(f)).search("item").collect{|i|
    {'title'       => i.at("title").inner_text,
     'link'        => i.at("enclosure")['url'],
     'description' => i.at("description").inner_text }
  }
}.flatten.sort_by{|i|
  flags.select{|f| i['title'] =~ f}.length
}.inject([]){|memo, i|
  memo[shows.index(shows.detect{|s| i['title'] =~ s}) || 0] = i
  memo
}[1..-1].each{|i|
  if i then
    fname = "#{save_location}/#{i['title'].tr(" ", "_").tr("^a-zA-Z0-9-_.", "")}.torrent"
    puts "wget -O \"#{fname}\" #{i['link']} || rm -rf #{fname}" if !File.exist?(fname)
  end
}
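
And the cron entry that ties it together. The path to the script is hypothetical; point it at wherever you save the thing (and mark it executable):

# every half hour: emit wget lines, pipe them to sh
*/30 * * * * /home/skorgu/bin/tvrss.rb | sh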

Who?!

Brandon (ckb) is now following your updates on Twitter. Who? *clicks link* Hey this guy talks a lot about Model M keyboards...like the ones I use at home...that I bought from **c**licky**k**ey**b**oards.com. Waiiitaminute. Cool. A little random but cool. Now of course I imagine someone will begin complaining about the privacy implications of a vendor (presumably) using registration information for—what on earth would you call this? Account association? Sort of. Data mining? Not really, twitter feeds are public. Waiting for someone to say "ZOMG HOW AWESOME MY Model M FROM clickykeyboard.com IS"? Eh. I'm not bothered. My model Ms are pretty awesome, except for the one with a busted spring under the "T".

Yay

OK, while this works as a temporary (and easy) blog setup, I'll be using it solely to document progress on the evil plans that are afoot, namely CouchDB. Not that I'm competent to do more than gawk at the Erlang of it all, but as soon as the next release comes out this blog will be a vastly different beast, I assure you.

Yay for trivial wrappers!

I have a few simplification scripts here at work to grab some data from various sources and occasionally do something with it. As I was contemplating the best way to add some filtering ability to them, it occurred to me that there's no reason at all to embed filtering in every single one when the UNIX way was available. So I thought and I thought and I procrastinated, and eventually I found that someone had done the work for me (I love you, internet): JsonPath. Export data as JSON, run it through this filter for simple stuff (or a proper python/ruby/whatever script for more complicated tasks), and have your action tool accept JSON as its input. The only problem is that it's written in javascript and PHP, and neither of those is really ideal for the command line. Enter Python-spidermonkey. Rough around the edges, but it does what I need it to:

#!/usr/bin/env python2.6
import sys
import cjson
from spidermonkey import Runtime
import re

rt = Runtime()
cx = rt.new_context()
cx.eval_script('r=[];')

#print "Loading jsonpath..."
with file('jsonpath.js') as f:
    cx.eval_script(f.read())

#print "Parsing json from stdin..."
cx.eval_script("i=%s;" % sys.stdin.read())
# for (j in i){ n=i[j]; r.push(%s); }" % (sys.stdin.read(), sys.argv[1]))
r = cx.eval_script("jsonPath(i, '%s');" % sys.argv[1].replace('\'', '"'))
for line in r:
    print line
#Should probably export json here but this works too

Using it is easy:

$ cat example.json | python pyjsonpath.py "$.store..price"
8.95
12.99
8.99
22.99
19.95

It's up at github if you'd like to extend it or add some error checking.
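
For the curious, those five prices line up with the stock bookstore example from the JsonPath page, so example.json is presumably something close to this:

{ "store": {
    "book": [
      { "category": "reference",
        "author": "Nigel Rees",
        "title": "Sayings of the Century",
        "price": 8.95 },
      { "category": "fiction",
        "author": "Evelyn Waugh",
        "title": "Sword of Honour",
        "price": 12.99 },
      { "category": "fiction",
        "author": "Herman Melville",
        "title": "Moby Dick",
        "isbn": "0-553-21311-3",
        "price": 8.99 },
      { "category": "fiction",
        "author": "J. R. R. Tolkien",
        "title": "The Lord of the Rings",
        "isbn": "0-395-19395-8",
        "price": 22.99 }
    ],
    "bicycle": {
      "color": "red",
      "price": 19.95 }
  }
}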