I Would if I Could, ICANN, so I Won’t.

There are two different problems inherent in this story about the new top-level-domain “.kosher”.

Problem one: the OK is a major, reliable, and reputable supervision agency, but it is neither the only nor the largest one. “Which supervision agencies are acceptable” is an excellent question – one which is really (theoretically) best directed at one’s LOR (local Orthodox rabbi), although KosherQuest is a great place to start. A glance over at KosherQuest shows a gazillion supervisions which are widely considered reliable, and then there are another gazillion which aren’t on that list. So it’s a bit concerning that a single agency would establish a worldwide monopoly, at least from an Internet perspective.

Problem two: the new TLDs are stupid from the get-go, and this is a great example of why. Just as .museum is not really used by anyone (hint: what domain do you think the Smithsonian, the Louvre, or the Getty actually use?), this one too is redundant.

(Oh, and don’t give me any static about all of the names listed in the full second-level search of .museum – those are largely redirects to the actual, real domains used by the museums, which live in other TLDs. And why, precisely, is “search” such a problem? I know that I would have an easier time going to my preferred search engine and looking for “Louvre” – the search engine will even correct my spelling and send me to the page matching my browser’s language preference, while the .museum redirect is top-level only.)

Another fight that’s going on is over the .amazon TLD, between the purveyor of pretty much everything and the countries which share a similarly named jungle.

Would we end up with ok.kosher, ou.kosher, star-k.kosher, etc.? Personally, I’d want to buy the second-level domain “porkisnot”, or perhaps “ikeep”. But please, ICANN, reconsider this foolishness.

Complexity lishmah

I have seen several styles of queueing in practice recently.

Trader Joe’s in DC has a long snaking line feeding a whole bunch of registers in a strictly FIFO manner. An employee stands between the registers and directs customers to the next open register.

Safeway has a traditional grocery store model, where there are lots of independent queues, with a few express lines as well.

CVS has almost entirely switched over to a small number of self-checkout kiosks. There is no real concept of queueing: people gather in a bunch and self-organize who’s next. This reminds me of the “lines” I saw in Israel.

Costco has a bunch of independent queues, but has no express lanes (besides, who goes to Costco for fewer than 10 items anyway?)

So these are all over the place – it should be obvious that the snaking FIFO line will be the best strategy (i.e. have the shortest average wait time), while Safeway offers the highest-throughput priority queue (which it achieves by reducing the number of queues available to non-priority customers, thereby both increasing the average wait and widening the speed differential between the priority and regular queues).

The CVS approach is a random lottery: there’s no way to predict how long you’ll wait, and the efficiency loss is a greater-than-linear function of the number of people present (think CSMA/CD in a half-duplex environment).

And of course the Costco approach is the most familiar: its average wait will be a little bit better than Safeway’s, but the range of wait times will be larger than that of the non-priority queues at Safeway.
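To put a number on that intuition, here’s a rough sketch in Python – the Poisson arrivals, exponential service times, and six registers are made-up assumptions, not measurements from any of these stores – comparing a single snaking FIFO line feeding all the registers against independent lines that customers join at random and then stick with:

```python
# Rough queueing sketch, not a model of any actual store.
import random
import statistics


def simulate(independent_lines, num_registers=6, num_customers=20000,
             arrival_rate=1.0, service_rate=0.2, seed=7):
    """Return per-customer waits (time from arrival until service starts)."""
    rng = random.Random(seed)
    waits = []
    free_at = [0.0] * num_registers   # when each register next frees up
    t = 0.0
    for _ in range(num_customers):
        t += rng.expovariate(arrival_rate)        # next arrival
        if independent_lines:
            i = rng.randrange(num_registers)      # pick a line blindly, no jockeying
        else:
            i = free_at.index(min(free_at))       # snaking FIFO: take the next register to open up
        start = max(t, free_at[i])
        free_at[i] = start + rng.expovariate(service_rate)
        waits.append(start - t)
    return waits


for label, independent in [("single FIFO line", False), ("independent lines", True)]:
    w = simulate(independent)
    print(f"{label:17s}  mean wait {statistics.mean(w):6.1f}   "
          f"stdev {statistics.stdev(w):6.1f}   max {max(w):7.1f}")
```

With those made-up numbers the single line wins handily on both the mean and the spread, because a register never sits idle while somebody is stuck behind a slow order two lines over.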

And now I’ve seen something new: the new Whole Foods in Foggy Bottom has a queueing strategy I’ve never seen in a store before. There are a variable number of lanes which are drained in a not-quite-round-robin fashion into a common pool of registers. The draining process causes a fair amount of confusion, as people are sent to registers at varying distances from the queueing lanes, and the confusion increases when multiple lanes are drained at the same time.

My gut reaction is that it’s needlessly complex – it’s esoteric for its own sake, like much modern art and architecture. I prefer simple to complex as a general rule, which is part of why I am skeptical of technocratic social engineering and the like. Gears and other simple machines let you move power from one place to another, but even the best machines lose work to friction.

Anyway, given that the WF approach does not differentiate between the lanes, I’ve had a hard time understanding why they wouldn’t be better off with a strict FIFO line. I still don’t understand it, but my theory is that the WF designers thought that the psychological problem of several equal-looking queues (“I always get the slow line”) was easier to live with than the psychological effect of one long snaking FIFO line. I’ve heard a tremendous number of people complain about the length of the TJ’s line, even though they’ll get through it faster than with any other arrangement, so perhaps WF has a point.

As for me, I’d take FIFO any day – it’s a truly fair approach to queueing and a maximally efficient use of resources, and that appeals to my conservationist (née hippy) side.

Complexity

A thread on NANOG regarding the recent Amazon cloud outage pointed me to this excellent post by John Ciancutti on Netflix’s approach to managing cloud services.

The most profound lesson I draw from Ciancutti’s post is #3 – “The best way to avoid failure is to fail constantly.”

As someone who works on large, complex, highly interdependent systems, I find this absolutely essential – anything which can fail will fail, and will do so in the most unexpected way. The best systems I’ve seen (from a resilience point of view) have a whole lot of engineering work put into separating functions, so that while a single event can be disruptive, recovery is both possible and quick.

A significant website (>$50K/minute) had a 10-minute outage caused by routine maintenance. After some sleuthing, it turned out that a little performance-analysis widget on the site had a hidden dependency on a single-homed server in the “non-critical” portion of their server farm, and the routine maintenance had rendered that server inaccessible.

That website, had they followed the Netflix “chaos monkey” approach, would have discovered this dependency and either made the widget more robust or scrapped it entirely – as it was, a good business case can be made that the value of the data gathered by the widget was far less than the $500K-plus which the outage cost.
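The idea doesn’t need Netflix-scale tooling, either. Here’s a toy sketch of what “fail constantly” looks like – emphatically not Netflix’s actual Chaos Monkey; the dependency names and the render_page stand-in are hypothetical, rigged to mirror the widget story above: keep knocking out one backend at a time in a test environment and flag any supposedly non-critical dependency that takes the page down with it.

```python
import random

# Hypothetical backends a page render touches; "critical" marks the ones the
# page is allowed to hard-fail on.
DEPENDENCIES = {
    "product-catalog": {"critical": True},
    "checkout":        {"critical": True},
    "perf-widget":     {"critical": False},   # "non-critical"... supposedly
    "recommendations": {"critical": False},
}


def render_page(disabled):
    """Stand-in for a real page render: fails if any *required* backend is down.

    In a real system this would make actual backend calls; here the page
    wrongly treats perf-widget as required, mirroring the outage above."""
    hard_requirements = {"product-catalog", "checkout", "perf-widget"}  # <- the hidden bug
    return not (hard_requirements & disabled)


def chaos_round(rng):
    """Disable one random dependency and check the page degrades as designed."""
    victim = rng.choice(sorted(DEPENDENCIES))
    page_ok = render_page({victim})
    should_survive = not DEPENDENCIES[victim]["critical"]
    if page_ok != should_survive:
        print(f"hidden dependency: killing {victim!r} "
              f"{'took the page down' if not page_ok else 'should have taken the page down'}")


rng = random.Random(0)
for _ in range(20):   # run this constantly, not once a quarter
    chaos_round(rng)
```

Run against the real site in a test environment, a loop like that would have surfaced the single-homed widget server long before routine maintenance did.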

Winsome, lose some

So the band has been working on laying down basic tracks for our album. Our setup up until recently was a first-generation MacBook Pro (Core Duo) with an Alesis io26 FireWire interface, running GarageBand (!), and it met our needs pretty well – although the processor’s speed when saving or initializing projects left something to be desired.

Well, that laptop finally died – it had been having a rough time of it (water spill), and after being left plugged into the FireWire interface long enough, the battery just wouldn’t hold a charge anymore; eventually it got to the point that enabling phantom power for the condenser mics was just too much for the thing, and it would crash spectacularly.

So I started using Sarah’s laptop – a more recent-generation MBP (i5) – and we noticed a strange bit of static in the playback, which I assumed was merely an artifact of a crummy headphone cable. It turns out I had forgotten that, years ago when I first got the Alesis, installation took a little doing: you have to install drivers from the Alesis site or a CD, and also install a hardware device monitor (basically a soft control panel for the whole system), and you’re supposed to do this before activating the interface at all. I hadn’t done this on Sarah’s laptop, so it was using the embedded driver over the FireWire cable (labeling the device “Alesis 1394”, which should have been a clue: nothing Mac-like uses the “1394” nomenclature).

At least I’ve now figured this out, so we have only lost two recording sessions, and can hopefully make those up relatively easily. It’s too bad: the take of Don’s song Wanna was really excellent, but the static is quite unpleasant, and I don’t think it’s fixable.

In any case, I had gotten a Mac mini for just this purpose, only to find out that the monitor I was planning to use has an ADC connector rather than something anyone actually uses. The DVI-to-ADC adapter was discontinued by Apple last October and is now ridiculously priced, so a cheap monitor is in my very near future.

They don’t make them like they used to

I am in the process of wiping the drives of two computers which are about to be donated to a high-school computer science class – a first-generation MacBook Pro with a severely damaged keyboard and a dead battery, and a G4 Cube. The MacBook makes a perfectly nice FireWire target, and the “erase” worked exactly the way I expected. The Cube, on the other hand, isn’t getting the FireWire-target treatment, because I’d have to wait, and I don’t like waiting.

So I managed to find the initial installation disk that came with the Cube (!) – which I actually still had because I am a huge pack rat – and it turned out to be OS 9. I had forgotten that we got the Cube during the three months when it was the hot new thing – back in 2000 – and it has since moved through several apartments and houses. It is a really nice piece of engineering: like the Newton, it was ahead of its time. The Mac mini now fills a similar role, and there are lots of tiny, tiny desktops out there, but the way cool air is drawn from the bottom up through the chassis is a thing of beauty.

Also surprising to me is that FireWire 400 and USB 1.1 ports were still quite new when the Cube shipped, but peripherals using those ports still work just fine with modern machines. Good job, Apple!