On being X-ish

Now that I have described how I graduated into Generation X, I have a secret to confess: I’m starting to think it might not be entirely wrong.

Let’s stick to cohort effects here, since it’s supposed to be a cohort term. And I should add that this is all fairly trivial stuff: I’m focussing on media, pop culture and technology experiences.

One of the major temptations of identifying as Generation Y had to do with pop culture. My teenage years were just past the wave of slackers and grunge and Seattle. I probably heard Nirvana’s music during Kurt Cobain’s lifetime, but I didn’t know of them as a thing until about a year after he died. I’ve never even seen Reality Bites, but Ethan Hawke and Winona Ryder are both 10 years older than I am, and their movies weren’t about my cohort.

I am, frankly, Spice Girls age: not the thrilled pre-teen girls waving things to be signed, but the teenagers who actually paid for the albums with their own money. (I didn’t, for reference. We were a Garbage family.) Britney Spears was born in the same year as me, and her biggest year career-wise was my first year of university. And obviously, when the term “Generation Y” was coined, the stereotypes of late university/early career certainly fit my friends better than the Generation X tags with managerial aspirations. The return of cool people listening to cheesy pop: Y-ish. So that was where I felt I fell. (In case anyone I knew at high school drops by: I realise I wasn’t cool. But you may have been, and don’t think I didn’t notice you danced to the Spice Girls.)

But then, there are certainly a few small societal boundaries between me and people who were born in 1986. (I have a sister born in 1986, and thinking about the five years between us is often telling.) Starting at a global level: I was reading Tony Judt’s Postwar recently (recommended; I’ll come back to it here at some point), and I was struck because I remember 1989.

To be fair, that’s more important if one lives in Europe, which I never have, but most of my first detailed memories of newsworthy events have to do with the revolutions of 1989 and the 1990 Gulf War. I remember the USSR, again, from the perspective of a young child who was growing up in Australia, but still. I can read the science fiction people smirk about now, the fiction with the USA and USSR facing off in 2150, and remember, a little bit, what that was actually about. This is, well, frankly, more than a little X-ish.

While we’re talking about defining events, I recall that quite a lot of people talked about the children who won’t remember 9/11. (And by children, I now mean 15 year olds, of course.) Obviously this is more important in the USA, perhaps a little like the European children (by which I mean 25 year olds) who don’t remember 1989 in Europe. I obviously remember 2001, and moreover remember the geopolitical situation in the years before it quite vividly too, and that latter is, again, more than a touch X-ish.

Turning to technology, which is fairly defining for me, we’ll start with Douglas Adams:

Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it. Anything invented after you’re thirty-five is against the natural order of things.

Leaving aside the age effect where shortly everything cool will be against the natural order of things, it’s noticeable to me that the Web and email and so on fall in the “can probably get a career in it” bracket for me. Well, obviously not truly (the first version of the SMTP specification, which still more or less describes how email works today, was published in 1982), but my late teenage years were exactly the years when suddenly a lot of Australian consumers were on the ‘net. Hotmail was founded when I was 15 and I got an address there the following year. (icekween@, the address has been gone since 1999 and I’ve never used that handle since, partly because even in 98/99 it was always taken. But, actually, for a 16 year old’s user name I still think that was fairly OK considering some of the alternatives.)

In short, it was all happening in prime “get a career in it” time for me, and not coincidentally I am at the tail end of the huge boom in computer science enrolments and graduates that came to a giant sudden stop about two years after I finished. Frankly, X-ish. My youngest sister and her friends didn’t get excited about how they were going to become IT managers and have luxury yachts as a matter of course. (Well, partly age and partly not being jerks, there.) It’s a lot harder to get the “just a natural part of the way the world works” people excited about it.

Diagnosis: tailing X.

Name fields and UI

The AdaCamp Melbourne application form began with two fields: Your Name and Your Email. Seems fair enough! An unanticipated problem: a few people have entered “Your Given Name” and “Your Surname” instead, presumably trained to do so by umpteen million sites that want data entered that way. This leaves us with no email address for them.

I don’t think the solution is to go with the flow: splitting the field buys into Falsehoods Programmers Believe About Names, and the only thing it would get AdaCamp is more-or-less correct alphabetising of the attendee list. (Only more-or-less correct: not only do given–surname name orderings vary among two-or-more-name cultures, so does the sort key; see, e.g., Wikipedia’s manual of style, which instructs editors to sort Thai people by their given name.) And since we have no need to alphabetise the attendee list anyway, it would buy us nothing.

The best solution, I think, is to perform email address validation, which has its own problems (e.g. many validators check “is there a dot in the domain part?”, which endlessly annoys the lucky people who have an email account at a top-level domain) but gives us what we really need: a way to contact applicants!
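For the curious, a deliberately loose validator of the kind I have in mind might look something like this (a sketch; the regex and names are mine, not AdaCamp’s actual form code):

```python
import re

# A deliberately loose check: one "@" with non-empty, space-free text on
# each side. It accepts addresses at dotless domains (e.g. user@ai),
# which stricter "must contain a dot after the @" validators refuse.
LOOSE_EMAIL = re.compile(r"^[^@\s]+@[^@\s]+$")

def looks_like_email(value: str) -> bool:
    return bool(LOOSE_EMAIL.match(value.strip()))

print(looks_like_email("mary@example.org"))  # True
print(looks_like_email("mary@ai"))           # True: dotless top-level domain
print(looks_like_email("Mary Gardiner"))     # False: a name, not an address
```

The point isn’t to verify deliverability, just to catch people who’ve typed their name into the email field.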

linux.conf.au: program choices

I’m all but booked in for linux.conf.au in Ballarat! (I still need accommodation in Melbourne for AdaCamp, and to book the train to Ballarat.) So, time to share my early picks of the program:

Saturday (in Melbourne):

Monday:

Tuesday:

Wednesday:

Thursday:

Friday:

It’s skewed a little by my interests for the Ada Initiative now; that’s where all the mentoring stuff comes from. And I doubt I will get to all of this, although presumably Valerie and I won’t be whisking people off to private meetings about the Ada Initiative as much. (At LCA 2011, when we were yet to launch it, we did almost nothing else.) It looks like Tuesday is a day to catch my breath before Wednesday. My family have decided to travel home Friday, so sadly Friday won’t be happening for me.

freelish.us: mental outage

It’s not absolutely clear to me that anyone at Geek Feminism has missed the linkspams, of which there hasn’t been one since 18th September. No one’s said anything, anyway.

What happened? freelish.us happened. Or it didn’t.

freelish.us, a bookmarking site using the open source status.net code, launched in April (on April 1, actually; was that a good idea?). By that stage I was looking for an alternative to Delicious for bookmarking due to the new terms of service. I’ve been using flagship status.net site Identi.ca for microblogging for a long time (it cross-posts to Twitter) and while I’m inconsistent, I do like contributing to the commons to some degree, so a Creative Commons attribution bookmarking stream also appealed to me.

But the entire experience produced what I’d call “micro-burnout”. As in, I didn’t stop feeling pleasure or joy in stuff in general as would happen with burnout, but sharing links became a giant pain in the neck. Micro-burnout. Sharing links sucked.

First, there was the month or more on freelish.us where I just couldn’t seem to add bookmarks or import my Delicious backup file for love or money. I’d click “OK” and nothing would appear in my stream. It turned out that this was because I’d never validated my email address, but there was no error message to that effect; in fact, no error message at all. I happened to see an understated warning elsewhere on the site that my address was unvalidated, validated it, and suddenly the site actually worked.

Then there was the bookmarklet. The theory: visit a site, click the bookmarklet, it’s bookmarked! On freelish.us it worked like this:

  1. go to the bookmarklet. This is pretty annoying in the first place, because I have a small laptop screen and bookmarklets require me to leave the bookmark toolbar visible. (I much prefer the Instaright approach, which places a small button in the URL bar, which is otherwise dead space anyway.)
  2. almost inevitably, find that I had been logged out of freelish.us, which must have had the most aggressively timed-out cookies since linux.conf.au’s Zookeepr software (memo to Zookeepr: keep me logged in, please)
  3. log in on the bookmarklet’s pop up
  4. be greeted with a small page saying I’d logged in successfully, but no sign of the entry form to bookmark what I needed reappearing
  5. back back back reload back back retry bookmarklet finally bookmark thing

And then, finally, on September 16, it and other status.net sites were taken down for an upgrade. And now, nearly two months later, the freelish.us home page still reads: “StatusNet cloud sites, including Identi.ca, are under maintenance. See status blog post for details and updates.”

Some facts about that:

  1. it’s not actually true any more: Identi.ca came back up after 24 hours or so
  2. it appears from comments there that any number of status.net sites are still down, and there’s been very little public comment on any of them that I can find. Several people asked specifically about freelish.us.

Also, freelish.us missed a probably one-off opportunity to capitalise on the flight of users horrified by the new Delicious. But that’s not my concern.

All up, for two months the thought of bookmarking sites at all has made me distinctly “meh”, so, no linkspam for GF. This is what the software meh takes from the world.

I eventually decided that it was important to talk about what an annoying experience freelish.us has been, important enough to actually ask them for comment (via their press email contact). Here’s information that, as far as I can tell, status.net has not communicated elsewhere:

Q. What is the status of freelish.us? Is it going to return at some point or is it gone?

Evan Prodromou of status.net replied on the 30th October:

Freelish.us didn’t upgrade very well during the 1.0 process.

We’re moving to a new data centre this week, and I’m going to try to revive it then.

I fully intend to see it operational in early November.

He didn’t directly reply to a second question, which was: Q. In either event, is it possible for users of freelish.us to recover their bookmarks, either for their own use or for import into another site? I take it from the lack of a separate response that the site’s re-appearance will be the way users can recover their bookmarks, and that there is no earlier alternative.

For the sake of the linkspams, I’m giving Pinboard a go. I’ll let you know how I do.

IPv6: encore

In which Mary does a lot of work on a comments policy in order to talk to herself about IPv6. True story.

Anyway, where we last left our heroine, she had found one unpromising (because unanswered) complaint describing her IPv6 problem. She tried updating the router firmware, but the router insisted it was already running the newest available firmware.

Some time later, our heroine found another account of the problem over on Server Fault where it was less likely to be lonely, and our heroine became convinced that she ought to install DD-WRT on her wireless router. Hey, maybe it would have worked, too. But our heroine’s husband likes his Internet to work, and gave her a sidelong look, whereupon our heroine at least deferred bricking her router until the weekend.

However! Our heroine is slightly bored of one of her day jobs, so today she idly searched for updated firmware and updated her D-Link DIR-615 router (C2 hardware edition) from firmware version 3.01 to 3.03WW (WW? I don’t get it either) and now she has a wireless router that does not send rogue IPv6 router advertisements to the network.

The end.

IPv6: finale in the key of D-Link

Background knowledge: this post requires some knowledge of networking, at least to the point of knowing what IPv4 and IPv6 are, and what is meant by subnet notation like “/60” and “/64”.

I believed for a very brief time that I’d beaten IPv6 into shape but soon my husband started complaining that sometimes it worked, sometimes it didn’t, and basically questioning whether it was worth any more late nights. (I would poke things, we would jointly debug them, IPv6 involved us skipping dinner two nights in a row in the end.)

Basically what would happen was that anything we tried to connect to over IPv6, most noticeably Google itself (because they trust Internode’s IPv6 routing enough to have turned on IPv6 access for their customers) would either work or just hang. I vaguely suspected some kind of routing error.

Here’s something to try if you have mysterious intermittent IPv6 dropouts or hangs: watch the output of radvdump closely. What you are looking for is any router advertisements coming from a second source: “rogue RA” was the search term I was using, somewhat in vain.

Unfortunately, if you find such a thing, there are essentially two options (much as there are if someone has put a rogue DHCP server on a network). One is to remove the rogue device from the network, the other is to firewall its announcements away from your clients. Unfortunate in my case, that is, because it emerged that the source of the announcements was our D-Link wireless router (which, per the previous entry, we run as a switch). Removing a wireless switch from our network would have the unacceptable side-effect of re-introducing strings of blue cable to our home, and it’s pretty hard to firewall your switch itself. So in our case, the answer for the present time is to give up on home IPv6.

Overall, although the reason we gave up on IPv6 was not a Linux problem, I have to say that I was really surprised how immature Linux’s tools are at this point. The fundamentals exist: kernel support, DHCPv6 and stateless configuration servers and clients. As an IPv6 client, Linux is doing OK. If you connect a Linux machine to a network that happens to be using IPv6, it’ll likely Just Work. But at the tools and packaging level there are still loads of gaps along the lines of:

  • iptables and ip6tables are entirely separate programs, so you get to have your firewall configuration fun twice! (However, UFW handles this fairly nicely, if you’re in the market for a thin-ish wrapper around iptables.)
  • configuring ppp for IPv6 is like ppp for IPv4 circa 1999 or 2000: things like “oh yeah, for a reason no one knows, you won’t get a default route, so here’s a little script that will bring one up for you” (see Shane Short’s blog entry)
  • radvd is a fairly crucial tool, but there aren’t a lot of example config files for different situations that I could find, and the man page assumes that you know a lot about router advertisements already
  • if you want to use Ubuntu’s supported DHCP server (isc-dhcp-server) for DHCPv6, you need to write it a second init script and config file yourself
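To make the first point concrete, here’s a sketch of the duplication (rule details are illustrative, not taken from our actual firewall):

```shell
# Allowing inbound SSH means writing the same netfilter rule twice,
# once per protocol stack:
iptables  -A INPUT -p tcp --dport 22 -j ACCEPT   # IPv4 only
ip6tables -A INPUT -p tcp --dport 22 -j ACCEPT   # IPv6 only

# UFW, as a wrapper, generates both rule sets from a single rule:
ufw allow 22/tcp
```

Forget the ip6tables half and your service quietly works over IPv4 and fails over IPv6, which is exactly the sort of intermittent behaviour that’s miserable to debug.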

So after all that you might be tempted to use a dedicated router for IPv6 and I’d sympathise except that the D-Link device does it even worse than Linux. Not promising. I can’t see that moving many ADSL users over to IPv6 is going to happen any time soon.

IPv6: prelude in the key of radvd

Background knowledge: this post requires some knowledge of networking, at least to the point of knowing what IPv4 and IPv6 are, and what is meant by subnet notation like “/60” and “/64”.

I’ve just changed ISPs, because I wasn’t much of a fan of my old ISP’s demand that either we enter into a new 12 month contract before 27 November or they’d consider us re-contracted at that date. My new ISP is Internode, Australia’s favourite geek ISP, in part because they offer native IPv6 and it’s even supported by customer service. It took me an entire 24 hours to succumb to the temptation of wrecking my perfectly good home network by attempting to make it IPv4/IPv6 dual stack, partly motivated by Geoff Huston’s “the sky is falling” keynote at linux.conf.au 2011. I like doing my bit to hold up the sky.

I use a Linux machine as our router rather than a consumer router device; that is, my ADSL modem is set to bridge mode and we use our wireless router just as a switch, so neither of them does routing. (Or shouldn’t, but we’ll get to that.) In terms of resources for doing this with Internode, or any other ISP who will advertise your IPv6 routes via DHCPv6, here’s some useful material:

The main problem I had is that for as yet unexplained reasons, while this radvd.conf stanza worked fine when my Linux server ran Ubuntu 11.04 with radvd 1.7, it doesn’t work on Ubuntu 11.10 with radvd 1.8:

prefix ::/64 {
    AdvOnLink on;
    AdvAutonomous on;
    AdvRouterAddr on;
};

radvd 1.8 was advertising this in such a way as to get my Linux client to give this error (in /var/log/syslog):

IPv6 addrconf: prefix with wrong length 60

That is, it seems to have been advertising the entire /60 that Internode routes to each customer rather than a single /64. We ended up having to do something like this:

prefix 2001:db8:aaaa:bbbb::/64 {
    AdvOnLink on;
    AdvAutonomous on;
    AdvRouterAddr on;
};

That is, because Internode’s IPv6 allocations are static, we just manually picked a /64 out of the /60 allocated to us, and advertised that. I’m not clear whether this is a bug, a change in the way radvd works, or a mistake of mine; we never got a chance to find out because of a showstopper, which you’ll see in the next, and at this stage final, post in my adventures in IPv6.
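For anyone wanting to do the same arithmetic, picking a /64 out of a static /60 can be sketched with Python’s ipaddress module (the 2001:db8… prefix is the reserved documentation range, standing in for a real Internode allocation):

```python
import ipaddress

# A /60 allocation contains 2**(64-60) = 16 possible /64 subnets.
# 2001:db8::/32 is the documentation prefix, used here as a stand-in.
allocation = ipaddress.ip_network("2001:db8:aaaa:bbb0::/60")
subnets = list(allocation.subnets(new_prefix=64))

print(len(subnets))  # 16
print(subnets[0])    # 2001:db8:aaaa:bbb0::/64 -- a candidate for radvd.conf
```

Any of the sixteen will do, as long as you advertise the same one consistently.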

GNOME Shell versus Unity

I upgraded my laptop to Ubuntu 11.10 today. I used Metacity+GNOME Panel throughout the previous version of Ubuntu, as Unity crashed annoyingly on my laptop (tending to leave me looking at my background image, which is a cute picture of my son, but even so), so this is my first Ubuntu version with the new shiny.

What’s annoying me right now is that they both have features I really like. I’ve only played around for a few hours so possibly one can be configured to have the good features of the other; these are from the default functionality on 11.10.

Unity: my laptop doesn’t have a lot of screen real estate, so I love the integration of the menu bar of windows into the top panel (called global menu). I like having those 20 pixels or so back!

GNOME Shell: I love the Activities mode in general! The presence of workspace previews that don’t require me to keep holding down the Alt part of the Alt-Tab combo is lovely, and the favourites menu on the left seems easier to edit than Unity’s. On the balance, I’d say I prefer GNOME Shell, but damn, global menu is a killer feature on my smaller screen. I’ll watch the global menu patch closely.

(Meanwhile, while writing this entry I discovered that Firefox’s right-click menu is broken in Unity—it disappears as soon as I move my mouse—which is a rather compelling reason to use GNOME Shell.)

Copyright hell: larrikins and astrologers

This article originally appeared on Hoyden About Town.

People who support a reasonable balance between encouraging creation of artistic works by allowing creators to profit from them, and the interests of wider society in benefiting from the free availability of creative works (or even of facts) aren’t having a good day.

Larrikin vs Australian Music

Skud has covered this over at Save Aussie Music:

Today EMI Australia lost their High Court appeal against Larrikin Music in the Kookaburra/Land Down Under case…

Leaving aside the problems with the copyright system, let’s just take a moment to look at Larrikin, the folk music label that holds the rights to “Kookaburra”. Larrikin was founded in 1974 by Warren Fahey, and sold to Festival Records in 1995. Festival, owned by Murdoch, was shut down and its assets sold to Warner Music Australia in 2005, for a mere $12 million.

Larrikin was home to a number of Australian artists, among them Kev Carmody, Eric Bogle, and Redgum

Kev Carmody, one of Australia’s foremost indigenous musicians, released four albums on Larrikin and Festival between 1988 and 1995, none of which are available on iTunes nor readily available as CDs (based on a search of online retailers). …

Warner bought Larrikin Records’ assets — two decades of Australian music — not because they want to share the music with the public, but to bolster their intellectual property portfolio, in the hope that one day they’ll be able to sue someone for using a riff or a line of lyrics that sounds somewhat like something Redgum or Kev Carmody once wrote. They do this at the expense of Australian music, history, and culture.

Lauredhel covered the case earlier at Hoyden too, focussing on whether the claim of infringement stands up to a legal layperson’s listen test and musical analysis: You better run, you better take cover.

Astrologers versus software creators and users

Have you ever selected your timezone from a list which presents them like this: “Australia/Sydney”, “Europe/London”? Then you’ve used the zoneinfo database.

Timezones are complicated. You can’t work out what timezone someone is in based purely on their longitude, have a look at this map to see why. Timezones are highly dependent on political boundaries. On top of that, daylight savings transitions are all over the map (as it were). Some countries transition in an unpredictable fashion set by their legislature each year. Sometimes a sufficiently large event (such as the Sydney Olympics in 2000) causes a local daylight savings transition to happen earlier or later than that government’s usually predictable algorithm.

Therefore computer programs rely heavily on having a giant lookup table of timezones and daylight saving transitions. Data is needed both for the present, so that your clock can be updated, and for the past, so that the time of events ranging from blog entries to bank transactions can be correctly reported.
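As a tiny illustration of what that lookup table provides, here’s Python’s zoneinfo module (Python 3.9+, which reads the tz database directly; the date is arbitrary):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # consults the Olson/tz database

# One instant, rendered in two of the Area/Location zones mentioned
# above; the offsets and DST rules come straight from the database.
moment = datetime(2011, 10, 2, 16, 0, tzinfo=ZoneInfo("UTC"))
print(moment.astimezone(ZoneInfo("Australia/Sydney")).isoformat())
# 2011-10-03T03:00:00+11:00  (DST began in Sydney that very morning)
print(moment.astimezone(ZoneInfo("Europe/London")).isoformat())
# 2011-10-02T17:00:00+01:00  (still British Summer Time)
```

Without an up-to-date database, both of those offsets are just guesses, which is why cutting off the source of updates matters.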

A great deal of software, including almost all open source software, relies on the freely available database variously called the tz database, the zoneinfo database or the Olson database.

Arthur David Olson (the “Olson” in “Olson database”) announced yesterday:

A civil suit was filed on September 30 in federal court in Boston; I’m a defendant; the case involves the time zone database.

The ftp server at elsie.nci.nih.gov has been shut down.

The mailing list will be shut down after this message.

The basis of the suit is that the zoneinfo database credits The American Atlas as a source of data, and The American Atlas has been purchased by astrology company Astrolabe Inc, who assert that the use of the data is an infringement of their copyright. Whether this is true is apparently highly arguable (in the US it seems to hinge on whether it’s a list of facts, which aren’t copyrightable) but in the meantime the central distribution point of the data is gone. And it could be a long meantime.

Now, people still have copies of the database (if you run Linux you probably do yourself). However, the source of updates has been removed, which means it will be out of date within a few weeks, and the community that created the updates has been fractured. Various people are doing various things, including a defence fund, a fork of the mailing list, and discussions about re-creating or resurrecting the data in other places. All a great waste of many creative people’s time and money, gain to society from Astrolabe’s action yet to be shown.

More information:

Update (Oct 17): ICANN takes over zoneinfo database

On 14th October the Internet Corporation for Assigned Names and Numbers (ICANN), which manages key Internet resources (notably, the global pool of IPv4 and IPv6 addresses) on behalf of the US government, put out a press release (PDF) announcing that they were taking over the zoneinfo database:

The Internet Corporation for Assigned Names and Numbers (ICANN) today took over operation of an Internet Time Zone Database that is used by a number of major computer systems.

ICANN agreed to manage the database after receiving a request from the Internet Engineering Task Force (IETF).

The database contains time zone code and data that computer programs and operating systems such as Unix, Linux, Java, and Oracle rely on to determine the correct time for a given location. Modifications to the database occur frequently throughout the year…

“The Time Zone Database provides an essential service on the Internet and keeping it operational falls within ICANN’s mission of maintaining a stable and dependable Internet,” said Akram Atallah, ICANN’s Chief Operating Officer.

I wonder if ICANN’s not-for-profit status is useful here. Just as Project Gutenberg can make United States public domain texts available globally, even though texts published prior to 1923 are not public domain world-wide, ICANN may present a less tempting target for lawsuits than other possible homes for the zoneinfo database.

Book review: In the Plex

Steven Levy, In the Plex: How Google Thinks, Works, and Shapes Our Lives

This book started off annoying me by being a little too worshipful of Larry Page and Sergey Brin, in my opinion. So clever! So Montessori! These cheeky little geniuses will rock your world! They’re going to take over your brain and you’re going to like it! But it improved early on other histories I’ve read of Google (lest this sound like an unfortunately dull hobby of mine, I mean shorter essays over a period of ten years or so), which tend to focus on a couple of things heavily: the Google Doodles and their approach to raising venture capital. I’ve heard about all I ever want to hear about doodles and Google’s fundraising. Levy doesn’t quite stay away from the latter, but it’s mercifully short at least. Instead he gets into things that are more interesting to me, namely the engineering.

He spends a fair portion of the book getting to grips with the basic design and use-cases of the two key Google products, search and ads, in a way that’s useful to me as someone with a software engineering background, so that was a win. I’m not sure how that would read to people without said background, although it didn’t strike me as very technical. Later it deals with some of Google’s key expansions: the creation of its massive set of data centres, the YouTube acquisition, the attempt to become a major search player in China, book scanning and search, and finally, social.

I’ll certainly give Levy credit for finally explaining to me the wisdom that Google “doesn’t get social”, which I hear everywhere and which no one has ever given me a bite-sized cogent explanation for. (This is a terrible admission from someone who is meant to have some idea about the tech industry, yes? But I’m not really your go-to person for social either. I use it, but I don’t make sweeping claims about it.) Levy’s bite-sized explanation: Google is philosophically committed to the best answers arising from processing huge amounts of data, and is resistant to cases where the best answers arise from polling one’s friends. Whether it’s true I have no idea, but at least it’s truthy.

Levy has created a good history of Google for people especially interested in Google I think, but he largely hasn’t jumped over the bar of making Google into an interesting story for people who don’t have an existing interest in it, in the way that people have done with Enron, for example. There are parts of it that start to get close, particularly the treatment of Google’s expansion into China and its sometime Beijing office. But it’s not quite there. Possibly Levy didn’t have access to enough critical sources, or, if he did, he didn’t use them to their full extent for fear of jeopardising his access to Page, Brin and Eric Schmidt and to the Google campus. (Also, it sounds like Google makes it very hard for any current employee to be an anonymous source.)

Read it if: you are interested in the history of Google, and find them impressive. You don’t need to be a complete fanboy.

Caution for: as noted, not really a book for people seeking a rollicking good story of corporate ups and downs in general; and not really for people looking for really sharp criticism of Google either, although his critical distance certainly increases as the book goes on.