From comments: the revolution will not be tweeted?

This article originally appeared on Geek Feminism.

jon asked in comments:

I wonder, what would a feminist- and womanist-oriented social network look like?

(We might have readers unfamiliar with the term “womanist”; if so, see Renee Martin’s I’m not a feminist (and there is no but) and Ope Bukola’s meta-discussion following from that.)

I find this question a lot easier to answer in the negative (“what wouldn’t a feminist- and womanist-oriented social network look like?”), and my answers would include things like:

  • packaging women users as a demographic product for sale to advertisers
  • packaging women users as a demographic product for sale to people seeking relationships with women
  • packaging women’s lives and identities as a product for the entertainment of other users

Ditto for replacing women with other marginalised or oppressed users. But I find it harder to answer it in the positive. What do you think?

Viewing attachments when using mutt remotely

Yes, that’s right, I’m still in the dark ages and do not yet use Gmail for my email. Even though it has IMAP and everything. I still use Mutt.

I almost always use Mutt locally, using offlineimap to sync IMAP folders to local maildirs. This means I don’t usually have the problem of being unable to view non-text attachments. However, for the next little while I’ll be using Mutt on a remote connection.

Don Marti has one solution to this, which assumes that you are accessing the server with Mutt on it via SSH (probably true) and can easily create a tunnel to your local machine. That is trivial with a command-line ssh client; you can do it with PuTTY too, but I figured it was just annoying enough that I might not bother. (And I doubt you can do it at all with those web-based SSH clients.)

My alternative assumes instead that the remote machine you run Mutt on also has a webserver. The script just copies the attachment to a web-accessible directory and tells you the URL where you’ll be able to find it. It’s thus a very trivial script (and I doubt very much it’s the only one out there), but perhaps using mine might save you fifteen minutes over coming up with your own, so here it is:

copy-to-dir.sh (in a git repo)

Sample output is along these lines when you try to view an attachment in Mutt:

View attachment SOMEPDF.pdf at http://example.com/~user/SOMEPDF.pdf
Press any key to continue, and delete viewable attachment
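
The script itself boils down to something like this sketch; OUTPUTDIR and VIEWINGDIR are the same variables described in the setup steps below, but the example values and the exact cleanup behaviour are assumptions rather than a copy of the real script:

    #!/bin/sh
    # Sketch only: copy the attachment mutt hands us (as $1) into a
    # web-accessible directory, print the URL, and remove the copy once
    # the user is done with it.

    OUTPUTDIR="$HOME/public_html/attachments"          # directory the webserver serves
    VIEWINGDIR="http://example.com/~user/attachments"  # URL of that directory

    ATTACHMENT="$1"
    BASENAME=$(basename "$ATTACHMENT")

    cp "$ATTACHMENT" "$OUTPUTDIR/$BASENAME"

    echo "View attachment $BASENAME at $VIEWINGDIR/$BASENAME"
    echo "Press any key to continue, and delete viewable attachment"
    read -r dummy

    rm -f "$OUTPUTDIR/$BASENAME"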

In order to use it, you need to:

  1. copy the script to the remote machine where you use mutt;
  2. make it executable;
  3. edit it to set the OUTPUTDIR and VIEWINGDIR variables to something that works for your setup;
  4. set up a custom mailcap file much like the one Don Marti gives, that is, put something like this in your ~/.mutt-mailcap:
     text/*; PATH-TO-SCRIPT/copy-to-dir.sh %s
     application/*; PATH-TO-SCRIPT/copy-to-dir.sh %s
     image/*; PATH-TO-SCRIPT/copy-to-dir.sh %s
     audio/*; PATH-TO-SCRIPT/copy-to-dir.sh %s
  5. set mailcap_path = ~/.mutt-mailcap in your ~/.muttrc file.

Something like this could probably work for Pine and other text-based email clients used remotely too, but I’m not sure how because I don’t use them. And if someone wants to document this in a way that assumes less pre-existing knowledge, go ahead.

Also, making your attachments web-accessible means that they are, well, web-accessible. I’ve set up an HTTP Auth-protected directory served over HTTPS for this; you should think about your own setup too.

Creative Commons License
Viewing attachments when using mutt remotely by Mary Gardiner is licensed under a Creative Commons Attribution 4.0 International License.

Who you speak to and where you are: why it matters

This article originally appeared on Geek Feminism.

Warning: this post discusses intimate partner violence and rape. Please place a trigger warning on links to this post.

If you are currently at risk of violence, here are some links for viewing when you’re on a safer computer: National Network to End Domestic Violence: Internet and Computer Safety [USA], Washington State Coalition Against Domestic Violence: Internet Safety [USA] and Domestic Violence Resource Centre Victoria: Tip Sheet: Technology Safety Planning [Australia].

Cross-posted to Hoyden About Town.

Abusive relationship and spousal rape survivor and blogger “Harriet Jacobs” at Fugitivus is angry and scared today:

I use my private Gmail account to email my boyfriend and my mother.

There’s a BIG drop-off between them and my other “most frequent” contacts.

You know who my third most frequent contact is?

My abusive ex-husband.

Which is why it’s SO EXCITING, Google, that you AUTOMATICALLY allowed all my most frequent contacts access to my Reader, including all the comments I’ve made on Reader items, usually shared with my boyfriend, who I had NO REASON to hide my current location or workplace from, and never did.

My other most frequent contacts? Other friends of [my ex-husband]’s.

Oh, also, people who email my ANONYMOUS blog account, which gets forwarded to my personal account. They are frequent contacts as well. Most of them, they are nice people. Some of them are probably nice but a little unbalanced and scary. A minority of them – but the minority that emails me the most, thus becoming FREQUENT – are psychotic men who think I deserve to be raped because I keep a blog about how I do not deserve to be raped, and this apparently causes the Hulk rage.

There’s lots of other comment today on Google’s Buzz automatically assuming that your frequent email contacts should be your Buzz contacts, and making the connection with them public:

There will quite possibly be more by the time I’ve finished writing this post, let alone by the time you read it. But having to fight this battle on a site-by-site, service-by-service basis is disgusting. For a number of groups of people, including people who are the targets of a violent obsession among others, information about who they are in contact with, where they live and what they’re interested in has life-threatening implications. For a larger number of people it has non-life-threatening but potentially serious implications for their job, for example, or their continuing loving relationship with their family. Sometimes people are in frequent contact with people who have power over them, and/or who hate them. Why aren’t privacy policies centring that possibility, and working out the implications for the rest of us later?

Note: as I hope you anticipate, attempts to victim-blame along the lines of “people who are very vulnerable shouldn’t use technology unless they 100% understand the current and all possible future privacy implications” are not welcome.

Update 13th February: Fugitivus has had a response from Google making it clear that protected items in Reader were not shared despite appearances, and stating some changes that are being made in Reader and Buzz in relation to issues she raised.

Donating our OLPC XO

Way back at linux.conf.au 2008 there was a large OLPC XO giveaway, but with the rider do something wonderful with this, or give it to someone who will. Neither Andrew nor I received one directly, but Matthew Garrett gave his to Andrew essentially on the grounds that he wasn’t going to do anything wonderful with it. (If I have the chronology right, Matthew had a stack of laptops in his possession at the time and did things to them regularly, generally making them sleep on demand.)

In any event, neither Andrew nor I did anything wonderful with the XO: Andrew intended to look at some point at Python or Python application startup times (the Bazaar team have a bunch of tricks in that regard), but two years is a lot of intending.

Still, better late than never. In the spirit of the original giveaway, we’ve handed it over to be taken to New Zealand by someone going to linux.conf.au 2010. It will be donated to the Wellington OLPC testers group, who meet weekly to work on various projects and who are somewhat short on machines.

If you are similarly (morally) bound by the linux.conf.au 2008 giveaway conditions, aren’t doing anything wonderful with your XO, and are going to linux.conf.au 2010 or can get your XO there, you could do likewise. You could drop it off with Tabitha Roder at the education miniconf or at the OLPC stand at Open Day, or otherwise get in touch with her. (You probably want to let her know yours is coming anyway, so she has a sense of whether to expect one or two, or a truckload.)

Other possibilities include getting involved in the Sydney group or checking if they’d have a use for laptop donations. (They meet more regularly than that wiki page implies; they are now meeting at SLUG.) I don’t know what the status of the OLPC library is. The webpage being down is probably not a great sign, but perhaps collaborators would help John out there. You’d at least be doing something meta-wonderful.

Clean up IMAP folders

Per Matt Palmer’s blog entry OfflineIMAP and Deleting Folders, users of any mail sorting recipe that creates new mail folders a lot tend to find that over time they accumulate a lot of mail folders for, eg, email lists they are no longer subscribed to. And most IMAP clients will waste time checking those folders for new mail all the time.

Matt wrote:

Now, of course, someone’s going to point me to a small script that finds all of your local empty folders and deletes them locally then issues an IMAP “delete folder” command on the server. But I had fun working all this out, so it’s not a complete waste.

I haven’t quite done this; I’ve just written a script that detects and deletes empty remote folders. (For me, offlineimap does not have the behaviour of creating new remote folders, so I haven’t bothered cleaning up local folders.)

It’s good: it’s speeding up my mail syncs a whole lot, deleting these old folders I haven’t received mail in for about five years. I’ve got full details and the script available for download (as you’d expect, it’s short): Python script to delete empty IMAP folders.
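
The approach amounts to something like the following sketch using Python’s standard imaplib; this is a simplified illustration rather than the script linked above, and the host, credentials, LIST parsing and folder-skipping rules are assumptions to adapt to your own server:

    import imaplib
    import re

    HOST = "imap.example.com"   # assumption: your IMAP server
    USER = "username"           # assumption: your login
    PASSWORD = "secret"

    # Rough parser for LIST responses like: (\HasNoChildren) "." "lists.oldlist"
    LIST_RE = re.compile(rb'\((?P<flags>[^)]*)\) "(?P<delim>[^"]*)" (?P<name>.*)')

    conn = imaplib.IMAP4_SSL(HOST)
    conn.login(USER, PASSWORD)

    typ, listing = conn.list()
    for line in listing:
        match = LIST_RE.match(line)
        if match is None:
            continue
        if rb"\Noselect" in match.group("flags"):
            continue                      # not a selectable mailbox
        name = match.group("name").decode().strip('"')
        if name.upper() == "INBOX":
            continue                      # never delete the inbox
        typ, status = conn.status('"%s"' % name, "(MESSAGES)")
        if typ != "OK":
            continue
        messages = int(re.search(rb"MESSAGES (\d+)", status[0]).group(1))
        if messages == 0:
            print("Deleting empty folder %s" % name)
            conn.delete('"%s"' % name)

    conn.logout()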

OLPC

This morning at Bruce Schneier’s keynote it was announced that they wanted to give One Laptop Per Child XO laptops to the people at the conference who were going to do something incredibly cool with them. Except… they didn’t have a way of determining who those people were. So, they were given away to conference attendees whose names were chosen at random. The condition is that the recipients should either do something wonderful or pass it on to someone who will.

Did we get one? No. But Matthew Garrett gave us his. And by ‘us’ I mean ‘Andrew’. But still.

Ideas for wonderful things accepted.

My talk at OSDC: the Planet Feed Reader

I gave a thirty minute presentation at the Open Source Developers’ Conference yesterday about the Planet software and the associated communities and conventions, focusing more on the latter since one of my reviewers suggested that the social aspects are more interesting than the code. My slides [PDF format, 2.1MB] are now available for the amusement of the wider public.

Much of the discussion of history was a recap of my Planet Free Software essay and the discussion of Planet conventions was a loose recap of accumulated wisdom, including:

  1. using bloggers’ real names, or at least the names they attach to email (usually real names), in addition to common IRC/IM handles, is useful for connecting faces and blog entries to contributions;
  2. once the convention of using real faces and real names is established, people get upset when the conventions are broken (quoth Luis Villa: I’m not sure who/what this ubuntu-demon is, but ‘random animated head without a name meandering about doing a lot of engineering work to fix a problem that should not exist’ was not what I was looking for when I was looking for information on planet ubuntu); and
  3. life blogging is of interest to an extent (many developers would actually like to feel that they’re friends with each other), but the John Fleck case on Planet GNOME shows that there are limits.

Much of the rest drew on Luis Villa’s essay on blogging in the corporate open source context; since I wasn’t allowed to set assigned reading for the audience, including that content helped me pad the talk out to half an hour.

Mostly it was a fun experiment in doing slides in a format other than the usual six bullet points per slide, six slides per section, six sections per talk; incorporating badly rescaled images in various places; and using Beamer. I was then surprised to end up hosting a Planet BoF (Birds of a Feather) session, discussing Planet from the point of view of someone running one (the editor). Some of the topics that came up were:

  • trying to start communities via Planet sites, rather than enhancing them, by, say, starting an environmental politics Planet;
  • the possibility of introducing some of the newer blog ideas to the Free Software world (like carnivals);
  • allowing a community to edit a Planet, and editorial policies in general;
  • potential problems with aggregating libellous or illegal content (another reason some editors apparently insist on real names);
  • alternative aggregators;
  • banning RSS in favour of Atom;
  • whether it is possible or wise to filter people’s feeds without their consent;
  • moving to the Venus branch of Planet; and
  • making Venus trunk.

I may propose a blogging BoF at linux.conf.au and, if I do so, I’ll even plan some discussion points, which will make it less random.

Logging into the OSDC wireless network

I have a wireless login script for attendees of OSDC who use Ubuntu, Debian, or anything else that can run scripts on connecting to a network and has essentially the same iwconfig output:

 eth1      IEEE 802.11g  ESSID:"Monash-Conference-Net"
           Mode:Managed  Frequency:2.437 GHz  Access Point: 00:13:7F:9D:36:C0

To save some tiny amount of time when connecting to the wireless, stick my osdc-login script in your /etc/network/if-up.d directory or equivalent and give it similar permissions to what’s already in there. You can get the latest version of the script at https://gitlab.com/puzzlement/osdc-2006-monash-wireless-login/raw/master/osdc-login, or through git, with the repository at https://gitlab.com/puzzlement/osdc-2006-monash-wireless-login/tree/master. It’s very small, but feel free to send me improvements (although if using git, please don’t check in a version containing the real username and password).

You need to replace INSERTCONFERENCELOGINHERE with the appropriate username and INSERTCONFERENCEPWHERE with the password. By running the script you will be agreeing to Monash’s terms of service, which are here.
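
For a sense of what such an if-up.d script looks like, here is a minimal sketch. The ESSID check mirrors the iwconfig output above, but the portal URL and form field names are purely illustrative assumptions, not Monash’s actual login details; use the real script for those.

    #!/bin/sh
    # Sketch only: log in to the conference wireless when we associate
    # with the right network. The portal URL and form fields below are
    # assumptions, not the real conference login mechanism.

    ESSID="Monash-Conference-Net"
    USERNAME="INSERTCONFERENCELOGINHERE"
    PASSWORD="INSERTCONFERENCEPWHERE"

    # Do nothing unless we are on the conference network.
    iwconfig 2>/dev/null | grep -q "ESSID:\"$ESSID\"" || exit 0

    # Submit the credentials to the (hypothetical) captive portal.
    wget -q -O /dev/null \
        --post-data "username=$USERNAME&password=$PASSWORD" \
        "https://login.example.edu/portal"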

Syndication, aggregation, and HTTP caching headers

I’ve seen various people in various places lately who were very unhappy about someone requesting their RSS feed every 30 seconds, or minute, or half hour, or whatever, and re-downloading it every time at a cost of megabytes in bandwidth. I’ve also seen people growing unhappy with the Googlebot for re-downloading their entire site every day.

So, a quick heads-up: there is a way for a client to say “hey, I have an old copy of your page, do you have anything newer, or can I use this one?” and for the server to say “hey, I haven’t changed since the last time you viewed me! use the copy you downloaded then!” Total bandwidth cost: about 300 bytes per request. That’s still a bit nasty for an ‘every 30 seconds’ request, but it means you won’t get cranky at the 10 minute people anymore. Introducing Caching in HTTP (1.1)!
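
Concretely, the negotiation looks something like this (host, date and ETag value made up for illustration): the client sends the validators it saved from its last fetch, and the server answers with a body-less 304 if nothing has changed.

    GET /blog/index.rss HTTP/1.1
    Host: example.com
    If-Modified-Since: Sat, 19 Nov 2005 08:00:00 GMT
    If-None-Match: "abc123"

    HTTP/1.1 304 Not Modified
    ETag: "abc123"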

The good news! Google’s client already does the client half of this. Many of the major RSS aggregators do too (but alas, not all; there’s a version of Feed on Feeds that re-downloads my complete feed every half hour or so). And major servers already implement this… for static pages (files on disk).

The bad news! Since dynamic pages are generated on the fly, there’s no way for the server software to tell if they’ve changed. Only the generating scripts (the PHP or Perl or ASP or whatever) have the right knowledge. Dynamic pages need to implement the appropriate headers themselves. And because this is HTTP-level (the level of client and server talking their handshake protocol to each other prior to page transmission) not HTML level (the marked-up content of the page itself), I can’t show you any magical HTML tags to put in your template. The magic has to be added to the scripts by programmers.
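
To make “the magic” concrete, here is a minimal sketch of the server side as a Python WSGI application; the real fix belongs in your blogging software in whatever language it is written in, and generate_feed() and the MD5-based ETag are assumptions for the example:

    import hashlib

    def generate_feed():
        # Placeholder for whatever your blogging software actually generates.
        return b"<rss>...</rss>"

    def app(environ, start_response):
        body = generate_feed()
        etag = '"%s"' % hashlib.md5(body).hexdigest()

        if environ.get("HTTP_IF_NONE_MATCH") == etag:
            # Client already has this exact content: 304, no body resent.
            start_response("304 Not Modified", [("ETag", etag)])
            return [b""]

        start_response("200 OK", [
            ("Content-Type", "application/rss+xml"),
            ("ETag", etag),
        ])
        return [body]

Computing the ETag still costs the CPU of generating the page, but the 304 saves resending the body, which is where the megabytes go.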

End users of blogging tools, here’s the lesson to take away: find out if your blogging software does this. If you have logs that show the return value (200 and 404 are the big ones), check for occurrences of 304 (this code means “not modified”) in your logs. If it’s there, your script is setting the right headers and negotiating with clients correctly. Whenever you see a 304, that was a page transmission saved. If you see 200, 200, 200, 200 … for requests from the same client on a page you know you weren’t changing (counting all template changes), then you don’t have this. Nag your software developers to add it. (If you see it only for particular clients, then unfortunately it’s probably the client’s fault. The Googlebot is a good test, since it has the client side right.) An appropriate bug title would be I don’t think your software sets the HTTP cache validator headers, and explain that the Googlebot keeps hitting unchanged pages and is getting 200 in response each time.
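
If your logs are in the usual Apache combined format, a quick way to eyeball this (the log path and field position are assumptions) is:

    # Count the response codes the Googlebot has received; lots of 200s
    # and no 304s for pages you know aren't changing is the bad sign.
    grep Googlebot /var/log/apache2/access.log | awk '{print $9}' | sort | uniq -c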

RSS aggregator implementers, and double for robot implementers: if you’ve never heard of the If-None-Match and If-Modified-Since headers, then you’re probably hammering any page you repeatedly request. Your users on slow or expensive connections hate you, or would if they knew the nature of your evil. Publishers of popular feeds hate you. Have a read of the appropriate bits of the spec and start actually storing pages you download and not re-downloading them! Triple for images!
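
Here is a minimal sketch of the client side using only Python’s standard library; the in-memory dict cache is an illustrative assumption, since a real aggregator would persist the validators and body to disk:

    import urllib.request
    import urllib.error

    cache = {}   # url -> (etag, last_modified, body)

    def fetch(url):
        request = urllib.request.Request(url)
        if url in cache:
            etag, last_modified, _ = cache[url]
            if etag:
                request.add_header("If-None-Match", etag)
            if last_modified:
                request.add_header("If-Modified-Since", last_modified)
        try:
            with urllib.request.urlopen(request) as response:
                body = response.read()
                cache[url] = (response.headers.get("ETag"),
                              response.headers.get("Last-Modified"),
                              body)
                return body
        except urllib.error.HTTPError as err:
            if err.code == 304:
                # Not modified: reuse the copy downloaded last time.
                return cache[url][2]
            raise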

Weblog and CMS software implementers: if you’ve never heard of the Last-Modified and/or ETag headers, learn about them, and add the ability to generate them to your software.

Creative Commons License
Syndication, aggregation, and HTTP caching headers by Mary Gardiner is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.