Archive for the 'python' Category

Talking PyCon UK


I’m always so grateful for the speakers. For without speakers there would be no conference, and without the conference there would be no people.

I remember, a few years ago at a EuroPython in Birmingham, meeting someone in the morning for whom it was their first conference. Towards the end of the first day I spotted them again and they looked very tired. I asked them how many talks they had been to and they replied “all of them”. “Beginner mistake!” I replied, and explained that if you went to all the talks that you could then you would be very tired, your brain would fill up and overflow, and in any case “the corridor track” was where it was at.

This year I think I tried for about 50:50 talks and “corridor track”. If you’re not familiar with the term “corridor track” it means all the talking and conferring that goes on in the corridor, outside the scheduled talks. If you think about it, it’s really the only place you can properly confer with someone, which is the whole point of a conference. But this blog article is about the talks.

I went to the @ntoll and @teknoteacher show, who were cajoling and encouraging the teachers in the education track, being generally enthusiastic about all things Python and all things education, and introducing developers to teachers. In a similar vein I went to (stumbled into would perhaps be more accurate) MissPhilbin‘s “colouring in” class. It was actually a practical hands-on workshop using the Pi Foundation’s new Sense HAT. I was impressed at the mix of skills needed (a bit of design and colouring in of pixel art, a bit of colour theory, a bit of programming, and interaction design). More of that please.

Daniele Procida’s keynote “All I Want Is Power” was very smoothly delivered and mixed light moments with really serious commentary. In an open source world the power is knowledge and doing (the “do”-ocracy, remember?), and it’s lying around for the taking. So we should take it and begin the revolution. Something like that anyway.

I might not have agreed with everything Owen Campbell said in his “Leadership of Technical Teams” talk, but the subject is important. I don’t think programmers as a whole talk enough about people skills, or indeed all the other things we do that aren’t programming. So this was a welcome talk, and certainly had good points to make, and a useful diagram for framing expertise (page 17 of the slides). In the area of leading technical teams, Owen has much more expertise and experience than me, so it doesn’t matter that I disagree, he would ignore me anyway. :)

I finished the day with the Safe and Fiona show, in which they talked about their experiences of making cross-platform (meaning mobile platforms, these days) games, and of finding and building the tools and frameworks to make them. The quest ends with using Kivy and building Myrmidon. It was a talk in two parts, Safe delivering part one and Fiona delivering part two, and I thought that worked quite well.

Poster Session

Another early start on Saturday (I was driving from Sheffield), where after breakfast I watched Simon Sheridan’s talk, “Landing on a comet”. I was a little late, so if Simon started by explaining what his Ptolemy experiment had to do with Python, then I missed it. In any case, no matter about the Python connection, the talk was really interesting. Science, comets, serendipitous mishaps, all good fun. Because I was slightly late, I watched this from the overflow room, which I thought worked rather well. The audio and video feeds from the main room were brought into a side room where you could watch on the projector. I liked it.

Zeth is one of those people who seems to have only a single name. Just Zeth. His talk on JSON, and what to do when you want to add a bit of structure to JSON, was somewhat misleadingly titled “JSON and the daughters of Pelias”. I didn’t get the reference to the ancient Greek Jason, husband of Medea, until Zeth had practically spelled it out for me. Adding type systems to JSON is always good fun.

I wandered out to get a cup of tea, then realised I was too late to rejoin for @helenst‘s talk on using Python mocks. Due to an all too common problem getting the laptop to talk to the main projector, the talk started late, so I did in fact manage to sneak in. Despite having to accelerate the talk to make it 10 minutes shorter, Helen’s talk was well delivered and very practical. I’ve used mocks a bit in the past and experienced many of the same situations that Helen had, so I found a lot to sympathise with.

Just before lunch Gemma Hentsch revealed why she has an unhealthy love of tests. I think all programmers are a little bit obsessive about something, so it’s nice to see this come out in a talk.

Thanks to @morty_uk I found an almost secret staircase that led to the poster session. I think this was new this year and it was most welcome.

After lunch I went to @flubdevork‘s talk on packaging with Conda. Anaconda is pretty popular in scientific circles and uses Conda, so I think this is something that I’m going to be seeing more of. I also, between the two lightning talk sessions, managed to see Tim Golden talk, very rapidly, on his experiences using pgzero and Raspberry Pi to teach a small group of young teenagers.

Ah the lightning talks. At my first conference I had no idea what lightning talks were, now I see them as one of the best parts of the conference. People stand on stage in front of the big hall and talk for 5 minutes. Lightning Talk Man comperes and provides witty German jokes. I think we collectively must be getting better at lightning talks, because they were mostly pretty slick. It was especially exciting to see the kids do their own lightning talks earlier in the afternoon. Children just seem to have an infectious enthusiasm when they get into something, and it is really encouraging to see them getting coding.

In between all this, I did manage to chat to a few people and share in the communion of tea and cake.

classy enumerations


An enumeration is a type consisting of a finite number of distinct members. The members themselves can be tested for equality, but usually their particular values are not important.

Maybe you’re modelling a Sphex wasp and you have a state variable with values NOTNEARHOME, JUSTLEFTHOME. You could represent that with an enumeration.

In C the enum keyword assigns (usually small) integers to constant identifiers. It is problematic, chiefly because the members of the enumeration are really just integers. After enum { Foo };, code like Foo / 5 is valid (note: valid, not sensible).

In Python you could do essentially the same thing:


if self.state == NOTNEARHOME:
    if 'spider' in self.inventory:
        ...  # head towards home
    else:
        ...  # look for juicy spiders

You do see this style in the wild (ast.PyCF_ONLY_AST, for example; see Note 1), but it has the same problems as enum in C. The values are just integers (so, for example, print self.state will print 0, or 1).

You could use strings (like decimal.ROUND_HALF_EVEN):

NOTNEARHOME = 'notnearhome'
# and so on...

That’s better, because now I might have a clue if I print out a value and it’s 'notnearhome', but it’s only a little bit better, because you still might accidentally use the value inappropriately (opt = decimal.ROUND_HALF_EVEN; opt += 'foo').

I have a proposal:

Use a class!

class NOTNEARHOME: pass         # Note 2
class JUSTLEFTHOME: pass

Let’s call this classy enumerations.

Classy enumerations have the advantage that we don’t need to manually assign numbers or strings. Values like Mood.PUZZLED and Mood.CONFUSED (which are actually classes) will be unique, so can be tested using == or is correctly.
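A minimal sketch of how this looks in use (the Mood names are invented for illustration, and this is modern Python 3; the old-style classes in the original post behave the same way for this purpose):

```python
class Mood:
    """A classy enumeration: each member is itself a class."""
    class PUZZLED: pass
    class CONFUSED: pass

state = Mood.PUZZLED
assert state is Mood.PUZZLED          # identity works
assert state == Mood.PUZZLED          # so does equality
assert state is not Mood.CONFUSED     # members are distinct
```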

With classy enumerations we get an error if we accidentally use them in an expression:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'classobj' and 'int'

And to wrap up (this was legal in Python 2, where True and False were not yet keywords):

class True: pass
class False: pass

This article was inspired by looking at some of Peter Waller‘s code; he seems to have invented the idea of using classes for enumerations.

Note 1

Yes this value matches a value in the C header file. Maybe that has some merit, but it doesn’t make for a very Pythonic interface.

Note 2

The body of a class definition is just a block of code. When that body is just a simple statement, it can share the line with the class declaration. Hence, class NOTNEARHOME: pass is a compact complete class definition. If you’re in a mood for documentation, replace “pass” with a docstring.

Explaining p += q in Python


If you’re a Python programmer you should know about the augmented assignment statements:

i += 1

This adds one to i. There is a whole host of these augmented operators (-=, *=, /=, %= etc).

Aside: I used to call these assignment operators which is the C terminology, but in Python assignment is a statement, not an expression (yay!): you can’t go while i -= 1 (and this is a Good Thing).

An augmented assignment like i += 1 is often described as being the same as i = i + 1, except that i can be a complicated expression and it will not be evaluated twice.
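One way to see the single evaluation (the idx helper below is invented for illustration; it records how many times it gets called):

```python
calls = []

def idx():
    """Record each call, then return index 0."""
    calls.append(1)
    return 0

xs = [10]
xs[idx()] += 1            # the target expression xs[idx()] is evaluated once
assert xs == [11]
assert len(calls) == 1    # idx was called exactly once
```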

As Julian Todd pointed out to me (a couple of years ago now), this is not quite right when it comes to lists.

Recall that when p and q are lists, p + q is a fresh list that is neither p nor q:

>>> p = [1]
>>> q = [2]
>>> r = p + q
>>> r
[1, 2]
>>> r is p
False
>>> r is q
False

So p += q should be the same as p = p + q, which creates a fresh list and assigns a reference to it to p, right?

No. It’s a little bit tricky to convince yourself of this fact; you have to keep a second reference to the original p (called op below):

>>> p = [1]
>>> op = p
>>> p += [2]
>>> p
[1, 2]
>>> op
[1, 2]
>>> p is op
True

Here it is in pictures:

Because of this, it’s slightly harder to explain how the += assignment statement behaves. For numbers we can explain it by breaking it down into a + operator and an assignment, but for lists this explanation fails because it doesn’t explain how p (in our p += q example) retains its identity (the curious will have already found out that += is implemented by calling the __iadd__ method of p).

What about tuples?

When p and q are tuples the behaviour of += is more like numbers than lists. A fresh tuple is created. It has to be, since you can’t mutate a tuple.
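Roughly speaking (a sketch of the mechanism, not the exact bytecode), p += q does p = p.__iadd__(q) when the left side defines __iadd__, and falls back to p = p + q when it doesn’t:

```python
p = [1]
op = p
p = p.__iadd__([2])   # what p += [2] does, roughly
assert p is op        # list.__iadd__ mutates in place and returns self
assert op == [1, 2]

t = (1,)
ot = t
t += (2,)             # tuples have no __iadd__, so this is t = t + (2,)
assert t == (1, 2)
assert t is not ot    # a fresh tuple
```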

This kind of issue, the difference between creating a fresh object and mutating an existing one, lies at the heart of understanding the P languages (Perl, Python, PHP, Ruby, JavaScript).

The keen may wish to fill in this table:

p      q      p + q        p += q
list   list   fresh list   p mutated
tuple  tuple  fresh tuple  fresh tuple
list   tuple  ?            ?
tuple  list   ?            ?

Python dictionary: iterating and adding


On one of my earlier articles about Python dictionaries Jay asks (I paraphrase): What if you want to add items to a dictionary whilst iterating? And what if you want those items to appear in the iteration? Python’s standard dictionary iterators won’t help you.

Here’s a plan:

  1. keep a set, visited, of keys that you have already visited;
  2. get the set of keys from the dictionary and subtract visited, leaving you with a set of keys not yet visited, todo;
  3. visit each of the keys in todo;
  4. now we have visited each of the keys in todo, add those keys to visited;
  5. during the iteration the set of keys in the dictionary may have changed, so repeat from (2), stopping when todo is empty.

def iterall(adict):
    """Iterate over all the items in a dictionary,
    including ones which are added during the iteration.
    """
    visited = set()
    while True:
        todo = set(adict.keys()) - visited
        if not todo:
            break
        for k in todo:
            if k in adict:
                yield k, adict[k]
        visited |= todo

Here it is in use:

>>> d=dict(a=1,b=2)
>>> for k,v in iterall(d):
...     print k,v
...     d['c'+k[0]] = 'new'
a 1
b 2
cb new
ca new
cc new

Python’s set datatype makes this easy to program and it has reasonable performance.

Obligatory Python whinge: The function implemented above, iterall, is effectively a new method that I’ve implemented for the dict class. But I can’t add a new method to a class that someone else implemented. Unlike, say, Common Lisp, Dylan, Smalltalk, and Objective-C, where I can.
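One partial workaround (a sketch; it only helps with dictionaries you create yourself, so it doesn’t really answer the whinge) is to subclass dict and carry iterall along as a method:

```python
class IterDict(dict):
    """A dict subclass carrying iterall as a method."""
    def iterall(self):
        visited = set()
        while True:
            todo = set(self.keys()) - visited
            if not todo:
                break
            for k in todo:
                if k in self:
                    yield k, self[k]
            visited |= todo

d = IterDict(a=1)
seen = []
for k, v in d.iterall():
    seen.append(k)
    if k == 'a':
        d['b'] = 2          # added mid-iteration, still gets visited
assert sorted(seen) == ['a', 'b']
```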

Python: a string indexing trick


Let’s say you want to extract all the lines of a file where the character at index 5 is not a space:

with open('') as f:
  for line in f:
    if line[5] != ' ':
      print line[:-1]

Problem: short lines. line[5] might not be a valid index. Exception.

You might consider line[5:6]. Normally this is the same as line[5], but when line is short then instead of raising an Exception it evaluates to the empty string, "".

In this case it will result in short lines being selected and printed. If the test was if line[5:6] in string.letters then it will result in short lines being rejected. But that changes the semantics of the test for the lines that are long enough. If I had written if ' '.startswith(line[5:6]) then I keep the same test semantics for long lines and I reject short lines. It’s weird and not clear though.

Sometimes this trick may be appropriate, and you may be able to tweak your test to accommodate it. Don’t let the clarity slide. There is a great deal of merit to: if len(line) > 5 and line[5] != ' '.
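The variants can be compared side by side on a few made-up lines (the data here is invented for illustration):

```python
lines = ["123456789", "12345 678", "abc"]   # hypothetical input lines

# line[5] blows up on the short line:
try:
    [line for line in lines if line[5] != ' ']
    raised = False
except IndexError:
    raised = True
assert raised

# line[5:6] never raises, but selects the short line ('' != ' '):
assert [l for l in lines if l[5:6] != ' '] == ["123456789", "abc"]

# ' '.startswith(l[5:6]) keeps the long-line semantics and rejects short lines:
assert [l for l in lines if not ' '.startswith(l[5:6])] == ["123456789"]

# The explicit length test says what it means:
assert [l for l in lines if len(l) > 5 and l[5] != ' '] == ["123456789"]
```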

Python: Getting FASTA with itertools.groupby


I’d like to introduce you to my new best friend: itertools.groupby.

Let’s say you want to do some processing of the genes in Danio rerio (zebrafish). Here’s the Ensembl project’s FASTA file containing all the peptide sequences:

(After I’ve decompressed it) the file looks like this:

>ENSDARP00000100686 pep:known scaffold:Zv8:Zv8_NA5235:758:1999:1 gene:ENSDARG00000076631 transcript:ENSDART00000111668
>ENSDARP00000094838 pep:known scaffold:Zv8:Zv8_NA8956:1000:1382:1 gene:ENSDARG00000070696 transcript:ENSDART00000104063
>ENSDARP00000102657 pep:known scaffold:Zv8:Zv8_NA8774:1123:1377:-1 gene:ENSDARG00000078974 transcript:ENSDART00000114102

It’s a list of peptide sequences, with each sequence consisting of a header line beginning with ‘>’:

>ENSDARP00000100686 pep:known

The header contains identification information (and is probably too long for my blog); following the header is a number of lines of peptide data (using IUPAC/IUBMB amino acid abbreviations):


The job of itertools.groupby(iterable, key=fn) is to group together the items in iterable that have the same key. For example if you have a sequence of e-mail addresses, and you use a key function that extracts the domain part of the e-mail address (def domainkey(m): return m.partition('@')[-1]) then each group that you get out of itertools.groupby will have e-mails that share the domain part after the ‘@’.
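For instance (addresses invented for illustration), sorting by the key first so that each domain forms a single group:

```python
import itertools

emails = ["ann@example.com", "bob@other.org", "cat@example.com"]

def domainkey(m):
    """Return the part after the '@'."""
    return m.partition('@')[-1]

emails.sort(key=domainkey)
groups = {d: list(g) for d, g in itertools.groupby(emails, key=domainkey)}
assert groups == {
    'example.com': ["ann@example.com", "cat@example.com"],
    'other.org': ["bob@other.org"],
}
```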

It is usual for the data sequence passed to itertools.groupby to be sorted, but it doesn’t have to be, and itertools.groupby won’t sort it. This is useful. For example, returning to our FASTA file, if we have a key function that simply identifies a line as a header or not:

def isheader(line):
    return line[0] == '>'

Then when we use this in a call to itertools.groupby the groups will simply alternate between a group of header lines and a group of non-header lines.

(Note that all of the “header” groups are just one line long, since a sequence has only one header line)

Since the groups just alternate header, non-header, the number of groups is twice the number of sequences:

import itertools
with open('Danio_rerio.Zv8.56.pep.all.fa') as f:
  print len(list(itertools.groupby(f, key=isheader)))//2

The above code tells me the file has 28,630 peptide sequences. Of course, using len and list is a bit naughty; len doesn’t work on iterators, so I had to convert it to a list, which is not good (on memory usage). Could’ve used sum, a generator expression, and a bit of Iverson’s Convention: sum(g for g,_ in itertools.groupby(open('Danio_rerio.Zv8.56.pep.all.fa'), key=isheader)).
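On a tiny made-up fragment (the identifiers and sequences are invented), the sum trick looks like this:

```python
import io
import itertools

def isheader(line):
    return line[0] == '>'

fake = io.StringIO(">A x\nMK\nVL\n>B y\nGG\n")   # stand-in for the real file
# keys alternate True, False, True, False; True counts as 1, False as 0
nseq = sum(g for g, _ in itertools.groupby(fake, key=isheader))
assert nseq == 2
```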

The useful thing about itertools.groupby is that it lets you deal with the header and the peptide sequence as a single group, without having to worry about detecting the start of the next sequence or handling end-of-file correctly.

Say you wanted to make a dictionary that mapped from Ensembl identifiers to the peptide sequence itself:

def aspairs(file):
  for header,group in itertools.groupby(file, isheader):
    if header:
      line = next(group)
      ensembl_id = line[1:].split()[0]
    else:
      sequence = ''.join(line.strip() for line in group)
      yield ensembl_id, sequence

with open('Danio_rerio.Zv8.56.pep.all.fa') as f:
  d = dict(aspairs(f))

Notice how ensembl_id is set when a header line is read, but only used on the next spin through the loop when the sequence group is processed (in the else).
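A self-contained check of aspairs on an invented two-sequence fragment:

```python
import io
import itertools

def isheader(line):
    return line[0] == '>'

def aspairs(file):
    for header, group in itertools.groupby(file, isheader):
        if header:
            ensembl_id = next(group)[1:].split()[0]
        else:
            yield ensembl_id, ''.join(line.strip() for line in group)

fake = io.StringIO(">ID1 pep:known\nMKV\nLLT\n>ID2 pep:novel\nGGA\n")
d = dict(aspairs(fake))
assert d == {'ID1': 'MKVLLT', 'ID2': 'GGA'}
```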

Filtering is easy too. Well, for one thing you can write a filter/comprehension on the output of aspairs, but say you didn’t want to do that, and you wanted to filter in the loop itself. You need another variable, set when the header is processed, that says whether we want to process (and yield) the sequence. Say we’re interested in only the novel genes (the second word of the header is “pep:novel”):

def aspairsnovel(file):
  for header,group in itertools.groupby(file, isheader):
    if header:
      word = next(group)[1:].split()
      ensembl_id = word[0]
      isnovel = word[1] == 'pep:novel'
    elif isnovel:
      sequence = ''.join(line.strip() for line in group)
      yield ensembl_id, sequence

Why do I mention using itertools.groupby at all? Because if you try processing a FASTA file using a for loop it gets a bit awkward:

def aspairs(file):
  seq = ''
  for line in file:
    if isheader(line):
      if seq:
        yield ensembl_id, seq
      ensembl_id = line[1:].split()[0]
      seq = ''
    else:
      seq += line.strip()
  yield ensembl_id, seq

The code to emit the (id,sequence) pair occurs when a new header line is detected (inside the if isheader, above), and it has to have a special case for the first header line in the file (because there is no sequence to emit). That code to emit the (id,sequence) pair also has to appear after the for loop to cope with the last sequence in the file. The variable used to accumulate the sequence data, seq, has to be reset in two different places: once before the loop begins, and then inside the loop every time we see a header line.


Using a while loop is even worse because now you have to detect end-of-file as well.

The idea of using itertools.groupby to parse files like this came out of a discussion between me and Nick Barnes while we were coding on Clear Climate Code.

Step 2 of the GISTEMP algorithm begins by reading the Ts.txt file, which looks like this:

  495-11424037187400101286BEAVER MINES,AL                     
1935  -80    4  -50  -16   69  113  153  136  107   38 9999  -11
1936  -89 -240  -37   28  116  131  172  145  107   73   16  -73
1937 -183 -114  -24   45   92  120  153  138  103   76  -21  -53
1938  -28 -110   -4   30   84  133  158  132  147   76  -21  -38
[...more records here...]
  495-11404037187400201189PINCHER CREEK A,AL                  
1979 9999 9999 9999 9999 9999  145  168  164  139   80  -11  -13
1980 -113  -49  -27   76  110  129  166  131  113   82   14  -73
1981   -3  -24   24   57   92  115  155  177  126   55   32  -54
[...more records here...]

You can see the structural similarity to FASTA format. In this case we have sequences of temperature records with each block starting with a header line that has information about the station.

Nick had the idea of using itertools.groupby and I had the idea of using key=lambda line: line[0]==' ' (which is essentially the version of isheader that you need for Ts.txt files). If you like you can compare read_text with its counterpart, text_to_binary, from the earlier version. Although you should note that this is not strictly a fair comparison, because the earlier version is very close to the original Fortran, and there are more changes than just the idea of using itertools.groupby.

The application to FASTA files was inspired by my reading Mitchell Model’s «Bioinformatics Programming Using Python».

Bonus exercise (for the diligent reader): How bad is it to convert the output of itertools.groupby to a list, like I do in my counting example: len(list(itertools.groupby(f, key=isheader)))//2 ?

Python: slicing with zip


Wherein I feel compelled to write some more on Python code that I find more amusing than clear.

The more I use zip the more I love it. I’m thinking about writing a tutorial on how to (ab-) use zip, but for now just this recent discovery.

Say you have two iterators that each yield a stream of objects, iland and iocean (they could be gridded temperature values, say), and you want to get the first 100 values from each iterator to do some processing, whilst not consuming any more than 100 values. You can’t go list(iland)[:100] because that will consume the entire iland iterator and you’ll never be able to get those values past the 100th again.

You can use itertools (probably my second favourite Python module):

land100 = list(itertools.islice(iland, 100))
ocean100 = list(itertools.islice(iocean, 100))

It seems a shame to mention islice and 100 twice. One could use map with a quick pack and unpack, but this is not clear:

land100,ocean100 = map(lambda i: list(itertools.islice(i, 100)), (iland,iocean))

(a simple form of this, which I do sometimes use, is x,y = map(int, (x,y)))

What about giving some love to zip? It turns out that zip will stop consuming as soon as any argument is exhausted. So

zip(range(100), iland, iocean)

returns a list of 100 triples, each triple having an index (the integer from 0 to 99 from the range() list), a value from the iland iterator, and a value from the iocean iterator. And as soon as the list produced by range(100) is exhausted it stops consuming from iland and iocean, so their subsequent values can be consumed by other parts of the program.

And yes, this seems to work by relying on a rather implementation-specific feature of zip that I’m not sure should be set in stone.

That zip form above is all very good if one wants to go for n,land,ocean in ..., but what if we want the 100 land values and 100 ocean values each in their own list (like the code at the beginning of the article)? We can use zip again!

_,land100,ocean100 = zip(*zip(range(100), iland, iocean))

zip(*thing) turns a list of triples into a triple of lists, which is then destructured into the 3 variables _ (a classic dummy), land100, and ocean100.
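A sketch of the double-zip in action (relying, as noted above, on CPython’s zip drawing from its arguments left to right; the iterators here are stand-ins for iland and iocean):

```python
iland = iter(range(0, 1000))          # stand-ins for the two streams
iocean = iter(range(1000, 2000))

_, land100, ocean100 = zip(*zip(range(100), iland, iocean))
assert len(land100) == len(ocean100) == 100
assert land100[0] == 0 and ocean100[0] == 1000
assert next(iland) == 100             # the stream resumes just past the slice
```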

Don’t worry, the actual code uses the islice form from the first box, because I think it’s the clearest.

My EuroPython talk: Python Sucks!


On 2009-07-01 I gave a talk at EuroPython called “Python Sucks!”. They made me change the title of the talk, but the first slide sticks, so “Python Sucks!” it is.

It was a bit of a misleading title, as I did actually mention some things that I like about Python.

The slides (updated in blue to add useful things that people said in the talk itself) are available in PDF. I’m not sure the slides are particularly useful without a transcript; it’s not always clear if the point illustrated on the slide is something that I think is a good thing, or a bad thing.

I was a bit overwhelmed by the talk actually. I was thinking that, as I am not as famous as Bruce Eckel or Tony Hoare, about 30 people would turn up; I thought I could probably wrangle about that many people. The Recital Hall holds about 150 people, and it was pretty much full. *gulp*.

The audience included Tony Hoare (Man of Science); when I spotted that I sort of thought “oh no, Tony Hoare is in the house, I’d better behave now”. He usefully (and somewhat embarrassingly on my part) suggested that Occam be added to the cloud of “languages I don’t know enough about”. And it should.

One of the points of the talk was to get the audience talking; I think I did okay at providing a very lively forum for people, not just me, to get their python gripes off their chest. The contributions from members of the audience were well appreciated, and often informative. Certainly I felt that the audience provided a useful contribution, which of course made my job easier.

Later on in his keynote Tony Hoare said something like “from what I’ve seen here today you are doing a good job” [of being scientific engineers, or of steering Python, or something]. I hope he wasn’t referring to me.

Note to self: do not show a slide with “distutils” on it to 150 people. Unless you have nothing to say.

Python: dictionary of functions


On Twitter recently I was wondering what the best way to create a dictionary of functions in Python was. At the suggestion of Paul Hankin I looked into classes and metaclasses. The most direct way I discovered is to use a class; no metaclass required:

class foo:
  def bar(): pass
  def zon(x): return 1+x
fundict = foo.__dict__

This is direct, but not at all obvious. Hint to people that are mystified already: the __dict__ attribute of the class is a dictionary containing everything defined in the class.

fundict is now a dictionary that contains the two functions bar and zon.
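The same trick checked in Python 3 (where the class __dict__ is a read-only mappingproxy, but still holds the plain functions):

```python
class foo:
  def bar(): pass
  def zon(x): return 1+x

fundict = foo.__dict__
assert fundict['zon'](1) == 2      # plain functions, directly callable
assert callable(fundict['bar'])
assert '__module__' in fundict     # plus a few bookkeeping entries
```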

I made this discovery a few days before EuroPython, and was fortunate enough to bump into Bruce Eckel in the hallway at EuroPython just before he gave his metaprogramming talk, so I showed it to him in what I call my “naughty decorator” form:

def asdict(x):
  return x.__dict__

@asdict
class baz:
  def bar(): pass
  def zon(x): return 1+x

(By the way, one of the things I learnt at EuroPython is that the crappy decorator syntax gives Java refugees a warm fuzzy feeling).

Bruce pronounced this “not as naughty as you imply” and “worth showing” (this blog post was half-written before I bumped into Bruce, but he is definitely encouraging me).

It’s worth showing for the following reason:

The functions you get from the class’s __dict__ dictionary are not the same as the methods you get by accessing the attributes of the class. In other words, «foo.bar is not foo.__dict__['bar']». I was surprised by this, and so was Bruce.

As well as being a little weird, it makes a compact example to show the difference between a function, an unbound method, and a bound method.

I hope I don’t need to introduce a function. It’s just a thing you call with some arguments. Where it differs from a method is that a method is regarded as a message sent to an object, and receives that object as its first argument.

In Python, an unbound method is a method that isn’t associated with any particular object; it requires an object as its first argument (and Python loses big here, by requiring the object to be an instance of the class that defined the method). A bound method is a method already associated with an object; that object becomes the method’s first argument when the bound method is invoked with the remaining arguments.

So if we define a class foo (as we did, above), then: foo.bar is an unbound method;

foo().bar is a bound method (bound to the object we just created by invoking the foo class); and,

foo.__dict__['bar'] is a function.

This last fact was a great surprise to me. I had expected it to be an unbound method, and thought that my naughty decorator would have to have some hacky code to dig the function out of the unbound method. But it doesn’t.

Tiny problem: using the asdict decorator gives a dictionary that contains __module__ and __doc__ keys. Solution: another decorator:

def cleandict(x):
  for k in ('__module__', '__doc__'):
    del x[k]
  return x

@cleandict
@asdict
class baz:
  def bar(): pass
  def zon(x): return 1+x
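With hindsight it’s worth noting that Python 3 removed unbound methods entirely, so there the class attribute and the __dict__ entry are the very same function (a quick check, runnable under Python 3):

```python
class foo:
    def bar(self):
        return 'hello'

assert foo.bar is foo.__dict__['bar']   # no unbound-method wrapper any more

obj = foo()
assert obj.bar() == 'hello'             # binding still happens on instances
assert obj.bar.__func__ is foo.bar      # a bound method wraps the function
```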

EuroPython 2009


Overall I had a really enjoyable time, met lots of interesting people, some new and some renewed friendships, and learned some good stuff.

Too many web frameworks. Too many VMs.

What I like is the diverse range of applications to which people put Python. The Guardian use it so we can inspect our MPs’ expenses. Gregor Lingl uses it, and Papert’s turtle graphics, to teach. The Deutsches Zentrum für Luft- und Raumfahrt use it to manage the extraction and shuffling of petabyte (that’s 10**15!) datasets across the supercomputing networks used by climate scientists. The talented Stani Michiels creates the new Dutch Euro coin designs using Python. Weather derivatives trading. Video workflows. Collaborative mathematics. Games. Academic document archives. Billing. System administration. The list goes on.

Naturally I managed not to go to most of those talks (apart from keynotes (like Tony Hoare’s) and lightning talks, I went to 2 talks). That’s partly because going to 4 or 5 talks a day (which would easily have been possible given the packed schedule) makes you dead tired and causes leaky brain; partly because there were timetable clashes! But mostly because I was writing the materials for my two talks.

Writing your talk at the conference itself is… exciting. And intense. And it probably gives the conference organisers the willies (as in, “where are your slides?”). It did mean that I was able to include some stuff from the conference itself in my talks. A war story I picked up in hallway chat, about having to use 6 year old versions of Python on a government IT project, made it into my “Loving Old Versions” talk. Dr Sue Black’s talk about Bletchley Park was in the timetable, and that prompted me to put a reference to Turing’s coffee mug into my “Python Sucks!” talk (Sue Black got there first with the same anecdote; ah well). The “Python Sucks” talk also got a couple of references to Bruce Eckel’s keynote, which covered some of the same ground (just using a lot more space, according to Thomas Guest).

But it’s way stressful (Rob Collins’s massage to raise PSF funds was very helpful in that regard). Next year I’ll leave the laptop at home, and give a presentation using a marker pen (well, I will if they accept it!).