Archive for the 'computing' Category

Go bug my cat!

2015-02-06

Partly related to Francis Irving’s promise to avoid C, and partly related to the drinking at the recent PythonNW social, I wrote a version of /bin/cat in Go.

So far so good. It’s quite short.

The core loop has a curious feature:

for _, f := range args {
	func() {
		in, err := os.Open(f)
		if err != nil {
			fmt.Fprintf(os.Stderr, "%s\n", err)
			os.Exit(2)
		}
		defer in.Close()
		io.Copy(os.Stdout, in)
	}()
}

The curious feature I refer to is that inside the loop I create an anonymous function and immediately call it.

The earlier version of cat.go didn’t do that. And it was buggy. It looked like this:

for _, f := range args {
	in, err := os.Open(f)
	if err != nil {
		fmt.Fprintf(os.Stderr, "%s\n", err)
		os.Exit(2)
	}
	defer in.Close()
	io.Copy(os.Stdout, in)
}

The problem with this is the defer. I’m creating one defer per iteration, and they don’t get run until the end of the function, so they just pile up until main returns. This is bad because each defer is going to close a file. If we try to cat 9999 copies of /dev/null we get this:

drj$ ./cat $(yes /dev/null | sed 9999q)
open /dev/null: too many open files

It fails because on Unix there is a limit to the number of simultaneous open files a process can have. It varies from a few dozen to a couple of thousand. When this version of cat opens too many files, it falls over.

In this case we failed because we ran out of a fairly limited resource: Unix file descriptors. But even without that, each defer allocates a small amount of memory (for the closure, for example). So defer in a loop generally requires a little anonymous function wrapper.

I tried rewriting the loop using a lambda and a tail call (see this git branch), but it doesn’t work: the defers still don’t run promptly. (The tail call is awkward, too, and I had to declare the loop variable on a separate line from the function itself because the scoping isn’t quite right.)


Multiplication, Addition, Counting, and Python

2009-06-01

Jason Dyer’s teasing post got me thinking. About how Python could be used to give some insight into the meta-cognitive aspects of whole number multiplication. Natch.

When children solve a multiplication problem by correspondence, the objects in the multiplier set are mapped over for each object in the multiplicand set (hmm, or is it the other way around?). A typical procedure for multiplying 4 cakes by a price of £2 per cake might be to point to a cake with the left hand and then count up the 2 pounds using the right hand, then move the left hand to the next cake and repeat the count with the right hand, with the oral count continuing up; this is repeated until the left hand has exhausted all cakes.

We can model this in Python with the following program:

def mul(multiplicand, multiplier):
    count = 0
    for i in range(multiplicand):
        for j in range(multiplier):
            count += 1
    return count

Whereas children using repeated addition do something more like this:

def mul(multiplicand, multiplier):
    count = 0
    for i in range(multiplier):
        count = add(count, multiplicand)
    return count

In this case, add is a subroutine. You could define one easily enough in Python or else you could go «from operator import add».

Clearly the second program is more efficient than the first, both as a Python program and as a manual procedure performed by children; it’s the latter that I’m interested in — I’m using Python to describe the procedures. However, the second procedure requires that add is already adopted as an internal procedure. Of particular note is that, apart from count, the second procedure uses only one state variable, i; the first procedure uses two (i and j).

At the stage where multiplication is introduced, many children will not yet be performing addition accurately without counting. Effectively, add is not yet available to them as an internal procedure. This is likely to be a problem because this type of learner is unlikely to be able to add the multiplicand to the count accurately without getting confused about where in the multiplication procedure they are.

As an example of what might go wrong, imagine that a learner starts by using the left hand to maintain the i variable in the second procedure (above); this hand will count from 1 to multiplier (hmm, there’s a small sleight of hand going on here: the Python counts from 0 to n-1 whereas most learners will prefer to count from 1 to n). The count will be maintained orally (in other words, by speaking out the successive multiples of the multiplicand). Begin by raising one finger of the left hand and uttering the multiplicand (the initial count). Now we need to add the multiplicand to the oral count. Maybe the learner can do that without using their fingers, maybe they can’t; in any case, depending on the parameters chosen, at some point some learners may need to use their hands to perform the addition. So then the procedure for addition kicks in as the learner adds the multiplicand to the current count. Many learners will have a personal procedure for addition that requires both hands. The addition may be performed accurately, but the i state variable will be lost. We lose track of where we were in the multiply procedure.

If the learner can accurately add the multiplicand to the count mentally, then they stand a much better chance of performing the second procedure. This is what I mean by having add available as an internal procedure.

The first procedure can be thought of as a way of simultaneously arranging to perform the multiplication and the additions required by the multiplication without having any state variables overlap, thereby minimising the chance that confusion will result. Most learners will be capable of keeping track of the state variables required to perform the multiplication, but if left to their own devices may choose methods where the state variables overlap (in other words, they run out of hands). Thus, they can benefit from being guided towards a procedure which they can manage.

Another way to think about this is that at the sort of age where children begin to learn to multiply, their procedure for addition is leaky. It is not fully abstracted; performing addition may require the use of hands (for state variables), making the addition leak over any other use of the same hands to maintain state.

It seems to me that only when a learner can perform a typical count + multiplicand addition accurately in their head are they ready to perform multiplication as repeated addition.

Oh yeah, the original research on the whole “multiplication is/ain’t repeated addition” debate. It sucks. They test children at times t0 and t1 with a randomly chosen math-related intervention in between. It seems to me that a more carefully designed study should also have included a non-math-related intervention, such as giving the subjects fish-oil pills or teaching them to conga. After all, if I were being tested on one day and then told that I was going to sit a similar test tomorrow, I would bone up on the material before the second test, regardless of what I was being taught. Wouldn’t you? They make no attempt to account for this effect.

Appendix for Python hackers

The first definition of mul that I give is of course completely worthless as a practical implementation in Python. However, note the following: «10*10L» is 100L but «mul(10,10L)» is 100; in other words mul returns an int if it possibly can.

Microsoft’s Wireless Keyboard Encryption

2007-12-07

I have read the recent whitepaper regarding the cracking of some of Microsoft’s keyboards. Here is my summary:

  • 8-bit key;
  • XOR;
  • Hohoho.

That 80’s feeling

2007-11-23

There should be a name for that feeling you get when moving house and you discover yet another forgotten microcomputer from the 1980’s. ZX81… BBC B… Commodore 64… BBC Master… Hey a SPARCstation 2! Isn’t that from the 90’s?

Oh look, another Model B.

Stupid keyboard, stupid shift; a proposal for better key sensing

2007-09-04

On my Apple keyboard (the white wired one, circa 2003, with an ergonomic curve to it) if I press both Shift keys down then it disables most of the QWERTY row. I guess this is some sort of ghosting or masking. Is it useful to be able to separately detect the two Shift keys? Why aren’t they wired on the same row/column intersection? That would eliminate this sort of masking. This makes me think far too hard about keyboard wiring designs and microcontroller pin minimisation. Observation: pin minimisation is the root of all keyboard misdesigns (well, except for problems with the actual shape, action, and layout of the keys).

The Matrix

Electrically, keyboards are arranged in a matrix of rows and columns with a switch (corresponding to an actual key) at the intersection of each row/column pair. Depressing a key closes its switch and forms an electrical connexion between the row and the column. A microcontroller uses a scanning algorithm to detect key presses: it regularly drives each row bus in turn and senses each column bus to detect which keys are depressed. Warning: this description is made up by me based on my ancient knowledge of the ZX81 hardware design. I have no idea how modern keyboards work, but it’s probably something very similar. Probably you can get a single chip for USD 1 that does all this driving and sensing of the matrix, has a register and some LED drivers for the Caps Lock light, and a USB interface all in one. I suspect Apple have very little choice in how the electrical wiring of their keyboard is made.
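The scanning algorithm can be sketched in a few lines of Python. This is only an illustration of the idea, not real firmware: the “hardware” is faked as a set of pressed (row, column) pairs, and the function name is made up.

```python
# Hypothetical sketch of a keyboard scan loop: drive each row in turn,
# sense every column, and report the keys found closed.
# The "hardware" here is just a set of pressed (row, column) pairs.

def scan(pressed, rows, cols):
    """Return the list of (row, col) keys detected as closed."""
    detected = []
    for r in range(rows):          # drive row r
        for c in range(cols):      # sense each column in turn
            if (r, c) in pressed:  # switch at this intersection is closed
                detected.append((r, c))
    return detected

print(scan({(0, 1), (3, 4)}, rows=8, cols=5))  # → [(0, 1), (3, 4)]
```

Real firmware would also debounce the switches and handle ghosting, but the row-drive/column-sense loop is the core of it.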

The number of microcontroller pins I need for a keyboard of N keys is p+q where p×q ≥ N. If we assume that the cost of a microcontroller is proportional to the number of pins it has, then we would do well to minimise the number of pins we require for the keyboard matrix. For a 40 key keyboard (the ZX81) an arrangement of 8 rows and 5 columns is optimal. This comes about because we can only place one key at the intersection of each row and column. If we try and place two or more keys at the same intersection then the scan algorithm can’t differentiate between them.
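Finding the pin-minimising split is a small search: for each candidate row count p, the column count is the smallest q with p×q ≥ N, and we keep the pair with the smallest sum. A sketch (the function name and the keys-per-intersection parameter, which anticipates the trick below, are my own invention):

```python
import math

def min_pins(n_keys, keys_per_intersection=1):
    """Return (rows, cols) minimising rows + cols, with enough
    intersections to hold n_keys keys."""
    needed = math.ceil(n_keys / keys_per_intersection)
    best = None
    for p in range(1, needed + 1):
        q = math.ceil(needed / p)      # smallest q with p * q >= needed
        if best is None or p + q < best[0] + best[1]:
            best = (p, q)
    return best

print(min_pins(40))     # → (5, 8): 13 pins for the ZX81's 40 keys
print(min_pins(40, 2))  # → (4, 5): 9 pins with 2 keys per intersection
```

As expected, the optimum sits at the squarest factorisation at or above N.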

However, if we can somehow differentiate between two or more keys at the same intersection then we could put more keys per intersection and thereby reduce the number of pins we require.

Capacitance is Key

Suppose we put a capacitor inline with each switch. Now when a key is depressed, instead of a closed circuit between a row and a column we get a CR circuit. We can still use the scanning algorithm, but instead of driving the row bus with a constant voltage we drive it with a periodic signal. Obviously we should choose the frequency so that the CR circuit passes it. A 1.5 MHz square wave might be a good choice for a keyboard because we can get this from the clock used for the USB interface (12 MHz divided by 8). If we put different value capacitors on different key switches and drive the row with two different frequencies (1.5 MHz and 0.75 MHz, say) then we can detect two different keys at the same row/column intersection. We scan once at 1.5 MHz to detect one set of keys and scan again at 0.75 MHz to detect another set of keys.
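The two-pass scan extends the earlier sketch: tag each key with the drive frequency its capacitor passes, and run one full scan per frequency. Again this is purely illustrative — the function name, the frequency values, and the fake hardware representation are all made up:

```python
# Sketch of the two-frequency scan: each pressed key is tagged with the
# drive frequency its capacitor passes, and we run one scan pass per
# frequency. Two keys can then share the same (row, col) intersection.

def scan_two_freq(pressed, rows, cols, freqs=(1_500_000, 750_000)):
    """pressed is a set of (row, col, freq) triples for closed keys."""
    detected = []
    for f in freqs:                     # one full matrix scan per frequency
        for r in range(rows):           # drive row r at frequency f
            for c in range(cols):       # sense each column
                if (r, c, f) in pressed:
                    detected.append((r, c, f))
    return detected

# Two different keys sharing intersection (0, 0):
keys = {(0, 0, 1_500_000), (0, 0, 750_000)}
print(scan_two_freq(keys, rows=4, cols=5))
# → [(0, 0, 1500000), (0, 0, 750000)]
```

Both keys at (0, 0) are resolved, one per pass, which is exactly what lets us halve the intersection count.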

For our ZX81 example with 40 keys we would need only 20 intersections, which we can do with 4 rows and 5 columns, reducing our pin requirements from 13 to 9 pins. Awesome!

In general we save about 30% going from 1 key per intersection to 2 keys per intersection: we go from about 2×sqrt(N) pins to about 2×sqrt(N÷2), which is sqrt(2)×sqrt(N). sqrt(2)÷2 is of course about 0.7.
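The ratio is independent of N, which a quick numeric check confirms:

```python
import math

# Rough pin counts for N keys: about 2*sqrt(N) at one key per
# intersection, about 2*sqrt(N/2) = sqrt(2)*sqrt(N) at two.
for n in (40, 100, 400):
    one_key = 2 * math.sqrt(n)
    two_keys = 2 * math.sqrt(n / 2)
    print(n, round(two_keys / one_key, 3))  # ratio is always ~0.707
```

The ratio is sqrt(2)/2 ≈ 0.707 for every N, i.e. roughly a 30% saving.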

More keys per intersection means greater savings, but also requires more frequencies, slower scanning (because each frequency requires an extra pass), and more stringent rejection requirements in the CR circuits.

For a really cheap and nasty keyboard, instead of having an open/closed switch inline with a capacitor, simply make the keyboard so that two metal plates are brought close together when the key is depressed. The two metal plates form a capacitor whose value can be controlled by changing the area of the plates. (In fact, don’t really cheap and nasty keyboards use this principle already?) The challenge would be to reliably detect keys with different capacitance.

Microsoft not patenting enough

2007-07-31

Bill, in his BBC article, writes of Microsoft: “In 2004 Ballmer announced that Linux infringed 228 Microsoft patents; in 2007 company lawyer Brad Smith said it was 235.”

Are Microsoft really seriously suggesting that in 3 years they only developed 7 patents that covered stuff in Linux? Are they asleep at the wheel or something?