Dictionary ordering and floating point


This is an expanded version of an earlier tweet of mine.

Consider the following Python program:

    d = {k:k for k in 'bfg'}
    print(d)

The output of this program changes from run to run:

    drj$ ./d.py 
    {'g': 'g', 'f': 'f', 'b': 'b'}
    drj$ ./d.py 
    {'b': 'b', 'g': 'g', 'f': 'f'}

Dictionaries in Python are implemented using hash tables, so when iterating over a dictionary, the order of the keys is not generally predictable.

You might have known that it wasn’t predictable, but you might not have known that it can change from one Python process to the next. This is a good thing: randomizing the hash from process to process defeats a denial-of-service attack that is cross-language and cross-framework. But don’t worry about that; the randomization has been on by default since Python 3.3, released over 2 years ago now.
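
As an aside: if you want reproducible ordering, say while debugging, you can pin the hash seed with the PYTHONHASHSEED environment variable; setting it to 0 disables the randomization (and with it the DoS protection, so don’t do that in production). Runs of

    drj$ PYTHONHASHSEED=0 ./d.py

then print the dictionary in the same order every time (whatever that order happens to be).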

Recently we (at ScraperWiki) came across this in the context of floating point addition. What if the order of a dictionary determines the order of a summation? This program adds up the values of a dictionary:

    d = dict(b=0.2, f=0.6, g=0.7)
    print(sum(d.values()))

It doesn’t always produce the same answer:

    drj$ ./e.py 
    1.5
    drj$ ./e.py 
    1.4999999999999998

This is obviously a very small difference, but as it happened we were dynamically generating a web page that showed rounded results. 1.4999999999999998 rounds to 1, 1.5 rounds to 2. You could reload the web page and the value shown would sometimes flip from 1 to 2, and vice versa (well, these aren’t the real numbers, but you get the idea).

There are a few things worth pointing out here. As a convenience, instead of writing “3602879701896397 / 18014398509481984” I’ve written “0.2”; similarly “5404319552844595 / 9007199254740992” is written “0.6”, and “3152519739159347 / 4503599627370496” is written “0.7”.

The second program is just calling sum(), so in this case I don’t get to pick the order of adding up for two reasons: 1) The order of the list that is input to sum() is not chosen by me; 2) The order in which sum() sums things might be different anyway (this turns out to be a surprisingly subtle subject, but I don’t want to get into math.fsum).

Tweets like this: “I don’t care if float addition is associative or not” miss the point I was making, which is that floating point addition is not associative (meaning that (0.2 ⟡ 0.6) ⟡ 0.7 != 0.2 ⟡ (0.6 ⟡ 0.7), where ⟡ means floating point addition). Floating point arithmetic is an abstraction, and sometimes you need to understand the abstraction and what lies beneath in order to debug your webpage.
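
You can check the non-associativity with the numbers from this very example:

    >>> (0.2 + 0.6) + 0.7
    1.5
    >>> 0.2 + (0.6 + 0.7)
    1.4999999999999998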

In this case we fixed the problem by avoiding floating point until the very last step. The numbers we were adding were all of the form p/20 (where p is an integer). So doing the adding up in integers and then doing a single division at the end means that we get the exact answer (or the nearest representable floating point number when the exact answer is not representable).
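
A minimal sketch of the idea (the particular numerators here are made up for illustration; the real data isn’t shown):

    # Work in twentieths, as exact integers, and divide only once at the end.
    # Each value p stands for p/20; these p values are hypothetical.
    numerators = dict(b=4, f=12, g=14)
    total = sum(numerators.values()) / 20
    print(total)  # 1.5, whatever order the dictionary iterates in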

What you need to be a programmer


You need, above all others:

1) to have dogged persistence in the face of failure.

2) to celebrate the utterly ordinary.

On 1. There is a reason why FAIL Blog is a thing. Failure represents 99% of what programmers do. Programmers do not sit down and write flawless code that works first time. We make mistakes. The compiler complains. The tests fail. The program stops when something unexpected happens. Each failure requires the programmer to work out why it failed, fix it, and carry on. I’m talking about failures at every level, from “missing comma” and “expected string not int” type errors, through intermediate errors like using feet instead of metres, or using 100GB of RAM instead of 1GB, to higher-level errors like designing a mobile phone app instead of a website.

Some people just cannot do this. They will type in their program, and it will fail to compile or run because of a missing comma or something like that, and they will just stop, and go and make a bacon butty instead. This is perfectly understandable.

Personally, as a programmer, it sometimes feels like it’s not that I want the program to run correctly, it’s that I have a curse where I cannot prevent myself from investigating every last failure and fixing it.

When programming, failure is entirely normal. It takes a certain personality type to be able to cope with this, several times a day.

On 2. The flip side is that when programmers succeed, most of the time we succeed in something entirely ordinary. We have calculated the amount of VAT correctly. We have displayed the user’s email address on the page correctly. Things that are entirely trivial and obviously should Just Work take sufficient effort, and involve overcoming several failures, that just achieving the entirely ordinary seems like it deserves a celebration.

Clearly it is not normal for someone to celebrate the fact that as a result of typing stuff into the CSS file, the dotted border between the header of a table and the next row now displays correctly. So remote are programmers from the rich celebrations that life has to offer that we have to make our own.

There is a horrific irony to all this. Computers only do what they are programmed to do. But it is incredibly difficult to program them (correctly). What they are programmed to do is only an impression of what we intend them to do. In this case it is like trying to take an impression of a fossil with only wet tissue. So, on the one hand everything a computer does it has been programmed to do, on the other it is a monumentally difficult task to get the computer to do anything at all that is useful and correct.

Lovelace grasps this subtle point: “It can do whatever we know how to order it to perform”. It’s the “know how” that’s the tricky bit.

Go bug my cat!


Partly related to Francis Irving’s promise to avoid C, and partly related to the drinking at the recent PythonNW social, I wrote a version of /bin/cat in Go.

So far so good. It’s quite short.

The core loop has a curious feature:

for _, f := range args {
	func() {
		in, err := os.Open(f)
		if err != nil {
			fmt.Fprintf(os.Stderr, "%s\n", err)
			return
		}
		defer in.Close()
		io.Copy(os.Stdout, in)
	}()
}

The curious feature I refer to is that inside the loop I create an anonymous function and immediately call it.

The earlier version of cat.go didn’t do that. And it was buggy. It looked like this:

for _, f := range args {
	in, err := os.Open(f)
	if err != nil {
		fmt.Fprintf(os.Stderr, "%s\n", err)
		continue
	}
	defer in.Close()
	io.Copy(os.Stdout, in)
}

The problem with this is the defer. I’m creating one defer per iteration, and they don’t get run until the end of the function. So they just pile up until main returns. This is bad because each defer is going to close a file. If we try and cat 9999 copies of /dev/null we get this:

drj$ ./cat $(yes /dev/null | sed 9999q)
open /dev/null: too many open files

It fails because on Unix there is a limit to the number of simultaneous open files a process can have. It varies from a few dozen to a couple of thousand. When this version of cat opens too many files, it falls over.

In this case we failed because we ran out of a fairly limited resource, Unix file descriptors. But even without that, each defer allocates a small amount of memory (a closure, for example). So defer in loops requires (generally) a little anonymous function wrapper.
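
An alternative (my sketch, not the original cat.go) is to hoist the loop body into a named function, which makes the scope of the defer explicit:

// catFile copies one file to stdout. Its deferred Close runs as soon
// as catFile returns, i.e. after each file, not at the end of main.
func catFile(f string) {
	in, err := os.Open(f)
	if err != nil {
		fmt.Fprintf(os.Stderr, "%s\n", err)
		return
	}
	defer in.Close()
	io.Copy(os.Stdout, in)
}

and the loop body becomes simply catFile(f).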

I tried rewriting the loop using a lambda and a tail call (see this git branch), but it doesn’t work: the defers still don’t run promptly. (And the tail call is awkward; I had to declare the loop variable on a separate line from the function itself because the scoping isn’t quite right.)



Valve’s steam appears to be a package manager for installing Valve software (games). Part of steam on Linux is a shell script: steam.sh.

It turns out, if you’re not careful, if you try and uninstall steam or something… then this innocent 600 line shell script can kind of accidentally DELETE ALL YOUR USER FILES. Ha ha.

Much hilarity in the github issue.

At core the proximate issue is executing this command:

	rm -rf "$STEAMROOT/"*

The problem is that, perhaps in mysterious circumstances, STEAMROOT can be set to the empty string. Which means the command rm -fr "/"* gets executed. Which removes all the files that you have access to on the system (it might take its time doing this).

I’m working off this version of steam.sh.

First off, it’s 600 lines long. That, right there, should set the alarm bells ringing. No shell script should be that long. It’s just not a suitable language for writing more than a few lines in.

set -u, whilst a good idea in a lot of scripts, would not have helped here. As it happens, STEAMROOT is set, but set to the empty string.

"${STEAMROOT:?}", as suggested by one of the commentor’s in github, would have helped. The script will exit if STEAMROOT is unset or set to the empty string.

Immediately before this code there is a comment saying “Scary!”. So that’s another thing. If one programmer thinks the code is scary, then we should probably review the code. And make it less scary. Clearly adding an explicit check that STEAMROOT is set would have helped make it less scary.

It would also be a good idea to add a -- argument to rm to signify the end of the options. Otherwise if STEAMROOT starts with a «-» then it will trigger rm into thinking that it is an option instead of the directory to delete. So we should write:

    rm -fr -- "${STEAMROOT:?}"/*

STEAMROOT is assigned near the beginning of the file:

STEAMROOT="$(cd "${0%/*}" && echo $PWD)"

It is often problematic to use command substitution in an assignment. The problem being that the command inside the round brackets, cd "${0%/*}" && echo $PWD in this case, could fail. The shell script still carries on and assigns the stdout of the command to the variable. And if the command failed and produced no output then STEAMROOT will become the empty string.
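
A two-line demonstration of the failure mode, using a directory that doesn’t exist:

    V="$(cd /nonexistent 2>/dev/null && echo "$PWD")"
    echo "V is '$V'"    # prints: V is ''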

Here would be a good place to explicitly check that STEAMROOT is not an empty string. : "${STEAMROOT:?}" will do, but if [ -z "$STEAMROOT" ] ; then exit 99; fi is more explicit.

set -e would have helped. If a command substitution is assigned to a variable and the command fails (exit code != 0) then the assignment statement fails and that will trigger set -e into exiting the script. It’s not ideal error checking, but it is better than nothing.
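
For example, this sketch stops at the assignment instead of carrying on with an empty variable:

    #!/bin/sh
    set -e
    # The exit status of the assignment is the exit status of the
    # command substitution, so the failed cd aborts the script here.
    STEAMROOT="$(cd /nonexistent && echo "$PWD")"
    echo "never reached"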

The code, as described by the comment above it, is trying to find out the location of the script. This is often problematic: there’s no portable way to find out. But as long as you’re in bash, and the script explicitly is a bash script and uses various bashisms, why not just use the relatively straightforward DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd ) as recommended by this Stack Overflow answer? No need to pretend that $0 is set to anything useful. (All of the above still applies, though.)

The steam.sh script is a bit enigmatic. Bits of it are written by someone who clearly knows shell scripting. The ${0%/*} thing to strip off the last component of a path is not common knowledge; but why not use dirname, as the code later on in the script does? It correctly uses the portable equality operator, a single «=», in code like if [ "$STEAMEXE" = "steamcmd" ], but later on uses the bashisms «==» and «[[». It clearly knows about the $( ... ) notation for command substitution, but then uses the legacy (and yucky) backquote syntax elsewhere. It carefully avoids using dirname (in the POSIX standard, and therefore very likely to be installed on any given Unix system), but then uses curl without checking (and curl isn’t installed on Ubuntu by default).

In summary: too long; attempting to locate directory containing script is problematic; doesn’t do enough checking (in particular, set -e).

shell booleans are commands!


How should we represent boolean flags in shell? A common approach, possibly inspired by C, is to set the variable to either 0 or 1.

Then you see code like this:

if [ $debug = 1 ]; then

or this example from zgrep:

if test $have_pat -eq 0; then

There is nothing special about 0 and 1; they are just two strings for representing “the flag is set” and “the flag is unset”.

Testing strings is surprisingly awkward in shell. In Python you can write if debug: .... It would be nice if we could do something similar in shell:

if $debug ; then

Well, we can. In a shell if statement, if thing, the thing is just a command. If we arrange that debug is either true or false, then if $debug will run either the command true or the command false.

debug=true # sets flag
debug=false # unsets flag

I wish I could remember who I learnt this trick off because I think it’s super cool, and not enough shell programmers know about it. true and false are pretty much self explanatory as boolean values, and no extra code is needed because they already exist as shell commands.

You can also use this with &&:

$debug && stuff
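
Putting the pieces together, a minimal sketch (the -v handling is my own illustration):

    #!/bin/sh
    verbose=false                    # the flag is the name of a command
    [ "$1" = "-v" ] && verbose=true

    $verbose && echo "verbose mode on" >&2
    if $verbose; then
        set -x                       # trace commands when verbose
    fi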

Sometimes shell scripts have a convention where a variable is either unset (to mean false) or set to anything (to mean true). You can convert from this convention to the true/false convention with 2 lines of code:

# if foo is set to anything, set it to "true"
# if foo is the empty string, set it to "false"
foo=${foo:+true}
foo=${foo:-false}

bash functions: it is mad!


bash can export functions to the environment. A consequence of this is that bash can import functions from the environment. This leaves us #shellshocked. #shellshock aside, why would anyone want to do this? As I Jackson says: “it is mad”.

Exporting bash functions allows a bash script, or me typing away at a terminal in bash, to affect the behaviour of basically any other bash program I run. Or a bash program that is run by any other program.

For example, let’s say I write a program in C that prints out the help for the zgrep program:

#include <stdlib.h>

int main(void)
{
    return system("zgrep --help");
}

This is obviously just a toy example, but it’s not unusual for Unix programs to call other programs to do something. Here it is in action:

drj$ ./zgrephelp
Usage: /bin/zgrep [OPTION]... [-e] PATTERN [FILE]...
Look for instances of PATTERN in the input FILEs, using their
uncompressed contents if they are compressed.

OPTIONs are the same as for 'grep'.

Report bugs to <bug-gzip@gnu.org>.

Now, let’s say I define a function called test in my interactive bash session:

test () { bob ; }

This is unwise (test is the name of a well-known Unix utility), but so far only harmful to myself. If I try and use test in my interactive session, things go a bit weird:

drj$ test -e /etc/passwd
The program 'bob' is currently not installed. You can install it by typing:
sudo apt-get install python-sponge

but at least I can use bash in other processes and it works fine:

drj$ bash -c 'test -e /etc/passwd' ; echo $?
0

What happens if I export the function test to the environment?

drj$ export -f test
drj$ ./zgrephelp
/bin/zgrep: bob: command not found
/bin/zgrep: bob: command not found
/bin/zgrep: bob: command not found
/bin/zgrep: bob: command not found
/bin/zgrep: bob: command not found
gzip: /bin/zgrep: bob: command not found
--help.gz: No such file or directory
/bin/zgrep: bob: command not found
Usage: grep [OPTION]... PATTERN [FILE]...
Try `grep --help' for more information.
/bin/zgrep: bob: command not found
/bin/zgrep: bob: command not found
/bin/zgrep: bob: command not found

zgrephelp stops working. Remember, zgrephelp is written in C! Of course, zgrephelp runs the program zgrep which is written in… bash! (on my Ubuntu system).

Exporting a function can affect the behaviour of any bash script that you run, including bash scripts that are run on your behalf by other programs, even if you never knew about them, and never knew they were bash scripts. Did you know /bin/zcat is a bash script? (on Ubuntu)

How is this ever useful? Can you ever safely export a function? No, not really. Let’s say you export a function called X. Package Y might install a binary called X and a bash script Z that calls X. Now you’ve broken Z. So you can’t export a function if it has the same name as a binary installed by any package that you might ever install (including packages that you never use directly but are installed merely to compile some package that you do want to use).

Let’s flip this around, and consider the import side.

When a bash script starts, before it’s read a single line of your script, it will import functions from the environment. These are just environment variables of the form BASH_FUNC_something()=() { function definition here ; }. You don’t have to create those by exporting a function; you can just create an environment variable of the right form:

drj$ env 'BASH_FUNC_foo()=() { baabaa ; }' bash -c foo
bash: baabaa: command not found

Imagine you are writing a bash script and you are a conscientious programmer (this requires a large amount of my imagination). bash will import potentially arbitrary functions, with arbitrary names, from the environment. Can you prevent these functions being defined?

It would appear not.

Any bash script should carefully unset -f prog for each prog that it might call (including builtins like cd; yes you can define a function called cd).
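
Something like this at the top of the script (a sketch; the list of names is whatever your script actually calls):

    # Undefine any imported function that shadows a command we rely on.
    for cmd in cd test grep sed rm; do
        unset -f "$cmd"
    done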

Except of course, that you can’t do this if unset has been defined as a function.

Why is exporting functions ever useful?

How to patch bash


3rd in what is now becoming a series on ShellShock. 1st one: 10 tips for turning bash scripts into portable POSIX scripts; 2nd one: why big is bad.

ShellShock is a remote code exploit in /bin/bash (when used in conjunction with other system components). It relies on the way bash exports functions to its environment. If you run the command «env - bash -c 'foo () { hello ; } ; export -f foo ; env'» you can see how this works ordinarily:

foo=() { hello
}

The function foo when exported to the environment turns into an environment variable called foo that starts with «() {». When a new bash process starts it scans its environment to see if there are any functions to import and it is that sequence of characters, «() {», that triggers the import. It imports a function by executing its definition which causes the function to be defined.

The ShellShock bug is that there is a problem in the way bash parses the function out of the environment, which causes any code following the function definition to be executed as well. That’s bad.

So the problem is with parsing the function definition out of the environment.

Fast forward 22 years after this code was written. That’s present day. S Chazelas discovers that this behaviour can lead to a serious exploit. A patch is issued.

Is this patch a 77 line monster that adds new functionality to the parser?

Why, yes. It is.

Clearly function parsing in bash is already a delicate area, that’s why there’s a bug in it. That should be a warning. The fix is not to go poking about inside the parser.

The fix, as suggested by I Jackson, is to “Disable exporting shell functions because they are mad“. It’s a 4 line patch (well, morally 4 lines) that just entirely removes the notion of importing functions from the environment:

--- a/variables.c
+++ b/variables.c
@@ -347,6 +347,7 @@ initialize_shell_variables (env, privmode)
       temp_var = (SHELL_VAR *)NULL;
+#if 0 /* Disable exporting shell functions because they are mad. */
       /* If exported function, define it now.  Don't import functions from
         the environment in privileged mode. */
       if (privmode == 0 && read_but_dont_execute == 0 && STREQN ("() {", string, 4))
@@ -380,6 +381,9 @@ initialize_shell_variables (env, privmode)
              report_error (_("error importing function definition for `%s'"), name);
+      if (0) ; /* needed for syntax */
 #if defined (ARRAY_VARS)
       /* Array variables may not yet be exported. */

In a situation where you want to safely patch a critical piece of infrastructure you do the simplest thing that will work. Later on you can relax with a martini and recraft that parser. That’s why Jackson’s patch is the right one.

Shortly after the embargo was lifted on the ShellShock vulnerability, and the world got to know about it and see the patch, someone discovered a bug in the patch, and so we have another CVE issued and another patch that pokes about with the parser.

That simply wouldn’t have happened if we’d cut off the head of the monster and burned it with fire. /bin/bash is too big. It’s not possible to reliably patch it. (And just in case you think making patches is easy: I had to update this article to point to Jackson’s corrected patch.)

Do you really think this is the last patch?

Why big is bad


If you know me, you know that I don’t like using /bin/bash for scripting. It’s not that hard to write scripts that are portable, and my earlier “10 tips” article might help.

Why don’t I like /bin/bash? There are many reasons, but it’s mostly about size.

drj$ ls -lL $(which sh bash)
-rwxr-xr-x 1 root root 959120 Sep 22 21:39 /bin/bash
-rwxr-xr-x 1 root root 109768 Mar 29 2012 /bin/sh

/bin/bash is nearly 10 times the size of /bin/sh (which in this case, is dash). It’s bigger because it’s loaded with features that you probably don’t need. An interactive editor (two in fact). That’s great for interactive use, but it’s just a burden for non-interactive scripts. Arrays. Arrays are really super useful and fundamental to many algorithms. In a real programming language. If you need arrays, it’s time for your script to grow up and become a program, in Python, Lua, Go, or somesuch.

Ditto job control.
Ditto Extended Regular Expression matching.
Ditto mapfile.
Ditto a random number generator.
Ditto a TCP/IP stack.

You might think that these things can’t harm you if you don’t use them. That’s not true. There is a little bit of harm just in being bigger. When one thing is 10 times bigger than it needs to be, no one will notice. When everything is 10 times bigger than it needs to be, it’s wasteful and extremely difficult to fix. These features take up namespace. Got a shell script called source or complete? Can’t use it: those are builtins in bash. They slow things down. Normally I wouldn’t mention speed, but 8 years ago Ubuntu switched from bash to dash for the standard /bin/sh, and the speed increase was enough to affect boot time. Probably part of the reason that bash is slower is simply that it’s bigger: there are more things it has to do or check, even though you’re not making use of those features.

If you’re unlucky a feature you’ve never heard of and don’t use will interact with another feature or a part of your system and surprise you. If you’re really unlucky it will be a remote code exploit so easy to use you can tweet exploits, which is what ShellShock is. Did you know you can have functions in bash? Did you know you can export them to the environment? Did you know that the export feature works by executing the definition of the function? Did you know that it’s buggy and can execute more than bash expected? Did you know that with CGI you can set environment variables to arbitrary strings?

There are lots of little pieces to reason about when considering the ShellShock bug, because bash is big. And that’s after we know about the bug. What about all those features you don’t use and don’t even know about? Have you read and understood the bash man page? Well, those features you’ve never heard of are probably about as secure as the feature that exports functions to the environment: a feature that few people know about, and fewer people use (and in my opinion, no one should use).

The most important thing about security is attitude. It’s okay to have the attitude that a shell should have lots of useful interactive features; it’s arguable that a shell should have a rich programming environment that includes arrays and hash tables.

It’s not okay to argue that this piece of bloatware should be installed as the standard system shell.

10 tips for turning bash scripts into portable POSIX scripts


In the light of ShellShock you might be wondering whether you really need bash at all. A lot of things that are bash-specific have portable alternatives that are generally only a little bit less convenient. Here’s a few tips:

1. Avoid «[[»


bash:

if [[ $1 = yes ]]

POSIX:

if [ "$1" = "yes" ]

Due to problematic shell tokenisation rules, Korn introduced the «[[» syntax into ksh in 1988 and bash copied it. But it’s never made it into the POSIX specs, so you should stick with the traditional single square bracket. As long as you double quote all the things, you’ll be fine.


2. Avoid «==» for testing for equality


if [ "$1" == "yes" ]


if [ "$1" = "yes" ]

The double equals operator, «==», is a bit too easy to use accidentally for old-school C programmers. It’s not in the POSIX spec, and the portable operator is single equals, «=», which works in all shells.

Technically when using «==» the thing on the right is a pattern. If you see something like this: «[[ $- == *i* ]]» then see tip 8 below.

3. Avoid «cd -»


bash:

cd -

ksh and bash:

cd ~-

You tend to only see «cd -» used interactively or in weird things like install scripts. It means cd back to the previous place.

Often you can use a subshell instead:

... do some stuff in the original working directory
( # subshell
cd newplace
... do some stuff in the newplace
) # popping out of the subshell
... do some more stuff in the original working directory

But if you can’t use a subshell then you can always store the current directory in a variable:


old=$PWD

then do «cd "$old"» when you need to go there. If you must cling to the «cd -» style then at least consider replacing it with «cd ~-», which works in ksh as well as bash and is blessed as an allowed extension by POSIX.

4. Avoid «&>»


bash:

ls &> /dev/null

POSIX:

ls > /dev/null 2>&1

You can afford to take the time to do the extra typing. Is there some reason why you have to type this script in as quickly as possible?

5. Avoid «|&»


bash:

ls xt yt |& tee log

POSIX:

ls xt yt 2>&1 | tee log

This is a variant on using «&>» for redirection. It’s a pipeline that pipes both stdout and stderr through the pipe. The portable version is only a little bit more typing.

6. Avoid «function»


bash:

function foo { ... }

POSIX:

foo () { ... }

Don’t forget, you can’t export these to the environment. *snigger*

7. Avoid «((»


bash:

((x = x + 1))

POSIX:

x=$((x + 1))

The «((» syntax was another thing introduced by Korn and copied into bash.

8. Avoid using «==» for pattern matching


bash:

if [[ $- == *i* ]]; then ... ; fi

POSIX:

case $- in (*i*) ... ;; esac

9. Avoid «$’something’»

bash:

nl=$'\n'

POSIX:

nl='
'

You may not know that you can just include newlines in strings. Yes, it looks ugly, but it’s totally portable.

If you’re trying to get even more bizarre characters like ISO 646 ESC into a string you may need to investigate printf:

Esc=$(printf '\33')

or you can just type the ESC character right into the middle of your script (you might find Ctrl-V helpful in this case). A word of caution if using printf: while octal escapes, like \377, are portable POSIX syntax, hex escapes are not.

10. «$PWD» is okay

A previous version of this article said to avoid «$PWD» because I had been avoiding it since the dark ages (there was a time when some shells didn’t implement it and some did).


Most of these are fairly simple replacements. The simpler tokenisation that Korn introduced for «[[» and «((» is welcome, but it comes at the price of portability. I suspect that most of the bash-specific features are introduced into scripts unwittingly. If more people knew about portable shell programming, we might see more portable shell scripts.

I’m sure there are more, and I welcome suggestions in the comments or on Twitter.

Thanks to Gareth Rees who suggested minor changes to #3 and #7 and eliminated #10.

Why I gave up beef


or “peer reviewed article changed behaviour”, or “this one crazy trick will reduce your land use by 66%!”.

Land use of animal calories

I had known for a while that eating meat was resource intensive, and beef was particularly bad (mostly from MacKay’s Sustainable Energy Without the Hot Air), and over the last couple of years I have been trying to reduce my meat intake. Sometimes I claimed I was “mostly vegetarian” (with some success, a friend had known me for a few weeks before realising, as I chewed my bacon one lunchtime, that I wasn’t vegetarian). Recently a friend gave up beef, and as I said at the time it was probably a better environmental commitment than my “mostly vegetarian”. But it wasn’t until I saw the numbers crunched in Eshel et al 2014 that I decided to eliminate beef.

It is no longer reasonable to entertain doubt as to the environmental impact of beef.

The graph I show above is for land use per megacalorie. I had to redraw it from Eshel et al 2014 figure 2 because… Well, why don’t you see:

[Eshel et al 2014, figure 2]
In order to fit each graph into its tiny rectangle the long bars have been truncated, and the extra long bit that has been removed has been replaced with a tiny number giving the coordinate that should have been plotted. For beef land use it’s 147 m²·yr (compared to poultry, which is 4). So where the graph should show a spike that’s 40 times bigger, instead it shows one that’s 4 or 5 times bigger. And this is their headline figure. The whole point of the article is to show how much more resource intensive beef is. Are the PNAS page charges really so high that they have to cram all the graphs into one corner?

I’ve just shown the land use figure, but Eshel et al 2014 have analyses for water, greenhouse gasses, and reactive nitrogen. Tiny little arrows give numbers for potato, wheat, and rice (which are on the whole a lot smaller, except the rice’s use of water). You can explore the Supplementary Information too, including the spreadsheet they used.

Obviously peer review is not perfect (it is merely evidence that a couple of reviewers ran out of reasons to delay its publication), and there are caveats. This study covers only US beef. What about Europe? What about ostriches? What about food miles? But I think you would be foolish to think that these other matters would affect the central conclusion: eating beef uses a lot of resources.

