Ideal Conditions

Earlier this month it was announced that the sport scientist turned diet guru Tim Noakes is in talks with Derek Carstens, former First Rand executive and now Karoo farmer, about improving the diets of farm workers. The Cape Times reported:

Once the project begins, the families on the farm will be monitored for five to 10 years. With a diet high in offal – which is readily available in the farmlands of the Karoo – the families will stop consuming carbohydrates, which Noakes says are of no benefit to the human body.

‘This is an ideal set-up,’ said Noakes. ‘And it would be much harder to do research of this nature in a place like Cape Town.’

Since the emergence of nutrition as a field of scientific enquiry in the early twentieth century, the poor, the hungry, and the socially and politically disenfranchised have often been the subjects of research into diet and malnutrition. Last year, University of Guelph-based food historian Ian Mosby published evidence that during the 1940s and 1950s, scientists working for the Canadian government conducted a series of experiments on malnourished residents of rural Aboriginal communities and residential schools.

Rural impoverishment in the 1930s – brought about by the decline in the fur trade and cuts to government provision of poor relief – meant that First Nations people struggled to find enough to eat. They could not, in other words, afford to eat, and this knowledge informed the advice they provided to researchers for eradicating malnutrition. Mosby writes:

Representatives of the various First Nations visited by the research team proposed a number of practical suggestions for ending the hunger and malnutrition in their communities. In addition to more generous relief during times of extreme hardship, these included increased rations for the old and destitute, timber reserves to be set aside for the building and repairing of houses, and additional fur conservation efforts by the federal government, as well as a request that they be given fishing reserves ‘so that they could get fish both for themselves and for dog feed, free from competition with the large commercial fisheries.’

However, researchers decided to set up an experiment in which First Nations peoples were provided with vitamin supplements to gauge their relative effectiveness in combating the effects of hunger. Crucially, researchers were well aware that ‘vitamin deficiencies constituted just one among many nutritional problems.’ In fact, they calculated that the average diet in these communities provided only 1,470 calories per person during much of the year. First Nations people needed food supplies, not vitamin supplements. Mosby concludes:

The experiment therefore seems to have been driven, at least in part, by the nutrition experts’ desire to test their theories on a ready-made ‘laboratory’ populated with already malnourished human ‘experimental subjects.’

In other areas, researchers regulated what kinds of food Aboriginals could purchase with their welfare grants (the Family Allowance):

These included canned tomatoes (or grapefruit juice), rolled oats, Pablum [baby food], pork luncheon meat (such as Spork, Klick, or Prem), dried prunes or apricots, and cheese or canned butter.

This experiment was also an attempt to persuade First Nations people to choose ‘country’ over ‘store’ foods. They were to hunt and to gather instead of relying on shops. To these ends, some officials tried to prevent some families from buying flour:

In Great Whale River, the consequence of this policy during late 1949 and early 1950 was that many Inuit families were forced to go on their annual winter hunt with insufficient flour to last for the entire season. Within a few months, some went hungry and were forced to resort to eating their sled dogs and boiled seal skin.

Perhaps unsurprisingly, there is little or no evidence to suggest that the subjects of these research projects consented to being part of them.

In South Africa, anxiety about the productivity of mine workers in the 1930s drove the publication of a series of reports into the health of the African population. Diana Wylie explains:

The Chamber of Mines in particular was alarmed at the 19 per cent rejection rate for Transkei mine recruits. Some of the researchers urged the government to concern itself with nutritional diseases ‘as an economic problem of first importance in which not merely the health but the financial interests of the dominant races are concerned.’ Another warned, ‘unless a proper food supply is assured, our biggest asset in the Union, next to the gold itself, our labour supply, will fail us in the years to come.’

In response to these findings, mining companies introduced supplements to miners’ diets to combat scurvy and generally boost immune systems. They did not, obviously, address the causes of miners’ ill health and poor diets – which were partly the impoverishment of rural areas and the system of migrant labour.

Mine workers in Kimberley.

The Canadian experiments and South African research projects were produced by a similar set of concerns: an interest in civilising indigenous people, but also because, in the case of Canada, ‘it [was their] belief that the Indian [sic] can become an economic asset to the nation.’ Africans, likewise, needed to be well fed and kept healthy for the benefit of the South African state.

Noakes is correct when he says that conducting the research he proposes to do on rural farm workers would be almost impossible in a city. Although he insists that he will seek ethics approval, I wonder how he and other researchers will go about winning the informed consent of a group of people who are dependent on their employer – Noakes’s collaborator – for their livelihoods, and who have, historically, very low levels of education.

Also, Noakes seems to believe that carbohydrates alone are at the root of farm labourers’ poor diets. As the First Nations people referred to above argued, malnutrition is caused by an inability to access good, nutritious food – usually because of low wages. Instead of feeding Carstens’s employees offal, it might be worth considering how much they are paid, and how easy it is for them to afford transport to shops selling healthy food.

Noakes argues that ‘We can’t build this nation in the absence of sufficient protein and fat.’ To what extent is this project purely for the benefit of Karoo farm workers? And to what extent is it an attempt to prove a controversial theory proposed by a prominent researcher?

Sources

Ian Mosby, ‘Administering Colonial Science: Nutrition Research and Human Biomedical Experimentation in Aboriginal Communities and Residential Schools, 1942–1952,’ Histoire Sociale/Social History, vol. 46, no. 91 (May 2013), pp. 145-172.

Diana Wylie, ‘The Changing Face of Hunger in Southern African History, 1880-1980,’ Past and Present, no. 122 (Feb. 1989), pp. 159-199.

Creative Commons License
Tangerine and Cinnamon by Sarah Duff is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Green Revolutions

Recently, there’s been a lot of debate generated by a study done by a research team at the University of Caen in France. Last month, they published a paper in the peer-reviewed journal Food and Chemical Toxicology, in which they alleged that rats fed Monsanto’s genetically modified maize and exposed to the herbicide Roundup – also produced by Monsanto – over the course of a lifetime, developed tumours and suffered multiple organ damage.

Terrible photographs of some alarmingly lumpy rats circulated around the internet, and it seemed that the green movement’s vociferous opposition to GM crops was vindicated. But almost as soon as the study’s findings were announced, doubts – around the validity of the research itself and the way it had been communicated – began to emerge.

Not only have similar, more rigorous tests demonstrated that GM crops have no impact on health, but, as the New Scientist reported:

the strain of rat the French team used gets breast tumours easily, especially when given unlimited food, or maize contaminated by a common fungus that causes hormone imbalance, or just allowed to age.

Moreover:

Five of the 20 control rats – 25 per cent – got tumours and died, while 60 per cent in ‘some test groups’ that ate GM maize died. Some other test groups, however, were healthier than the controls.

…the team claims to see the same toxic effects both with actual Roundup, and with the GM maize – whether or not the maize contained any actual herbicide. It is hard to imagine any way in which a herbicide could have identical toxic effects to a gene tweak that gives the maize a gene for an enzyme that actually destroys the herbicide.

This research isn’t entirely without value: it could suggest that even the smallest dose of weed killer or GM maize has the potential to cause physiological harm.

But even this conclusion is undermined by the circumstances in which the study was produced. The research team at Caen is open about its opposition to GM crops, and the anti-GM organisation which orchestrated the publicity around the release of the report refused to allow journalists to consult other scientists about the paper.

Just as we’re right to be suspicious of studies undertaken by scientists affiliated with industry – the implications of which Ben Goldacre explores in his latest book on Big Pharma – so we must question the motives, however noble they may be, of this research team funded by anti-GM groups.

What I found so interesting about the response to the study was the vehemence of the anti-GM crop lobby. Like the debates around nuclear energy and, even, animal testing, it seems to me that the strength of feeling – on both sides – has a tendency to shut down all reasonable discussion. I was appalled when, earlier this year, a group of anti-GM activists threatened to destroy a field of GM wheat planted by scientists at the publicly-funded Rothamsted Research. Their work aimed partly to reduce pesticides sprayed on crops.

On the other hand, though, pro-GM scientists, economists, and others seem to be too quick to label those with – legitimate – concerns about the genetic modification of plants and animals as ‘anti-science.’ In an article from 2000, Norman Borlaug argued:

Extremists in the environmental movement, largely from rich nations and/or the privileged strata of society in poor nations, seem to be doing everything they can to stop scientific progress in its tracks. It is sad that some scientists, many of whom should or do know better, have also jumped on the extremist environmental bandwagon in search of research funds. …

We all owe a debt of gratitude to the environmental movement that has taken place over the past 40 years. This movement has led to legislation to improve air and water quality, protect wildlife, control the disposal of toxic wastes, protect the soils, and reduce the loss of biodiversity. It is ironic, therefore, that the platform of the antibiotechnology extremists, if it were to be adopted, would have grievous consequences for both the environment and humanity.

His point is that GM crops have the potential to end world hunger. As the Nobel Peace Prize winner credited with originating the Green Revolution during the 1950s and 1960s, Borlaug was in a position to argue – with some validity – that selective plant breeding had helped to feed a world population that now stands at seven billion.

In 1943, concerned about the link between food shortages and political upheaval – particularly as the Cold War loomed – the Rockefeller Foundation began sponsoring research into the development of new drought-resistant and higher yielding plant species in Mexico.

Focussing on wheat, maize, and rice, Borlaug and other scientists affiliated with the programme cross-bred higher-yielding species. These new seeds were distributed at first in Mexico, India, and the Philippines. It’s difficult to overestimate the impact of this research, as Gordon Conway explains:

Cereal yields, total cereal production and total food production in the developing countries all more than doubled between 1960 and 1985. Over the same period their population grew by about 75 per cent. As a result, the average daily calorie supply in the developing countries increased by a quarter, from under 2,000 calories per person in the early 1960s to about 2,500 in the mid-80s, of which 1,500 was provided by cereals.

The Green Revolution has made it possible to feed a population of seven billion people. But it had substantial drawbacks. Conway writes that the ‘potential’ of the Green Revolution crops

could only be realised if they were supplied with high quantities of fertiliser and provided with optimal supplies of water. As was soon apparent, the new varieties yielded better than the traditional at any level of fertiliser application, although without fertiliser they sometimes did worse on poor soils. Not surprisingly, average rates of application of nitrogen fertilisers, mostly ammonium sulphate and urea, doubled and redoubled over a very short period.

We know now that we need a new Green Revolution – one which is not as heavily reliant on water, and which does not poison and destroy ecosystems. There’s a certain logic, then, to many activists’ arguments that it’s ‘science’ which is to blame for present food insecurity: that a return to small-scale peasant farming offers the best means of supplying food to an ever-growing population.

This suspicion of ‘science’ – whatever we may mean by this – is nothing new. During the 1970s, for instance, the green movement emerged partly in response to concerns about the implications of the Green Revolution for human health, biodiversity, and water supplies. Much of this early environmentalism advocated a return to nature, and a rejection of technology.

I haven’t made up my mind about the usefulness or otherwise of GM crops, but I hesitate over the whole-hearted embrace of ‘traditional’ methods of farming. It’s worth remembering that pre-industrial agriculture required the majority of the world’s population to be involved in food production in order to stave off hunger. Now, in developed nations, this number has plummeted to only a couple of per cent. In sub-Saharan Africa, seventy per cent of the population remains engaged in agriculture, although this is also likely to decline.

Better technology and higher-yielding plant varieties have freed up the majority of the world’s population to do other forms of work. The world has changed a great deal since the eighteenth century.

What concerns me more, though, are the businesses which push GM crops – those which are at the receiving end of European and African bans on the planting of genetically modified wheat, maize, and other plants. Monsanto and Cargill are currently the target of a campaign to end the patenting of seeds – making them cheaper and more freely available to small farmers in the developing world.

These two companies, in particular, have a growing control over the world’s food supply. Not only do they own seed patents, but they provide pesticides and fertilisers. Cargill produces meat and grows grain – in fact, no one knows how much grain it has stored in its silos. Given that Cargill and the commodities trader Glencore have both admitted that their profits have increased as a result of the drought in the US and the resultant rise in food prices around the world, it’s exceptionally worrying that these organisations have so much control over our food chain.

What the GM debate reveals is a set of complex and shifting attitudes around the relationship between food, farming, and science – and around how we define what is ‘natural’. Instead of rejecting the potential benefits of GM crops out of hand, I think it would be wise to encourage more research into their implications both for human health, and for the environment. Moreover, I think we need to scrutinise and hold to account big businesses like Monsanto, Glencore, and Cargill. They represent a far greater threat to our ability to feed ourselves.

Further Reading

Norman Borlaug, ‘Ending World Hunger: The Promise of Biotechnology and the Threat of Antiscience Zealotry,’ Plant Physiology, vol. 124 (Oct. 2000), pp. 487-490.

Gordon Conway, The Doubly Green Revolution (London: Penguin, 1997).

Joseph Cotter, Troubled Harvest: Agronomy and Revolution in Mexico, 1880-2002 (New York: Praeger, 2003).

John H. Perkins, Geopolitics and the Green Revolution: Wheat, Genes, and the Cold War (New York: Oxford University Press, 1997).

Himmat Singh, Green Revolutions Reconsidered: The Rural World of Contemporary Punjab (New Delhi: Oxford University Press, 2001).


A Sporting Chance

My expectations of the London Olympics’ opening ceremony were so low that, I suppose, I would have been impressed if it had featured Boris as Boudicca, driving a chariot over the prostrate figures of the Locog committee. (Actually, now that I think about it, that would have been fairly entertaining.)

Appalled by the organising committee’s slavishly sycophantic attitude towards its sponsors and their ‘rights’ – which caused them to ban home-knitted cushions from being distributed to the Olympic athletes, and to require shops and restaurants to remove Olympic-themed decorations and products – as well as the rule that online articles and blog posts may not link to the official 2012 site if they’re critical of the games, the decision to make the official entrance of the Olympic site a shopping mall, and the creation of special lanes for VIP traffic, I wasn’t terribly impressed by the London Olympics.

But watching the opening ceremony last night, I was reduced to a pile of NHS-adoring, Tim Berners-Lee worshipping, British children’s literature-loving goo. Although a reference to the British Empire – other than the arrival of the Windrush – would have been nice, I think that Danny Boyle’s narrative of British history which emphasised the nation’s industrial heritage, its protest and trade union movements, and its pop culture, was fantastic.

As some commentators have noted, this was the opposite of the kind of kings-and-queens-and-great-men history curriculum which Michael Gove wishes schools would teach. Oh and the parachuting Queen and Daniel Craig were pretty damn amazing too.

There was even a fleeting, joking reference to the dire quality of British food during the third part of the ceremony. There was something both apt, but also deeply ironic about this. On the one hand, there has been extensive coverage of Locog’s ludicrous decision to allow manufacturers of junk food – Coke, Cadbury’s, McDonald’s – not only to be official sponsors of a sporting event, but to provide much of the catering. (McDonald’s even tried to ban other suppliers from selling chips on the Olympic site.)

But, on the other, Britain’s food scene has never been in better shape. It has excellent restaurants – and not only at the top end of the scale – and thriving and wonderful farmers’ markets and street food.

It’s this which makes the decision not to open up the catering of the event to London’s food trucks, restaurants, and caterers so tragic. It is true that meals for the athletes and officials staying in the Village have been locally sourced and made from ethically-produced ingredients, and this is really great. But why the rules and regulations which actually make it more difficult for fans and spectators to buy – or bring their own – healthy food?

Of course, the athletes themselves will all be eating carefully calibrated, optimally nutritious food. There’s been a lot of coverage of the difficulties of catering for so many people who eat such a variety of different things. The idea that athletes’ performance is enhanced by what they consume – supplements, food, and drugs (unfortunately) – has become commonplace.

Even my local gym’s café – an outpost of the Kauai health food chain – serves meals which are, apparently, suited for physically active people. I’ve never tried them, partly because the thought of me as an athlete is so utterly nuts. (I’m an enthusiastic, yet deeply appalling, swimmer.)

The notion that food and performance are linked in some way has a long pedigree. In Ancient Greece, where diets were largely vegetarian, supplemented occasionally with (usually goat) meat, evidence suggests that athletes at the early Olympics consumed more meat than usual to improve their performance. Ann C. Grandjean explains:

Perhaps the best accounts of athletic diet to survive from antiquity, however, relate to Milo of Croton, a wrestler whose feats of strength became legendary. He was an outstanding figure in the history of Greek athletics and won the wrestling event at five successive Olympics from 532 to 516 B.C. According to Athenaeus and Pausanius, his diet was 9 kg (20 pounds) of meat, 9 kg (20 pounds) of bread and 8.5 L (18 pints) of wine a day. The validity of these reports from antiquity, however, must be suspect. Although Milo was clearly a powerful, large man who possessed a prodigious appetite, basic estimations reveal that if he trained on such a volume of food, Milo would have consumed approximately 57,000 kcal (238,500 kJ) per day.

Eating more protein – although perhaps not quite as much as reported by Milo of Croton’s fans – helps to build muscle, and would have given athletes an advantage over other, leaner competitors.
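For the numerically inclined, Grandjean’s scepticism about the Milo figures is easy to reproduce with some back-of-the-envelope arithmetic. The energy densities below are my own rough assumptions, not figures from her paper, but even so the total lands in the same implausible ballpark as her 57,000 kcal estimate:

```python
# Rough sanity check of the Milo of Croton diet quoted above.
# Energy densities are ballpark assumptions, not from Grandjean's paper.
KCAL_PER_KG_MEAT = 2500   # cooked, fairly fatty meat
KCAL_PER_KG_BREAD = 2600  # dense wheat bread
KCAL_PER_L_WINE = 850     # undiluted wine

daily_kcal = (9 * KCAL_PER_KG_MEAT      # 9 kg of meat
              + 9 * KCAL_PER_KG_BREAD   # 9 kg of bread
              + 8.5 * KCAL_PER_L_WINE)  # 8.5 L of wine
print(round(daily_kcal))  # prints 53125 - the same order as Grandjean's 57,000
```

No training regimen burns anything like that much energy in a day, which is why Grandjean treats the ancient reports as suspect.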

Another ancient dietary supplement seems to have been alcohol. Trainers provided their athletes with alcoholic drinks before and after training – in much the same way that contemporary athletes may consume sports drinks. But some, more recent sportsmen seem to have gone a little overboard, as Grandjean notes:

as recently as the 1908 Olympics, marathon runners drank cognac to enhance performance, and at least one German 100-km walker reportedly consumed 22 glasses of beer and half a bottle of wine during competition.

Drunken, German walker: I salute you and your ability to walk in a straight line after that much beer.

The London Olympic Village is, though, dry. Even its pub only serves soft drinks. With the coming of the modern games – which coincided with the development of sport and exercise science in the early twentieth century – diets became the subject of scientific enquiry. The professionalisation of sport – with athletes more reliant on doing well in order to make a living – only served to increase the significance of this research.

One of the first studies on the link between nutrition and the performance of Olympic athletes was conducted at the 1952 games in Helsinki. The scientist E. Jokl (about whom I know nothing – any help gratefully received) demonstrated that those athletes who consumed fewer carbohydrates tended to do worse than those who ate more. Grandjean comments:

His findings may have been the genesis of the oft-repeated statement that the only nutritional difference between athletes and nonathletes is the need for increased energy intake. Current knowledge of sports nutrition, however, would indicate a more complex relationship.

As research into athletes’ diets has progressed, so fashions for particular supplements and foods have emerged over the course of the twentieth century. Increasing consumption of protein and carbohydrates has become a common way of improving performance. Whereas during the 1950s and 1960s, athletes simply ate more meat, milk, bread, and pasta, since the 1970s, a growing selection of supplements has allowed sportsmen and -women to add more carefully calibrated and targeted forms of protein and carbohydrates to their diets.

Similarly, vitamin supplements have been part of athletes’ diets since the 1930s. Evidence from athletes competing at the 1972 games in Munich demonstrated widespread use of multivitamins, although now, participants tend to choose more carefully those vitamins which produce specific outcomes.

But this history of shifting ideas around athletes’ diets cannot be understood separately from the altogether more shadowy history of doping – of using illicit means of improving one’s performance. Even the ancient Greeks and Romans used stimulants – ranging from dried figs to animal testes – to suppress fatigue and boost performance.

More recently, some of the first examples of doping during the nineteenth century come from cycling (nice to see that some things don’t change), and, more specifically, from long-distance, week-long bicycle races which depended on cyclists’ reserves of strength and stamina. Richard IG Holt, Ioulietta Erotokritou-Mulligan, and Peter H. Sönksen explain:

A variety of performance enhancing mixtures were tried; there are reports of the French using mixtures with caffeine bases, the Belgians using sugar cubes dripped in ether, and others using alcohol-containing cordials, while the sprinters specialised in the use of nitroglycerine. As the race progressed, the athletes increased the amounts of strychnine and cocaine added to their caffeine mixtures. It is perhaps unsurprising that the first doping fatality occurred during such an event, when Arthur Linton, an English cyclist who is alleged to have overdosed on ‘tri-methyl’ (thought to be a compound containing either caffeine or ether), died in 1886 during a 600 km race between Bordeaux and Paris.

Before the introduction of doping regulations, the use of performance enhancing drugs was rife at the modern Olympics:

In 1904, Thomas Hicks, winner of the marathon, took strychnine and brandy several times during the race. At the Los Angeles Olympic Games in 1932, Japanese swimmers were said to be ‘pumped full of oxygen’. Anabolic steroids were referred to by the then editor of Track and Field News in 1969 as the ‘breakfast of champions’.

But regulation – the first anti-drugs tests were undertaken at the 1968 Mexico games – didn’t stop athletes from doping – the practice simply went underground. The USSR and East Germany allowed their representatives to take performance enhancing drugs, and an investigation undertaken after Ben Johnson was disqualified for doping at the Seoul games revealed that at least half of the athletes who competed at the 1988 Olympics had taken anabolic steroids. In 1996, some athletes called the summer Olympics in Atlanta the ‘Growth Hormone Games’ and the 2000 Olympics were dubbed the ‘Dirty Games’ after the disqualification of Marion Jones for doping.

At the heart of the issue of doping and the use of supplements, is distinguishing between legitimate and illegitimate means of enhancing performance. The idea that taking drugs to make athletes run, swim, or cycle faster, or jump further and higher, is unfair, is a relatively recent one. It’s worth noting that the World Anti-Doping Agency, which is responsible for establishing and maintaining standards for anti-doping work, was formed only in 1999.

What makes anabolic steroids different from consuming high doses of protein, amino acids, or vitamins? Why, indeed, was Caster Semenya deemed to have an unfair advantage at the 2009 IAAF World Championships, but the blade-running Oscar Pistorius is not?

I’m really pleased that both Semenya and Pistorius are participating in the 2012 games – I’m immensely proud that Semenya carried South Africa’s flag into the Olympic stadium – but their experiences, as well as the closely intertwined histories of food supplements and doping in sport, demonstrate that the idea of an ‘unfair advantage’ is a fairly nebulous one.

Further Reading

Elizabeth A. Applegate and Louis E. Grivetti, ‘Search for the Competitive Edge: A History of Dietary Fads and Supplements,’ The Journal of Nutrition, vol. 127, no. 5 (1997), pp. 869S-873S.

Ann C. Grandjean, ‘Diets of Elite Athletes: Has the Discipline of Sports Nutrition Made an Impact?’ The Journal of Nutrition, vol. 127, no. 5 (1997), pp. 874S-877S.

Richard IG Holt, Ioulietta Erotokritou-Mulligan, and Peter H. Sönksen, ‘The History of Doping and Growth Hormone Abuse in Sport,’ Growth Hormone & IGF Research, vol. 19 (2009), pp. 320-326.


Quackers

Patient and loyal readers – immense apologies for the absence of this week’s blog post. I have just emerged from this semester’s marking hell, so normal service will resume this weekend. (For colleagues currently contemplating the point of their existence and wondering why they didn’t become professional tennis players, I give you this, and this. They helped immeasurably.)

This is, then, just a short post to point you in the direction of a recent ruling by South Africa’s Advertising Standards Authority. Last year, the respected NGO Equal Education laid an official complaint with the ASA about radio advertisements for a nutritional supplement called Smart Kids Brain Boost developed by quack nutritionist Patrick Holford. In the ads, Holford claimed that the product would improve ‘mental vitality’ (whatever that is) and better children’s performance at school.

As a submission from Harris Steinman demonstrates – in exhaustive detail – Holford’s claims are based on a clutch of peer-reviewed articles (good) whose research is outdated (not good) and which occasionally contradict him (really bad). This is not the first time that the ASA has ruled against Holford – an earlier complaint lodged by Steinman (who’s a real doctor) against the Mood Food nutritional supplement was upheld. Steinman proved that it was unlikely that Holford’s pills would make people feel happier or more motivated.

This most recent decision by the ASA pleases me enormously. Not only does it strike a blow against the nutrition industry which peddles the misinformation that all people need to take supplements in order to be healthy and happy, but it prevents a very wealthy man from benefitting from parents’ credulity. South Africa’s education system is dysfunctional, and it is likely that pupils’ poor performance is linked partly to bad diets. But these diets will not be improved by taking magic tablets. Only by alleviating poverty, and ensuring that parents are able to afford to buy the fruit, vegetables, protein, and carbohydrates that constitute healthy diets, will children’s performance at school improve.

This post owes a great deal to the excellent work done by the magnificently-named Quackdown! Do check it out.

Brave New Food

The TV series which I most want to watch at the moment is Portlandia. Set in Portland, Oregon, it satirises and celebrates the city which originated the ur-hipster. It includes a scene in a restaurant – which I’ve watched only on YouTube, alas – in which a couple questions their waitress about the provenance of the chicken on the menu. Assured that it’s free range, local, and organic – partly because their waitress provides them with its papers and name – they leave the restaurant to have a look at it:

This is hilarious because it so closely mimics reality: the menus which list the provenance of all the produce used in the restaurant; the farmers’ market stalls with photographs of happy animals pre-slaughter; the recipes which insist upon free-range, organic ingredients.

I laugh, but I’m as implicated in this hyper-sensitivity about where my food comes from, and how it was treated before it arrived on my plate. I don’t want to eat animals that suffered so that I can continue being an omnivore. I eat relatively little meat and am prepared to pay for free-range chicken, pork, and beef. (I’m not terribly fussed about it being ‘organic’ – whatever we may mean by that.)

It is a scandal how animals are treated in factory farms, and increasing demand for red meat is environmentally unsustainable. So how should we eat meat without causing harm? If vegetarianism is itself implicated in the meat economy – veal is a by-product of the dairy industry, for example – and veganism seems far too difficult, then one way out of this impasse is to consider synthetic alternatives.

I’ve been amused by the overwhelming response to reports about the apparent viability of lab-grown meat. ‘Eeew’ and ‘yuk’ seem to sum up how people feel about it. But lab-grown meat is only the most recent scientific panacea for the world’s food crisis – and our views on it say a great deal about our changing feelings about the relationship between food and technology.

The meat in question is being grown by Dr Mark Post at Maastricht University. He’s being funded by an anonymous donor who’s concerned about the greenhouse gas emissions produced by cattle farming. Using stem cells from cows, Post’s team have grown sheets of muscle between pieces of Velcro, which are shocked with an electric current to develop their texture and density:

Post said he could theoretically increase the number of burgers made from a single cow from 100 to 100m. ‘That means we could reduce the number of livestock we use by 1m,’ he said.

Meat grown in the laboratory could have several advantages, because its manufacture is controlled at each step. The tissue could be grown to produce high levels of healthy polyunsaturated fatty acids, or to have a particular texture.

He believes it will be a relatively simple matter to scale up the operation, since most of the technical obstacles have already been overcome. ‘I’d estimate that we could see mass production in another 10 to 20 years,’ he said.

Post hopes to produce a burger by October.

When I read the earliest reports about Post’s work, I thought immediately of a scene in Margaret Atwood’s Oryx and Crake, where the protagonist visits a lab which grows chicken breasts out of stem cells. This is a dystopian novel which plays on our suspicion of food grown in laboratories. It seems strange, now, for us to consider synthetic, artificial, man-made food to be superior to all that is ‘fresh’, ‘natural’ and ‘authentic’. But this is a relatively new way of thinking about food.

During the 1950s, a decade when science seemed to offer the possibility of a cleaner, healthier, and better organised world, there was a brief, but intense enthusiasm for Chlorella pyrenoidosa, a high-protein algae which grew rapidly and abundantly and was fed by sunlight and carbon dioxide.

The post-war baby boom gave rise to anxieties in the 1950s that the world would be unable to feed its growing population. Of course, we now know that innovations in agriculture during this period – including the wholesale mechanisation of farming, the increased use of pesticides, hormones, and antibiotics, and breeding high-yielding livestock – and the Green Revolution of the 1960s and 1970s produced the crops and farming methods which, at enormous environmental cost, still feed seven billion of us. But at the time, politicians worried that hungry nations would create a politically unstable world.

Algae looked like a sensible solution to the problem. Easy and cheap to grow, and apparently highly nutritious, this seemed to be the Brave New World of food production. Warren Belasco writes:

The alluring news came from pilot projects sponsored by the Carnegie Institution and conducted by the Stanford Research Institute in Menlo Park and by Arthur D. Little, Inc. in Cambridge. Initial results suggested that chlorella algae was an astounding photosynthetic superstar. When grown in optimal conditions – sunny, warm, shallow ponds fed by simple carbon dioxide – chlorella converted upwards of 20 per cent of solar energy…into a plant containing 50 per cent protein when dried. Unlike most plants, chlorella’s protein was ‘complete’, for it had the ten amino acids then considered essential, and it was also packed with calories, fat, and vitamins.

In today’s terms, chlorella was a superfood. Scientists fell over themselves in excitement: Scientific American and Science reported on it in glowing terms; the Rockefeller Foundation funded research into it; and some calculated that a plantation the size of Rhode Island would be able to supply half the world’s daily protein requirements.

In the context of a mid-century enthusiasm for all that was efficient, systematic, and man-made, algae’s appeal was immediate: it was entirely usable and produced little or no waste; its farming was not dependent on variable weather and rainfall; it was clean and could be transformed into something that was optimally nutritious.

So why didn’t I have a chlorella burrito for supper?

Unfortunately, chlorella didn’t live up to the hype. Not only did the production of grains and soybeans increase exponentially during the 1950s, meaning that farmers were loath to switch to a new and untested crop, but further research revealed that chlorella production would be more complicated and expensive than initially envisaged. Growing chlorella in the quantities needed to be financially viable required expensive equipment, and it proved to be susceptible to changes in temperature. Harvesting and drying it was even more of a headache.

On top of this, chlorella tasted terrible. There were some hopes that the American food industry might be able to transform bitter green chlorella into an enticing foodstuff – in much the same way they used additives and preservatives to manufacture the range of processed foods which bedecked the groaning supermarket shelves of 1950s America. Edible chlorella was not a world away from primula cheese.

Those who were less impressed by the food industry suggested that chlorella could be used to fortify bread and pasta – or even transformed into animal feed. But research demonstrated that heating chlorella destroyed most of its nutrients. Even one of its supporters called it ‘a nasty little green vegetable.’ By the 1960s, it was obvious that at $1,000 a ton, and inedible, chlorella was not going to be the food of the future.

All was not lost for chlorella, though. It proved to be surprisingly popular in Japan, where it is still sold as a nutritional supplement. The West’s enthusiasm for algae also hasn’t dimmed:

The discovery in the 1960s of the blue-green algae spirulina in the Saharan Lake Chad and in Mexico’s Lake Texcoco gave another boost to the health food uses of algae. Spirulina has a high-nutrient profile similar to chlorella’s but without…production problems….

Ironically, the food that was supposed to feed the world is now the preserve of the wealthy, health-conscious middle classes – those who suffer most from the diseases of affluence – who can afford to buy small jars of powdered algae.

I hope that Post’s project manages to create a viable product which can be used to supplement people’s diets. I’m not particularly revolted by the idea of lab-grown meat, and if it means fewer factory farms, then that can only be a good thing.

What concerns me more are the potential motives of the businesses which would produce lab-grown meat. If it is taken up by the global food industry – which has a patchy record on environmental sustainability and social responsibility – will we be able to trust it to provide us with meat which is healthy for us, and ethically produced?

Source

Warren Belasco, Meals to Come: A History of the Future of Food (Berkeley: University of California Press, 2006).

Creative Commons License Tangerine and Cinnamon by Sarah Duff is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Food Links, 08.02.2012

The World Food Programme spends £50 million on wheat from Glencore – a business which admits that it engages in food speculation – despite the WFP’s commitment to buying its supplies from small farmers. But was Glencore the best option?

Mali faces a food crisis.

The future of food production – in Antarctica.

A new way for drug cartels to launder money: the fruit and vegetable trade.

An account of recent Egyptian history, from the point of view of Cairo’s Cafe Riche.

Commodities futures trading and market volatility – and the impact on food prices.

The link between political instability and food prices in Egypt.

Was the global food crisis really a crisis?

Early twentieth-century corsetry ads.

How to cook without a recipe.

The rise and rise of Belgian beer.

The strange appetites of Steve Jobs.

Jennifer Rubell’s food art.

Rethinking butter.

Iconic album covers recreated as pizzas.

The relative usefulness of poisonous food.

What do food writers eat when they write about food?

This is fantastic: They Draw and Cook is a collection of recipes illustrated by artists from around the world.

Last meals on death row.

The science of pickles.

An Ode to Pepper Vinegar.

Vegan foie gras. (Just no on so many levels.)

Pablo Neruda on soup.

So what exactly is Mexican street food?

Urban farming essentials.

Obesity soap.

Temptations of the Flesh

I’ve had an explosively sneezy cold this week, but with bed rest and painkillers to help me sleep, I’m almost well again. (Unfortunately, my Head of Department remains unconvinced by my theory that I’ve been suffering from a bad allergy to undergraduate lecturing.) I really don’t see the point of taking anti-cold medication. It certainly won’t get rid of the bug, and the only time I’ve ever taken tablets for a cold – just before a long flight home from Paris – I hallucinated so badly that I thought it best never to repeat the experience. Taking it easy, avoiding dehydration, and being generally sensible seem to work every time. I’ve also had a range of advice about what I should eat: vitamin C supplements, garlic, zinc, lemon, and ginger. I’ve managed to consume nearly all of these over the past few days (although not at the same time), and – who knows? – maybe they made a difference.

We know that our diet influences our health. We know that the better we eat, the stronger our immune systems are and the longer we’ll live. It’s for this reason that many seem to believe that it’s possible to eat ourselves well: that we can both prevent and cure illnesses by eating some things, and avoiding others. I was struck forcibly by the strength of this thinking when I saw that Gwyneth Paltrow wrote a recipe book partly because she believed that her father’s eating habits caused the cancer which killed him. No, I am not completely mad, and, yes, I do realise that, at best, Paltrow can be described as a ray of ‘demented sunshine’, but this is an enormously popular and influential woman who really does think that had her father eaten more brown rice, he wouldn’t have had cancer – or, at least, wouldn’t have died from it.

There’s a logic to this thinking: if we eat pure, wholesome food, then, surely, we should be healthy and strong. The problem is that it’s difficult to define what is ‘pure’, ‘wholesome’, and ‘good’ food. However much nutritionists may dress up their work as ‘science’, we don’t know precisely what diet is best for our health. In the past few weeks new studies have demonstrated that drinking eight glasses of water and eating five portions of fruit and vegetables per day…will have very little effect on us at all. Oh, and vitamin supplements and probiotics are of dubious value too. It’s certain that we should eat plenty of fruit and vegetables and lessen our intake of red meat and saturated fat, but everything else remains guesswork. That study about Omega 3 supplements and children’s brains? It was nonsense. As is the advice spouted by Patrick Holford. So, no, drinking green tea and eating mung beans and quinoa will not stave off cancer. (Sorry.) The amazing people at Information is Beautiful have provided a helpful visualisation of the relative benefits of dietary supplements (see here for a bigger and pleasingly animated version):


Our ideas around healthy diets have changed over time, and are inflected by a range of factors, including current debates in science and medicine, the interests of industry and food lobbies, and religious belief. In his magnificent study Flesh in the Age of Reason: How the Enlightenment Transformed the Way We See Our Bodies and Souls (2003), Roy Porter traces a shift in thinking about health and eating during the mid-eighteenth century. He argues that during the early modern period, stoutness and eating heartily – if not in excess – were seen as signs of good health. In Britain, a taste for roast beef was also connected to support for an incipient national ‘English’ consciousness.

But from the 1750s onwards, physical beauty was associated more frequently with slimness. (Compare, for example, portraits by Rubens and Constable.) Enlightenment bodies needed also to be fed in restrained, rational ways. One of the most popular prophets of the new eating orthodoxy was the physician George Cheyne (1673-1743) who based his views on plain, wholesome eating on his own experience of being morbidly obese. In The English Malady (1733) he argued that ‘corpulence produced derangements of the digestive and nervous systems which impaired not only health but mental stability. … Excess of the flesh bred infirmities of the mind.’ Porter explains:

Cheyne’s call to medical moderation was, however, also an expression of a mystical Christian Platonism trained at the emancipation of the spirit – he can thus be thought of as recasting traditional Christian bodily anxieties into physiological and medical idioms. For Cheyne, the flesh was indeed the spirit’s prison house. Excessive flesh encumbered the spirit; burning it off emancipated it.

Following the teachings of the German mystic Jakob Boehme, he imagined prelapsarian bodies innocently feeding on ‘Paradisiacal Fruits’. After the Fall, the flesh of the newly carnivorous humans had been subjected to the laws of the corruption of matter. …his works aimed at recovering the purity of the prelapsarian body.

Cheyne recommended a vegetarian diet on the grounds that it most closely resembled that eaten in the Garden of Eden. It was, in other words, the diet of spiritual perfection. Much of the success of his writing was due also to the rise of a vegetarian movement in Europe during the eighteenth century. These Enlightenment vegetarians argued that it was cruel to slaughter animals merely for food, and also believed that ‘greens, milk, seeds and water would temper the appetite and produce a better disciplined individual.’

There has long been an association between corpulence and moral or spiritual laxity, and thinness with (self-) discipline. But what Cheyne advocated went further than this: he argued that rational individuals were partly responsible for their own ill-health because they could choose what they ate. Moreover, because he connected eating meat with sinfulness, deciding what to eat was also a moral choice.

Cheyne’s thinking proved to be remarkably durable. In the late nineteenth century, left-leaning social reformers promoted vegetarianism as the best example of ethical consumerism. Vegetarianism was healthy and it did not – they believed – cause the needless sacrifice of animals (although they didn’t address what happened to the bull calves and billy goats produced by lactating cows and nanny goats). In Sheila Rowbotham’s magnificent biography of the immensely influential socialist writer Edward Carpenter (1844-1929), she describes how Carpenter’s dictum of simple living took hold among the members of the Fellowship of the New Life, the forerunner of the Fabian Society. Carpenter argued for simple clothing, simple houses, and simple food:

Carpenter combined his evangelical call for a new lifestyle with an alternative moral economy. This recycled, self-sufficient praxis involved growing your own vegetables, keeping hens and using local not imported grain – American produce was forcing down British farmers’ prices.

But this met with some resistance. The physician and social reformer Havelock Ellis

protested against Carpenter’s advocacy of vegetarianism on the grounds that meat was a ‘stimulant’. Ellis wanted to know why meat? Why not potatoes? Was not all food a stimulant?

I’m with Ellis on this one.

The food counterculture of the 1960s embraced vegetarianism and an enthusiasm for ‘whole foods’ as a manifestation of a way of living ethically and sustainably. Last week I discussed Melissa Coleman’s memoir of her childhood on her parents’ homestead in rural Maine during the early seventies. Her father, Eliot Coleman, is dubbed the father of the American organic movement, and he fed his growing family mainly from the garden he soon established. They supplemented their diet with bought-in grains, seeds, honey, nut butters, and oils, but were strictly vegetarian. Their role models, Helen and Scott Nearing, were highly critical of immoral ‘flesh eaters’. Their book, Living the Good Life (1954), which became the homesteading Bible, argued that it was possible to feed a family on produce grown organically. Again, the choice of what to eat was a moral one. Eliot and Sue Coleman believed that their diet guaranteed their good health:

Papa often quoted Scott’s sayings, ‘Health insurance is served with every meal.’ As Papa saw it, good food was the secret to longevity and well-being that would save him from the early death of his father. The healthily aging Nearings were living proof that a simple diet was the key.

But, as Melissa Coleman notes, this was not a diet that suited everyone. The family suffered from a lack of Vitamin B, and at times they simply didn’t eat enough. It also didn’t prevent Eliot from developing hyperthyroidism.

His heart seemed to beat too quickly in his chest, and he had a cold he couldn’t kick, despite gallons of rose-hip and raspberry juice. … He tried to make sense of things in his mind. Health insurance, he believed, was on the table at every meal. In other words, the best way to deal with illness was to invest in prevention – eating a good diet that kept the body healthy. … He’d read up on vitamins and minerals, learning which foods were highest in A, B, C, D, and minerals like calcium, magnesium, and zinc. He drank rose-hip juice for vitamin C, ate garlic and Echinacea to build immunity, used peppermint and lemon balm tea to soothe the stomach, and used chamomile to calm the nerves, but perhaps all this wasn’t enough.

She concludes: ‘He never thought to question the vegetarian diet espoused by the Nearings.’

I don’t – obviously – want to suggest that vegetarianism is deadly. Rather, my point is that the choices we make about our diets are influenced as much – or even more – by a set of assumptions about morality, our responsibility for our health, and other beliefs as they are by information about the nutritional benefits of food. I am concerned by two aspects of this belief that we are somehow able to eat ourselves better. Firstly, we need to acknowledge that what we eat will not prevent us from falling ill. Sickness is caused by many things, and although important, diet is not an overriding factor.

Secondly, it mystifies what is actually very simple. Michael Pollan writes:

Eat food. Not too much. Mostly plants. That, more or less, is the short answer to the supposedly incredibly complicated and confusing question of what we humans should eat in order to be maximally healthy.

This won’t make much money for nutritionists or the food industry, hence their interest in promoting products which, they suggest, will do miraculous things for our health. They almost certainly won’t. Unless you suffer from an ailment which needs to be treated with a special diet, deciding what to eat is not a complicated, mysterious process. No amount of goji berries will make you a healthier, happier, or better person.

Further Reading

Texts quoted here:

Melissa Coleman, This Life is In Your Hands: One Dream, Sixty Acres, and a Family Undone (New York: Harper, 2011).

Roy Porter, Flesh in the Age of Reason: How the Enlightenment Transformed the Way We See Our Bodies and Souls (London: Penguin [2003] 2004).

Other sources:

Warren Belasco, Meals to Come: A History of the Future of Food (Berkeley: University of California Press, 2006).

Philip Conford, The Origins of the Organic Movement (Edinburgh: Floris Books, 2001).

Harvey Levenstein, Paradox of Plenty: A Social History of Eating in Modern America, revised ed. (Berkeley: University of California Press, 2003).

Colin Spencer, The Heretic’s Feast: A History of Vegetarianism (Lebanon: University Press of New England, 1996).

Tristram Stuart, The Bloodless Revolution: Radical Vegetarians and the Discovery of India (London: Harper Press, 2006).
