
The Root of the Evil

Over the past few weeks I’ve been watching, with barely disguised glee, the evisceration of a recent Newsweek article by Niall Ferguson – pet historian of the American right – in which he provides a deeply flawed analysis of Barack Obama’s past four years in power. As Matthew O’Brien notes, before systematically working through Ferguson’s argument (or, indeed, ‘argument’): ‘He simply gets things wrong, again and again and again.’

I’m no fan of Ferguson’s. This has less to do with our political differences – in relation to him, I’m so left-wing I should be living in a Himalayan hippy commune practising an obscure form of yoga while teaching Capital to peasants – than with the way he shapes his interpretations of the past to suit a particular neoliberal agenda.

Of course, no historian is capable of writing an absolutely objective history of anything – nor would we want one, because it would be dreadfully boring – but Ferguson presents, and defends, his arguments on the grounds that they are absolute truth.

He was called out on this last year by Pankaj Mishra, in a fantastic review of Civilisation: The West and the Rest for the London Review of Books. In Civilisation, Ferguson argues that

civilisation is best measured by the ability to make ‘sustained improvement in the material quality of life’, and in this the West has ‘patently enjoyed a real and sustained edge over the Rest for most of the previous 500 years’. Ferguson names six ‘killer apps’ – property rights, competition, science, medicine, the consumer society and the work ethic – as the operating software of Western civilisation that, beginning around 1500, enabled a few small polities at the western end of the Eurasian landmass ‘to dominate the rest of the world’.

Leaving aside the strange question of why an historian writing in the twenty-first century thinks that it’s possible to divorce the ‘West’ (whatever we may mean by that) from the rest of the world – and even why an historian feels like writing a triumphalist history of Europe and North America (I thought we stopped doing that in the sixties?) – this is a history which largely ignores, or plays down, the implications of modern capitalism and globalisation for those people outside of the West.

As in his writing on the creation of European empires, Ferguson has a problem with accounting for the widespread resistance of Africans, Asians, and others to European conquest – and the violence and exploitation which followed colonisation. Mishra writes:

he thinks that two vaguely worded sentences 15 pages apart in a long paean to the superiority of Western civilisation are sufficient reckoning with the extermination of ten million people in the Congo.

Recently I’ve been thinking a great deal about a comment which Roger Casement made in a report for the British government about atrocities committed in the Congo Free State during the late nineteenth century. Writing in 1900, he concluded:

The root of the evil lies in the fact that the government of the Congo is above all a commercial trust, that everything else is orientated towards commercial gain….

The Congo Free State came into being at the 1884-1885 Berlin West Africa Conference, where the assembled representatives of European states acknowledged the Belgian king’s right to establish a colony in central Africa. Leopold II’s International Association – a front organisation for his own commercial interests – was allowed to operate in the region.

There were strings attached to the deal – Leopold had to encourage both humanitarianism and free trade, for instance – but with the sharp increase in international demand for rubber in the 1890s, after JB Dunlop’s invention of inflatable rubber tyres, Leopold’s interest in the Congo, which had only ever extended to exploiting the country for its natural resources, narrowed even further. Leopold operated his own monopoly on the rubber trade, leasing some land to other companies on the proviso that they pay him a third of their profits.

The ‘evil’ to which Casement referred was the transformation of the Congolese population into a mass of forced labourers compelled to contribute quotas of rubber to the various businesses operating in the Free State. Those who failed to do so, those who refused to do so, or those who were suspected of not doing so, faced brutal reprisals from the State’s Force Publique, including being killed, often along with their families; having their hands cut off; and seeing their villages and property burned and destroyed.

It’s estimated that ten to thirteen million Congolese died as a result of murder, starvation, exhaustion, and disease between 1885 and 1908, when international condemnation of Leopold’s regime forced the Belgian government to take control of the Free State.

Although other colonial regimes in Africa could be brutal, violent, and unjust, none of them – with the possible exception of Germany in (what is now) Namibia – managed to commit atrocities on the scale that Leopold did in the Congo. As Casement pointed out, the ‘root of the evil’ was that the Congo was run entirely for profit, and that the businesses which operated in the region were not regulated in any way. This was capitalism at its most vicious.

But what does this all have to do with food? Well, I was reminded of Casement’s comment when reading about Glencore’s response to the current droughts – chiefly in the US, but also elsewhere – which are partially responsible for global increases in food prices:

The head of Glencore’s food trading business has said the worst drought to hit the US since the 1930s will be ‘good for Glencore’ because it will lead to opportunities to exploit soaring prices.

Chris Mahoney, the trader’s director of agricultural products, who owns about £500m of Glencore shares, said the devastating US drought had created an opportunity for the company to make much more money.

‘In terms of the outlook for the balance of the year, the environment is a good one. High prices, lots of volatility, a lot of dislocation, tightness, a lot of arbitrage opportunities [the purchase and sale of an asset in order to profit from price differences in different markets],’ he said on a conference call.

This weekend, it was revealed that Barclays has made more than £500 million from food speculation:

The World Development Movement report estimates that Barclays made as much as £529m from its ‘food speculative activities’ in 2010 and 2011. Barclays made up to £340m from food speculation in 2010, as the prices of agricultural commodities such as corn, wheat and soya were rising. The following year, the bank made a smaller sum – of up to £189m – as prices fell, WDM said.

The revenues that Barclays and other banks make from trading in everything from wheat and corn to coffee and cocoa, are expected to increase this year, with prices once again on the rise. Corn prices have risen by 45 per cent since the start of June, with wheat jumping by 30 per cent.

What bothers me so much about these massive profits is partly the profits themselves – the fact that these businesses are actually making money out of a food crisis – but mainly that these monstrously wealthy businessmen are so unwilling to admit that what they’re doing is, even on the most charitable interpretation, morally dubious.

Barclays’s claim that its involvement in food speculation is simply a form of futures trading is disingenuous: futures trading is an entirely legitimate way for farmers to insure themselves against future bad harvests. What Barclays and other banks, as well as pension funds, do is to trade in agricultural commodities in the same way as they do other commodities – like oil or timber.

In 1991, Goldman Sachs came up with an investment product – the Goldman Sachs Commodity Index – which allowed for raw materials, including food, to be traded as easily as other products. When the US Commodity Futures Trading Commission deregulated futures markets eight years later, it became possible, for the first time since the Great Depression, to trade in maize, wheat, rice, and other foodstuffs purely for profit.

The current food crisis has been caused by a range of factors – from the drought, to the excessive use of maize and other crops for biofuel – and exacerbated by climate change and pre-existing conflicts, corruption, inequalities, and problems with distribution. In Europe, unemployment and low wages will add to people’s inability to buy food – hence the rise in demand for food banks in Britain, for example.

Food speculation has not caused the crisis, but it does contribute to it by adding to food price volatility. I’m not – obviously – comparing Glencore or Barclays to Leopold II’s International Association, but the atrocities committed in the Congo Free State provide an excellent example of what happens when capitalism is allowed to run rampant. Let’s not make that mistake with our food supply.

Creative Commons License
Tangerine and Cinnamon by Sarah Duff is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Domestic Science

Lovely readers! This week’s post appears on the delightful blog The Flick. It’s about feminism and convenience food – do take a look.

Thirsty Knowledge

I’ve recently resuscitated my iTunes account, and I’ve been re-acquainting myself with the joys of the podcast. As a mad fan of Internet radio, I find it a glorious thing to have the most recent episodes of More or Less, the New York Times Book Review, the Guardian‘s Science Weekly, NPR’s Fresh Air, the Granta Podcast and, obviously, the Food Programme arrive periodically.

Relatively recently, I’ve become faintly obsessed with This American Life, and have relied on its extensive archive to keep me sane while writing lectures. I particularly enjoyed two linked episodes on Pennsylvania State University. The first, broadcast in December 2009, is an account of why Penn State has consistently been nominated as ‘America’s number one party school’, and the second, from the end of last year, revisits the university’s reputation for heavy drinking in light of the recent scandal.

As you’d expect of This American Life, both episodes are thoughtful, intelligent accounts of life in State College, PA, where townsfolk have to put up with the antics of drunken students – from stealing traffic signs, to urinating in private gardens – and where the university’s various strategies for dealing with the campus’s drinking culture are impeded by a strong lobby from alumni and other donors.

A lot of what these episodes covered felt familiar. I grew up in a South African university town and now hold a fellowship at that university. The institution is based in the heart of the country’s wine-producing region, so alcohol is cheap and plentiful. As someone with a comically low tolerance of alcohol, I’ve never been a big drinker. I sailed through university as, usually, the only sober person at parties.

A while ago, I wrote a post about academia and the food at conferences, and one of the themes in the responses I received was that I needed to focus more on the booze. And that’s absolutely true: while we may be – justifiably – concerned about undergraduate binge drinking, there’s a stereotype that academics drink – in the same way that we dress badly, drive banged-up cars, and are chronically forgetful. As Malcolm Bradbury writes in The History Man (1975):

It has often been remarked, by Benita Pream, who services several such departmental meetings, that those in History are distinguished by their high rate of absenteeism, those in English by the amount of wine consumed afterwards, and those in Sociology by their contentiousness.

I think that many would suggest that Benita’s point about the wine could apply to all departmental meetings, regardless of the discipline involved.

Just about every decent campus novel contains at least one scene of drunken, academic embarrassment. Or, indeed, in Kingsley Amis’s Lucky Jim (1954), of success. Jim Dixon spends most of the novel either pursuing the pretty-but-dim Christine in a fairly desultory way, or trying – in post-war, still-rationed Britain – to scrape together enough money to buy cigarettes and drink. In the famous final scene, he gets completely hammered and delivers a speech which should get him fired, but which, instead, gets him both the girl and his dream job.

My two favourite campus novels, The History Man and Michael Chabon’s Wonder Boys (1995) – yes the one that was turned into the surprisingly fun movie – both feature heroes whose academic careers are linked to the – occasionally excessive – consumption of alcohol and various banned substances. Both novels have parties at key turning-points in the narrative. In The History Man the suave socialist sociologist Howard Kirk and his long-suffering wife, Barbara, host parties at the beginning and end of the novel – places where students and lecturers at a red brick, radical university mingle, discussing contraception, Hegel, revolution, and, of course, religion:

No sooner are the first arrivals in the living-room, with drinks, talking breastfeeding, when more guests arrive. The room fills. There are students in quantities; bearded Jesus youths in combat-wear, wet-look plastic, loon-pants, flared jeans, Afghan yak; girls, in caftans and big boots, with plum-coloured mouths. There are young faculty, serious, solemn examiners of matrimony and its radical alternatives…. Howard goes about, a big two-litre bottle hanging on the loop from his finger, the impresario of the event, feeling the buoyant pleasure of having these young people round him…. He poured wine, seeing the bubbles move inside the glass of the bottle in the changing lights of his rooms.

Howard maintains – and gains – his position of power within his department and on his campus by wielding wine at important moments.

The appropriately named Grady Tripp in Wonder Boys uses grass and a range of other drugs – legal and illegal – to cope with the collapse of his marriage, his career, and his reputation as a writer. He holds a position at a small liberal arts university in Pittsburgh, but can’t finish his novel, is having an affair with the Chancellor, and has been (deservedly) deserted by his wife. Over the course of the university’s annual Wordfest weekend, his life falls apart. As in The History Man, parties take place at pivotal moments – one of them in Grady’s house. He returns to discover

writers in the kitchen, making conversation that whip-sawed wildly between comely falsehood and foul-smelling truths, flicking their cigarette ash into the mouths of beer cans. There were half a dozen more of them stretched out on the floor of the television room, arranged in a worshipful manner around a small grocery bag filled with ragweed marijuana, watching Ghidorah take apart Tokyo.

But most academic drinking is done more decorously: over dinner, and after conferences and workshops. Some of the Oxford and Cambridge colleges have legendarily well-stocked cellars. Just about every seminar I attended in London ended with a trip to the pub. There’s even a Radio 4 series called The Philosopher’s Arms, where Matthew Sweet and a collection of philosophers discuss ideas and issues in a real pub:

Welcome to the Philosopher’s Arms, the only boozer in Britain where, if you ask the landlady whether there’s a happy hour, she’ll remind you of the words of John Stuart Mill: ‘Ask yourself whether you are happy, and you’ll cease to be so.’

The appeal of the pub is that it allows the usually fairly byzantine rules which govern academic life to relax a little. Anxious postgrads get to talk to well-known, senior researchers, gossip is exchanged, and friendships and alliances are formed. One very grand historian who used to convene a weekly seminar I attended was transformed from an incisive and ruthless eviscerator of poorly constructed arguments to a jovial old cove as he nursed his half-pint of real ale.

It’s also true that pubs and drinking can be used to exclude those who don’t drink, for whatever reason, or those who don’t feel welcome in pubs or bars. As AS Byatt points out in an interview with the Paris Review, up until the mid-1960s, university departments could prevent their female staff from contributing to important decisions by conducting meetings in pubs, then an almost exclusively male preserve.

But I don’t think that it’s any coincidence that pubs, in particular, feature so strongly in a lot of the mythology surrounding significant moments in academia: in the discovery of the double helix structure of DNA, and in the meetings of the Inklings – the most famous members of which were CS Lewis and JRR Tolkien – at the Eagle and Child in Oxford, for instance. Pubs – and other, similarly festive occasions involving drinking – provide academics with a chance to talk and to think beyond the usual strictures of academia and, in doing so, to arrive at new and surprising ideas.


A Hungry World

One of the best parts of teaching a course on African history is being able to introduce students to Binyavanga Wainaina’s amazing essay ‘How to Write about Africa’. In my first lecture, I wanted to emphasise the disconnect between the (powerful) narratives which have been developed about the continent – by travellers, politicians, journalists – and its history, societies, politics, and economics. Wainaina’s achievement is that he draws attention to a range of usually unchallenged assumptions about Africa, and shows them to be ridiculous:

Never have a picture of a well-adjusted African on the cover of your book, or in it, unless that African has won the Nobel Prize. An AK-47, prominent ribs, naked breasts: use these. If you must include an African, make sure you get one in Masai or Zulu or Dogon dress.

In your text, treat Africa as if it were one country. It is hot and dusty with rolling grasslands and huge herds of animals and tall, thin people who are starving. Or it is hot and steamy with very short people who eat primates. Don’t get bogged down with precise descriptions. Africa is big: fifty-four countries, 900 million people who are too busy starving and dying and warring and emigrating to read your book. …

Taboo subjects: ordinary domestic scenes, love between Africans (unless a death is involved), references to African writers or intellectuals, mention of school-going children who are not suffering from yaws or Ebola fever or female genital mutilation.

Recently, there has been a lot of discussion, particularly in the United States, about how the western media covers Africa. Laura Seay writes in an excellent article for Foreign Policy:

Western reporting on Africa is often fraught with factual errors, incomplete analysis, and stereotyping that would not pass editorial muster in coverage of China, Pakistan, France, or Mexico. A journalist who printed blatantly offensive stereotypes about German politicians or violated ethical norms regarding protection of child-abuse victims in Ohio would at the least be sanctioned and might even lose his or her job. When it comes to Africa, however, these problems are tolerated and, in some cases, celebrated. A quick search of the Google News archives for ‘Congo’ and ‘heart of darkness’ yields nearly 4,000 hits, the vast majority of which are not works of literary criticism, but are instead used to exoticise the Democratic Republic of the Congo while conjuring up stereotypes of race and savagery. Could we imagine a serious publication ever using similar terminology to describe the south side of Chicago, Baltimore, or another predominately African-American city?

Similarly, Jina Moore makes the point in the Boston Review that believing that journalists should only report incidents of violence or suffering, instead of other aspects of life on the continent, is

a false choice. We can write about suffering and we can write about the many other things there are to say about Congo. With a little faith in our readers, we can even write about both things – extraordinary violence and ordinary life – in the same story.

These narratives – these stories, these reports and articles about Africa – have a measurable impact on the ways in which the rest of the world interacts with the continent. Tracing a shift in American attitudes towards Africa from around 2000, when concern about the AIDS epidemic was at its height, Kathryn Mathers writes:

Suddenly there were no conversations about new democracies in Africa, or investment opportunities; the potential consumers were represented as too sick to labour, let alone to shop. This became the burden of caring Americans whose consumption practices can give a sick child in Africa ARVs or provide mosquito nets against the ravages of malaria.

To coincide with the final day of the 2012 Olympics, David Cameron and the Brazilian vice-president Michel Temer will host a summit on hunger and malnutrition in the developing world. It will be attended by officials from the US Department of Agriculture and the UK Department for International Development, as well as a clutch of celebrities. As an editorial in the Guardian puts it, ‘when tackling malnutrition involves photo-opportunities with icons such as Mo Farah and David Beckham, it’s hard not to be sceptical’ about the impact that this summit will have.

Although the summit was planned months ago, its timing is particularly apt: the world is facing another food crisis. Since the end of July, it’s become clear that the bumper harvest predicted, globally, for 2012 was not to be – in fact, maize and wheat yields are down. This year’s soybean crop is the third worst since 1964. Reading about this crisis, you’d be forgiven for thinking that it is exclusively the problem of poor nations: we know that Zimbabwe, the Sahel region, the Horn of Africa, and Yemen all face severe food shortages, and that the price of food is increasing in Egypt, Mexico, South Africa, and other middle-income nations.

However, the immediate cause of this food crisis lies far away from the regions worst affected by malnutrition and high food prices: in the United States, which is currently experiencing its worst drought in almost a century. More than half the country’s counties – 1,584 in 32 states, including Iowa, Indiana, Oklahoma, and Wyoming – have been declared disaster areas.

It’s difficult to overestimate just how devastating this drought has been (and is):

Wherever you look, the heat, the drought, and the fires stagger the imagination.  Now, it’s Oklahoma at the heart of the American firestorm, with ‘18 straight days of 100-plus degree temperatures and persistent drought’ and so many fires in neighbouring states that extra help is unavailable. It’s the summer of heat across the U.S., where the first six months of the year have been the hottest on record…. More than 52% of the country is now experiencing some level of drought, and drought conditions are actually intensifying in the Midwest; 66% of the Illinois corn crop is in ‘poor’ or ‘very poor’ shape, with similarly devastating percentages across the rest of the Midwest.  The average is 48% across the corn belt, and for soybeans 37% – and it looks as if next year’s corn crop may be endangered as well. …according to the Department of Agriculture, ‘three-quarters of the nation’s cattle acreage is now inside a drought-stricken area, as is about two-thirds of the country’s hay acreage.’

There are suggestions that the Midwest is in danger of experiencing a second Dust Bowl. But the drought is not limited to the US: unusually dry summers have reduced harvests in Russia, Ukraine, and Kazakhstan. And the effects of these poor yields will be felt around the world. Although, as the Financial Times reports, the drought will push up prices of beef, pork, and chicken in the United States and Europe, the countries most at risk of food shortages – and, indeed, of social unrest – are those which rely on food imports to feed their populations.

If rates of malnutrition are to be reduced and food shortages addressed, then politicians will have to consider them in a global context. They will have to rethink America’s energy policies, which have allowed almost forty per cent of the country’s corn crop to be devoted to ethanol production. They will have to address the impact that financial speculation has on the price of food commodities. A report published by the New England Complex Systems Institute suggests that food price increases are likely to be exacerbated by the unregulated trade in staples like maize and wheat.

Even these measures will not be enough to ensure adequate access to food for all people: we need to find strategies to slow down and mitigate the effects of climate change; social and economic inequality in the developing world must be addressed; land grabs need to be halted; and agricultural policies in sub-Saharan Africa and elsewhere need to favour small farmers.

In the same month in which the tofu industry in Indonesia has threatened to down tools over rising soybean prices, the cost of maize meal is increasing in Mexico, and there were protests in Iran over the price of chicken, the grain trader Cargill announced revenues of $134 billion. This state of affairs is not sustainable.

While it’s certainly the case that famine and malnutrition in parts of sub-Saharan Africa are the products of dysfunctional and corrupt governments, it’s also true that as part of a globalised food system, food insecurity in Africa – and the rest of the developing world – is connected to a set of problems which can only be solved on an international scale.

This is, then, a global crisis. But reporting has tended to disassociate its cause and effects: hunger in Africa is reported separately from the drought in the northern hemisphere and the spike in food prices. Cameron’s summit on malnutrition focuses exclusively on the developing world. I think that this is partly as a result of the narratives which inform reporting on these regions: America is an agricultural superpower, while Africa is a site of terminal decline and disaster. It’s worth noting that America’s poor harvest tends to be reported on in the environmental or financial sections of newspapers and websites, while hunger and malnutrition in sub-Saharan Africa and south Asia are relegated to the sections dealing with aid or development. Linking malnutrition in South Sudan to the maize harvest in Indiana would upset these ways of thinking about Africa and the United States.


Free-From Food

Last week I visited the new health food shop in the shopping centre near my flat. I was in search of coconut flakes to add to granola – why yes, I do make my own granola (what else did you expect?) – but, instead, bought nearly my own body weight in almond meal, and came away amazed by the incredible range of foodstuffs and supplements on sale. I was struck by how little the diet advocated by the makers of these products tallied with my own idea of healthy eating. While I try to eat a little of everything, and always in moderation, both the health shop and its products seem to view most forms of food with profound suspicion.

In a recent edition of Radio 4’s Food Programme, Sheila Dillon charts the rise of the ‘free from’ food industry. As she points out, for all that these lactose-, gluten-, sugar-, and wheat-free snacks, bars, and drinks advertise themselves as the ‘healthy’ alternative, they are as heavily processed as ready meals in supermarkets. I think that one way of accounting for this odd paradox – that people who wouldn’t normally go anywhere near a box of supermarket lasagne are willing to buy heavily processed kale chips or carob bars – is to consider how ideas around what we define as ‘healthy’ food have changed.

When I was preparing lectures on food and the 1960s counterculture, my father recommended a story in Tom Wolfe’s New Journalism (1975). Written unbelievably beautifully by Robert Christgau, now best known as a music journalist, the essay charts the slow decline of a young woman in the thrall of a fad diet. Titled ‘Beth Ann and Macrobioticism’, the piece begins in Greenwich Village in 1965. Twenty-three-year-old married couple Beth Ann and Charlie were living as artists, and off money from Charlie’s father, in hippy New York. Discontented with the range of mind-expanding experiences offered to them by the collection of drugs and therapies they’d been taking, Charlie learned about the Zen macrobiotic diet from a friend.

Published in the United States in the mid-1960s, Zen Macrobiotics: The Art of Rejuvenation and Longevity by Georges Ohsawa, a Japanese philosopher and sometime medical doctor,

contends that all of the physical and spiritual diseases of modern man result from his consuming too much yin (basically, potassium…) or too much yang (sodium) – usually too much yin. … Most fruits (too yin) and all red meat (too yang) are shunned, as are chemicals (additives and drugs, almost all yin, as well as ‘unnatural’) and Western medicine. According to Ohsawa, the diet is not merely a sure means to perfect physical health. …it is also a path to spiritual health and enlightenment.

As Christgau points out, Ohsawa’s macrobiotic diet is ‘dangerously unsound’. It consists of ten progressively restrictive stages, with the final including only water and brown rice. The American Medical Association denounced the diet on the grounds that those who followed Ohsawa’s directions religiously were at risk of scurvy, anaemia, malnutrition, and kidney failure.

Beth Ann and Charlie devoted themselves to macrobiotics with enthusiasm, quickly deciding on Diet no. 7, which consisted mainly of grain and tea. Unsurprisingly, they both lost weight quickly, and experienced a kind of hunger-induced euphoria:

They slept less than six hours a night. They…felt high on the diet, with spontaneous flashes that seemed purer and more enlightening than anything they had felt on drugs. … One joyous day, they threw out every useless palliative in the medicine cabinet and then transformed their empty refrigerator…into a piece of pop culture, with sea shells in the egg compartments and art supplies and various pieces of whimsy lining the shelves.

Shortly after this, both began to sicken. Beth Ann, in particular, displayed all the symptoms of scurvy. Despite a fellow macrobiotic enthusiast’s recommendation that she add raw vegetables to her diet, Beth Ann began to fast, for stretches of two weeks at a time. She wrote to Ohsawa, who told her to remain on the diet. Soon, she was bedridden, and moved in with her parents-in-law, who urged her to see a doctor. On the morning of her death – feverish and very weak – another letter arrived from Ohsawa, informing her that she had misunderstood the diet completely. But it was too late: she died a few hours later.

Beth Ann was not the only person taken in by Zen macrobiotics during the 1960s and 1970s. There were several cases of people who either died from, or were hospitalised for, malnutrition and salt poisoning as a result of a too-rigid adherence to the diet.

I don’t suggest for a moment that Cape Town’s health food hippies are in danger of starving themselves to death in an attempt to follow the teachings of a twentieth-century Japanese loon, but there are remarkable continuities between the 1960s enthusiasm for Zen macrobiotics and contemporary anxieties about food and nutrition.

At the extreme end of this scale of suspicion of food are proponents of restricted-calorie diets, who argue – with very little evidence – that those who eat less will live significantly longer. Earlier this year, a Swiss woman starved herself to death after attempting to live only on sunshine. (Perhaps she thought she would photosynthesise?)

But on the other, more reasonable side, are the legions of women’s magazines which advise their readers what not to eat, rather than what they should be eating. These, and other publications, have variously branded sugar, saturated fat, and carbohydrates as the enemies of healthy diets, and, like Zen macrobiotics, advocate increasingly restricted diets. This advice is subject to change, though. For instance, a group of experts at the American Dietetic Association’s most recent Food and Nutrition Conference noted that there is no evidence to suggest that low-fat diets have any health benefits.

Where does this idea – that food is the source of ill-health, rather than the fuel which helps to keep sickness at bay – originate? There is a millennia-old tradition in Western and other cultures of associating deprivation with moral or spiritual superiority and purity.

But, more specifically, I think that this suspicion of food can be located in the eighteenth century. Indeed, contemporary mainstream macrobiotic diets are based on the writing of the Enlightenment German physician Christoph Wilhelm Hufeland (1762-1836), who is credited with coining the term ‘macrobiotics’. In The Art of Prolonging Human Life (1797), Hufeland argued that each person possesses a ‘life force’ which needs to be nurtured and protected through rest, exercise, and a carefully-calibrated diet.

Hufeland’s writing was part of a wider, Enlightenment questioning of what constituted a morally and physically healthy person. In his influential text The English Malady (1733), the Scottish physician George Cheyne (1671-1743) argued that corpulence and over-eating undermined both the health of the body as well as the mind. Roy Porter explains:

Cheyne’s books were extremely popular and many later medical thinkers echoed his calls to temperance, with added intensity. Moderation would overcome that classic Georgian disorder, the gout, proclaimed Dr William Cadogan. If the turn towards regulating the flesh was decidedly health-oriented, however, it also became part and parcel of a wider movement, expressive of preferred cultural ideals and personal identities.

The emergence of an ethical vegetarianism – vegetarianism by choice, rather than necessity – during this period was one of the best examples of this attempt to regulate excessive behaviour through moderate eating:

Joseph Ritson, for example, held that because dead meat itself was corrupt, it would stir violent passions, whereas greens, milk, seeds and water would temper the appetite and produce a better disciplined individual.

I think that there’s a continuum between this association of a restricted diet with being a better person, and contemporary notions of healthy eating. The Zen macrobiotic craze of the 1960s was an extreme example of this desire to eat only that which is ‘pure’ in order to be good – as is the relatively recent phenomenon of orthorexia:

Orthorexics commonly have rigid rules around eating. Refusing to touch sugar, salt, caffeine, alcohol, wheat, gluten, yeast, soya, corn and dairy foods is just the start of their diet restrictions. Any foods that have come into contact with pesticides, herbicides or contain artificial additives are also out.

To be clear, orthorexia does not refer to those people who are genuinely allergic to some kinds of food. Rather, it describes an obsession with eating healthily. Although this obsessiveness can be socially limiting, it’s also admired to some extent: sticking rigidly to a needlessly restrictive, ‘free-from’ diet is frequently seen as a sign of self-control, and of a willingness to take full responsibility for maintaining one’s own health.

The emergence of orthorexia, and even the growing popularity of free-from foods, is indicative of a wider belief that we should care more about what we don’t eat than about what we do – and that there’s a connection between eating ‘healthily’ (whatever we may mean by that) and being a good and virtuous person. In a time when it is ever easier to eat cheap junk food, and when rates of obesity are soaring all over the world, surely it makes better sense to emphasise the pleasures of good food – and not to suggest that the unhealthy or overweight are morally suspect?

Further Reading

Robert Christgau, ‘Beth Ann and Macrobioticism,’ in The New Journalism, ed. Tom Wolfe and EW Johnson (London: Picador, 1975), pp. 363-372.

Karlyn Crowley, ‘Gender on a Plate: The Calibration of Identity in American Macrobiotics,’ Gastronomica: The Journal of Food and Culture, vol. 2, no. 3 (Summer 2002), pp. 37-48.

Roy Porter, Flesh in the Age of Reason: How the Enlightenment Transformed the Way We See Our Bodies and Souls (London: Penguin, 2003).

Victoria Rezash, ‘Can a Macrobiotic Diet Cure Cancer?’ Clinical Journal of Oncology Nursing, vol. 12, no. 5 (Oct. 2008), pp. 807-808.

Creative Commons License
Tangerine and Cinnamon by Sarah Duff is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

A Sporting Chance

My expectations of the London Olympics’ opening ceremony were so low that, I suppose, I would have been impressed if it had featured Boris as Boudicca, driving a chariot over the prostrate figures of the Locog committee. (Actually, now that I think about it, that would have been fairly entertaining.)

Appalled by the organising committee’s slavishly sycophantic attitude towards its sponsors and their ‘rights’ – which led it to ban home-knitted cushions from being distributed to the Olympic athletes, and to require shops and restaurants to remove Olympic-themed decorations and products – as well as by the rule that online articles and blog posts may not link to the official 2012 site if they’re critical of the games, the decision to make the official entrance of the Olympic site a shopping mall, and the creation of special lanes for VIP traffic, I wasn’t terribly impressed by the London Olympics.

But watching the opening ceremony last night, I was reduced to a pile of NHS-adoring, Tim Berners-Lee worshipping, British children’s literature-loving goo. Although a reference to the British Empire – other than the arrival of the Windrush – would have been nice, I think that Danny Boyle’s narrative of British history which emphasised the nation’s industrial heritage, its protest and trade union movements, and its pop culture, was fantastic.

As some commentators have noted, this was the opposite of the kind of kings-and-queens-and-great-men history curriculum which Michael Gove wishes schools would teach. Oh and the parachuting Queen and Daniel Craig were pretty damn amazing too.

There was even a fleeting, joking reference to the dire quality of British food during the third part of the ceremony. There was something both apt and deeply ironic about this. On the one hand, there has been extensive coverage of Locog’s ludicrous decision to allow manufacturers of junk food – Coke, Cadbury’s, McDonald’s – not only to be official sponsors of a sporting event, but to provide much of the catering. (McDonald’s even tried to ban other suppliers from selling chips on the Olympic site.)

But, on the other, Britain’s food scene has never been in better shape. It has excellent restaurants – and not only at the top end of the scale – and thriving and wonderful farmers’ markets and street food.

It’s this which makes the decision not to open up the catering of the event to London’s food trucks, restaurants, and caterers so tragic. It is true that meals for the athletes and officials staying in the Village have been locally sourced and made from ethically-produced ingredients, and this is really great. But why the rules and regulations which actually make it more difficult for fans and spectators to buy – or bring their own – healthy food?

Of course, the athletes themselves will all be eating carefully calibrated, optimally nutritious food. There’s been a lot of coverage of the difficulties of catering for so many people who eat such a variety of different things. The idea that athletes’ performance is enhanced by what they consume – supplements, food, and drugs (unfortunately) – has become commonplace.

Even my local gym’s café – an outpost of the Kauai health food chain – serves meals which are, apparently, suited for physically active people. I’ve never tried them, partly because the thought of me as an athlete is so utterly nuts. (I’m an enthusiastic, yet deeply appalling, swimmer.)

The notion that food and performance are linked in some way has a long pedigree. In Ancient Greece, where diets were largely vegetarian, supplemented occasionally with (usually goat) meat, evidence suggests that athletes at the early Olympics consumed more meat than usual to improve their performance. Ann C. Grandjean explains:

Perhaps the best accounts of athletic diet to survive from antiquity, however, relate to Milo of Croton, a wrestler whose feats of strength became legendary. He was an outstanding figure in the history of Greek athletics and won the wrestling event at five successive Olympics from 532 to 516 B.C. According to Athenaeus and Pausanius, his diet was 9 kg (20 pounds) of meat, 9 kg (20 pounds) of bread and 8.5 L (18 pints) of wine a day. The validity of these reports from antiquity, however, must be suspect. Although Milo was clearly a powerful, large man who possessed a prodigious appetite, basic estimations reveal that if he trained on such a volume of food, Milo would have consumed approximately 57,000 kcal (238,500 kJ) per day.

Eating more protein – although perhaps not quite as much as reported by Milo of Croton’s fans – helps to build muscle, and would have given athletes an advantage over other, leaner competitors.
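Grandjean’s scepticism is easy to verify with a back-of-the-envelope calculation. The per-kilogram energy densities below are my own rough assumptions (not figures from her paper), but any plausible values land in the same region – tens of thousands of kilocalories a day:

```python
# Rough sanity check of the ~57,000 kcal/day estimate for Milo of Croton.
# Assumed energy densities (approximate): cooked meat ~2,500 kcal/kg,
# bread ~2,650 kcal/kg, wine ~850 kcal/L.
MEAT_KG, BREAD_KG, WINE_L = 9, 9, 8.5

kcal = MEAT_KG * 2500 + BREAD_KG * 2650 + WINE_L * 850
kj = 57000 * 4.184  # converting the quoted 57,000 kcal at 4.184 kJ/kcal

print(round(kcal))  # roughly 53,600 kcal - the same order as the quoted figure
print(round(kj))    # about 238,500 kJ, matching the quoted conversion
```

Even with conservative assumptions, the total is more than twenty times a modern athlete’s intake, which is why the ancient reports must be suspect.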

Another ancient dietary supplement seems to have been alcohol. Trainers provided their athletes with alcoholic drinks before and after training – in much the same way that contemporary athletes may consume sports drinks. But some more recent sportsmen seem to have gone a little overboard, as Grandjean notes:

as recently as the 1908 Olympics, marathon runners drank cognac to enhance performance, and at least one German 100-km walker reportedly consumed 22 glasses of beer and half a bottle of wine during competition.

Drunken, German walker: I salute you and your ability to walk in a straight line after that much beer.

The London Olympic Village is, though, dry. Even its pub only serves soft drinks. With the coming of the modern games – which coincided with the development of sport and exercise science in the early twentieth century – diets became the subject of scientific enquiry. The professionalisation of sport – with athletes more reliant on doing well in order to make a living – only served to increase the significance of this research.

One of the first studies on the link between nutrition and the performance of Olympic athletes was conducted at the 1952 games in Helsinki. The scientist E. Jokl (about whom I know nothing – any help gratefully received) demonstrated that those athletes who consumed fewer carbohydrates tended to do worse than those who ate more. Grandjean comments:

His findings may have been the genesis of the oft-repeated statement that the only nutritional difference between athletes and nonathletes is the need for increased energy intake. Current knowledge of sports nutrition, however, would indicate a more complex relationship.

As research into athletes’ diets has progressed, so fashions for particular supplements and foods have emerged over the course of the twentieth century. Increasing consumption of protein and carbohydrates has become a common way of improving performance. Whereas during the 1950s and 1960s athletes simply ate more meat, milk, bread, and pasta, since the 1970s a growing selection of supplements has allowed sportsmen and -women to add more carefully calibrated and targeted forms of protein and carbohydrates to their diets.

Similarly, vitamin supplements have been part of athletes’ diets since the 1930s. Evidence from athletes competing at the 1972 games in Munich demonstrated widespread use of multivitamins, although now, participants tend to choose more carefully those vitamins which produce specific outcomes.

But this history of shifting ideas around athletes’ diets cannot be understood separately from the altogether more shadowy history of doping – of using illicit means of improving one’s performance. Even the ancient Greeks and Romans used stimulants – ranging from dried figs to animal testes – to suppress fatigue and boost performance.

More recently, some of the first examples of doping during the nineteenth century come from cycling (nice to see that some things don’t change), and, more specifically, from long-distance, week-long bicycle races which depended on cyclists’ reserves of strength and stamina. Richard IG Holt, Ioulietta Erotokritou-Mulligan, and Peter H. Sönksen explain:

A variety of performance enhancing mixtures were tried; there are reports of the French using mixtures with caffeine bases, the Belgians using sugar cubes dripped in ether, and others using alcohol-containing cordials, while the sprinters specialised in the use of nitroglycerine. As the race progressed, the athletes increased the amounts of strychnine and cocaine added to their caffeine mixtures. It is perhaps unsurprising that the first doping fatality occurred during such an event, when Arthur Linton, an English cyclist who is alleged to have overdosed on ‘tri-methyl’ (thought to be a compound containing either caffeine or ether), died in 1886 during a 600 km race between Bordeaux and Paris.

Before the introduction of doping regulations, the use of performance enhancing drugs was rife at the modern Olympics:

In 1904, Thomas Hicks, winner of the marathon, took strychnine and brandy several times during the race. At the Los Angeles Olympic Games in 1932, Japanese swimmers were said to be ‘pumped full of oxygen’. Anabolic steroids were referred to by the then editor of Track and Field News in 1969 as the ‘breakfast of champions’.

But regulation – the first anti-drugs tests were undertaken at the 1968 Mexico games – didn’t stop athletes from doping – the practice simply went underground. The USSR and East Germany allowed their representatives to take performance enhancing drugs, and an investigation undertaken after Ben Johnson was disqualified for doping at the Seoul games revealed that at least half of the athletes who competed at the 1988 Olympics had taken anabolic steroids. In 1996, some athletes called the summer Olympics in Atlanta the ‘Growth Hormone Games’ and the 2000 Olympics were dubbed the ‘Dirty Games’ after the disqualification of Marion Jones for doping.

At the heart of the issue of doping and the use of supplements, is distinguishing between legitimate and illegitimate means of enhancing performance. The idea that taking drugs to make athletes run, swim, or cycle faster, or jump further and higher, is unfair, is a relatively recent one. It’s worth noting that the World Anti-Doping Agency, which is responsible for establishing and maintaining standards for anti-doping work, was formed only in 1999.

What makes anabolic steroids different from consuming high doses of protein, amino acids, or vitamins? Why, indeed, was Caster Semenya deemed to have an unfair advantage at the 2009 IAAF World Championships, while the blade-running Oscar Pistorius was not?

I’m really pleased that both Semenya and Pistorius are participating in the 2012 games – I’m immensely proud that Semenya carried South Africa’s flag into the Olympic stadium – but their experiences, as well as the closely intertwined histories of food supplements and doping in sport, demonstrate that the idea of an ‘unfair advantage’ is a fairly nebulous one.

Further Reading

Elizabeth A. Applegate and Louis E. Grivetti, ‘Search for the Competitive Edge: A History of Dietary Fads and Supplements,’ The Journal of Nutrition, vol. 127, no. 5 (1997), pp. 869S-873S.

Ann C. Grandjean, ‘Diets of Elite Athletes: Has the Discipline of Sports Nutrition Made an Impact?’ The Journal of Nutrition, vol. 127, no. 5 (1997), pp. 874S-877S.

Richard IG Holt, Ioulietta Erotokritou-Mulligan, and Peter H. Sönksen, ‘The History of Doping and Growth Hormone Abuse in Sport,’ Growth Hormone & IGF Research, vol. 19 (2009), pp. 320-326.


It’s only cake

The television series I most want to watch at the moment is Girls. Written by and starring Lena Dunham, it follows the exploits of four young women in New York. Unlike Sex and the City, to which it is usually compared, its success rests partly on the truthfulness of its depiction of the characters’ experience of living in New York: that it is expensive, and not particularly glamorous. It portrays sex and relationships wincingly realistically.

I’m interested in Girls not only because it looks fantastically entertaining: it seems to me to be part of a new kind of feminism which has emerged over the past few years.

In a pair of articles for N+1, Molly Fischer has taken a look at the rise of the ‘ladyblog’ since the founding of sites like Jezebel and The Hairpin in 2007 and 2008. For many young women, these blogs – and others – have taken the place of women’s magazines. Considerably more intelligent and far better written, ladyblogs take aim at the ways in which women’s magazines create and play on women’s insecurities, as well as the values underpinning them.

But Fischer points out that ladyblogs also peddle femininities which are not always tolerant of dissent, and are often unwilling to engage in debate. She writes about the response to an earlier, more critical post:

When intimacy is your model of success, it becomes easy to assume that everyone is either a friend or a traitor. I had tried to approach the ladyblogs as an observer rather than a participant, but my writing about them in an apparently impersonal public voice, as a woman—which became a woman holding myself apart from their community of women—registered as unacceptable aggression. So, was I a spinster feminist, or just out to impress boys? This was the exact corner of the internet that seemed like it ought to know better.

I was particularly taken by her observation that the blogs’ and their readers’ tendency to refer to themselves as ‘ladies’, rather than ‘women’, signals a kind of discomfort with adult femininity. I think that this is worth exploring. In a review of Sheila Heti’s How Should a Person Be? Katie Roiphe criticises the book – a novel about a group of variously arty people in Toronto – on the grounds that Heti’s behaviour and thinking are not really befitting a thirty-five year-old woman:

One of the salient facts of Heti’s milieu…is the very young quality of the book’s philosophical speculations, the palpable feel of college students sitting on a roof marvelling at the universe and their own bon mots, though Heti herself is 35. …

The perpetual, piquant childishness, the fetishizing and prolonging of an early 20s conversation about the Meaning of Life is central to both the book’s appeal and its annoyingness. Heti’s character is working in a hair salon and thinking a lot about art and how to be ‘the ideal human’ while also hanging out with people so fascinating…that she is recording their every word for posterity.

How Should a Person Be?, Girls, even Whit Stillman’s new film Damsels in Distress, as well as the increasing number of overtly feminist blogs and publications for women, from Frankie and The Gentlewoman to The Vagenda and The Flick, are a manifestation of the new feminism of the 2010s. EJ Graff explains this particularly well:

Young women are mad as hell, and they’re not going to take it anymore.

These young women are irreverent and unashamed of talking openly about sex. They’re less focused on eliminating consumerism or beauty culture than was the Second Wave. They’re quicker to reach out across the social fault lines of race, sex, sexual orientation, disability, and other -isms. They love appropriating pop culture and wielding humour with sly commentaries like the blog Feminist Ryan Gosling or the video Shit White Girls Say to Black Girls. Their multimedia creations make Barbara Kruger’s 1980s sloganeering art (‘Your body is a battleground’) look hopelessly earnest, or earnestly hopeless.

I agree with Fischer’s argument that the use of ‘lady’ and ‘girl’ can signal a strange unwillingness to grow up – explicable, possibly, because it occurs within a wider cultural context which puts enormous value on youth and youthfulness – but many of these blogs and other publications write for, and about, ‘girls’ and ‘ladies’ for other reasons. This is a deliberate reclaiming of terms which have been used to diminish and to put down women.

As Graff points out, this most recent feminist wave has managed to negotiate itself out of the depoliticised impasse of third-wave feminism, to a position where it expresses a genuine anger at the systematic marginalisation of women. Crucially, it is a feminism which is also willing to act and to protest – and it’s difficult to overestimate the significance of the internet in allowing these women to mobilise. Fischer refers to the emergence of an ‘online womanhood’, and I think that this is an important observation.

But just as third-wave feminism was dismissed as ‘lipstick feminism’, this new wave has been dubbed ‘cupcake feminism’. On the one hand, celebrations of Women’s Day and other woman-centred events have been accused of taking the edge off campaigns for issues ranging from equal pay to increasing access to contraception and birth control, by transforming them into fun, cupcake-serving gatherings for ladies.

On the other, though, as ladyblogs have reclaimed the words lady and girl, so, arguably, have they reclaimed the cupcake. This isn’t to suggest, of course, that the popularity of cupcakes isn’t connected, at least to some extent, to a weird infantilisation of women’s food and eating habits. But one of the most interesting features of this new feminist wave is its attitude towards food and eating.

Jane Hu has written about the place of food in Girls:

if we’re looking for what’s truly universal in Dunham’s depiction of young, white, upper-middle-class life in New York City, then maybe the cupcake isn’t such a bad place to start. Eating is, after all, about as universal as it gets. … hunger, in all its manifestations, drives Girls.

The tentative title of Hannah’s memoir-in-progress is, after all, Midnight Snack. A title is supposed to be suggestive and representative of a body of work, but really all Hannah’s (unfinished) Midnight Snack indicates is that she still has not learned how or when to eat like an adult.

One of the clips from Girls makes this link between food, eating, ladies, and girls explicit:

This can be read in several different ways. I think it’s worth noting how long the camera lingers on their ice cream-eating. How many series about women depict them eating – and enjoying it, without feeling guilty?

It’s striking how many ladyblogs feature food and recipes. The Flick has a section on food and drink, and Frankie includes at least one recipe per issue, and has several on its blog. Neither views food – as so many women’s magazines do – as something which needs to be limited and controlled. It is to be made and eaten with pleasure.

In a sense, this is a depoliticisation of food: these publications write about food because their readers are interested in it, and may enjoy cooking. It does not diminish them as feminists. They can have their cupcakes and eat them.

At the end of Margaret Atwood’s fantastically brilliant first novel The Edible Woman (1969), her protagonist Marian McAlpin bakes a cake. Over the course of the book, Marian – who has a degree, but works for a market-research company in Toronto, and who has a vague sense of dissatisfaction with the direction in which her life is going – becomes engaged to the eligible Peter. As she realises, slowly, that this engagement and marriage will subsume her identity in his – that she will be consumed by it (and by him) – she begins to lose her appetite: first for meat, and then, slowly, for fish, vegetables, bread, and noodles. By the end of the novel, she can’t eat anything. After a crisis, she breaks off her engagement.

She invites him to tea, to explain her decision, and serves him her cake, which she has made in the shape of a woman:

She went into the kitchen and returned, bearing the platter in front of her, carefully and with reverence, as though she was carrying something sacred in a procession, an icon or the crown on a cushion in a play. She knelt, setting the platter on the coffee-table in front of Peter.

‘You’ve been trying to destroy me, haven’t you,’ she said. ‘You’ve been trying to assimilate me. But I’ve made you a substitute, something you’ll like much better. This is what you really wanted all along, isn’t it? I’ll get you a fork,’ she added somewhat prosaically.

Peter stared from the cake to her face and back again. She wasn’t smiling.

His eyes widened in alarm. Apparently he didn’t find her silly.

When he had gone – and he went quite rapidly, they didn’t have much of a conversation after all, he seemed embarrassed and eager to leave and even refused a cup of tea – she stood looking down at the figure. So Peter hadn’t devoured it after all. As a symbol it had definitely failed. It looked up at her with its silvery eyes, enigmatic, mocking, succulent.

Suddenly she was hungry. Extremely hungry. The cake after all was only a cake. She picked up the platter, carried it to the kitchen table and located a fork. ‘I’ll start with the feet,’ she decided.

Later, her flatmate, Ainsley, reappears:

‘Marian, what have you got there?’ She walked over to see. ‘It’s a woman – a woman made of cake!’ She gave Marian a strange look.

Marian chewed and swallowed. ‘Have some,’ she said, ‘it’s really good. I made it this afternoon.’

Ainsley’s mouth opened and closed, fishlike, as though she was trying to gulp down the full implication of what she saw. ‘Marian!’ she exclaimed at last, with horror. ‘You’re rejecting your femininity!’

Marian looked back at her platter. The woman lay there, still smiling glassily, her legs gone. ‘Nonsense,’ she said. ‘It’s only a cake.’ She plunged her fork into the carcass, neatly severing the body from the head.

Yes. It’s only cake.


Children’s Food

I’m writing this post while listening to this week’s podcast of BBC Radio 4’s Food Programme. The episode is about nine-year-old food writer Martha Payne, whose blog about the dinners served at her school became the cause of a strange and troubling controversy a month ago.

Martha uses her blog, NeverSeconds, to review the food she eats at school. As Jay Rayner points out, although she may criticise – rightly – much of what the school provides for lunch, NeverSeconds is not intended as a kind of school-dinners hatchet job. She rates her meals according to a Food-o-Meter, taking into account how healthy, but also how delicious, they are.

As her blog has grown in popularity, children from all over the world have contributed photographs and reviews, and it’s partly this which makes NeverSeconds so wonderful: it’s a space in which children can discuss and debate food.

NeverSeconds came to wider – global – notice when the Argyll and Bute Council tried to shut it down in June, after the Daily Record published an article featuring Martha cooking with the chef Nick Nairn, headlined ‘Time to fire the dinner ladies.’ The blog’s honest descriptions and pictures of some of the food served to schoolchildren can’t have pleased councillors either.

As Private Eye (no. 1317) points out, the council’s bizarre – and futile – attempts to silence a blog probably had as much to do with internal politicking and minor corruption as anything else, but the furore which erupted after the ban also said a great deal about attitudes towards food and children.

What is really scandalous about the blog is that it reveals how bad – how unhealthy, how heavily processed – school meals can be. When Jamie Oliver launched a campaign in 2005 to improve the quality of school dinners in the UK, his most shocking revelations were not, I think, that children were being fed Turkey Twizzlers and chips for lunch, but, rather, that the British government is willing to spend so little on what children eat at school. Last year, the state spent an average of 67p per primary school pupil per meal, per day. This rose to 88p for those in high school.

Michael Gove has recently announced another inquiry into the quality of school meals – this time headed up by the altogether posher-than-Jamie Henry Dimbleby, the founder of the Leon chain of restaurants, who also seems to spend the odd holiday with the Education Secretary in Marrakech. It’s a tough life.

But as Sheila Dillon comments during this episode of the Food Programme:

Martha Payne, a nine year-old who seems to understand better than many adults, that dinner ladies, or even individual school kitchens, are not the source of the school dinner problem. It has far deeper roots.

When did it become acceptable to serve schoolchildren junk food for lunch? The way we feed children tells us a great deal about how we conceptualise childhood. Or, put another way, what we define as ‘children’s food’ says as much about our attitudes towards food as it does about children.

The idea that children should be fed separately from adults has a relatively long pedigree. The Victorians argued that children – and women – should be fed bland, carbohydrate-heavy meals to prevent their delicate digestive systems from being overtaxed. Fruit, meat, spices, and fresh vegetables should be eaten only in strict moderation.

There is, of course, a disconnect between what experts – medical professionals, childrearing specialists – recommend, and what people actually eat. In the late nineteenth-century Cape Colony, for instance, the pupils at an elite girls’ school near Cape Town were fed a diet rich in red meat and fresh fruit and vegetables.

But the belief that children’s bodies are delicate and potentially vulnerable to disruption was an indicator of shifts in thinking about childhood during the mid and late nineteenth century. The notion that children need to be protected – from work, hunger, poverty, and exploitation and abuse from adults – emerged at around the same time. As children were to be shielded from potential danger, so they were to eat food which, it was believed, was ideally suited to digestive systems more susceptible to upset and illness than those of adults.

But as scientists became interested in the relationship between food and health – in nutrition, in other words – towards the end of the 1800s, paediatricians, demographers, and others concerned about high rates of child mortality during the early twentieth century began to look more closely at what children were being fed. For instance, in the 1920s and 1930s, scientists in Britain and the United States drew a connection between the consumption of unhealthy or diseased food – particularly rotten milk – and high rates of diarrhoea, then almost always fatal, among children in these countries.

They were also interested in what should constitute a healthy diet for a child. As childhood became increasingly medicalised in the early twentieth century – as pregnancy, infancy, and childhood became seen as periods of development which should be overseen and monitored by medical professionals – so children’s diets became the purview of doctors as well. As RJ Blackman, the Honorary Surgeon to the Viceroy of India (no, me neither), wrote in 1925:

Food, though it is no panacea for the multitudinous ills of mankind, can do much, both to make or mar the human body. This is particularly so with the young growing child. All the material from which his body is developed has to come from the food he eats. Seeing that he doubles or trebles his weight in the first year of life, and increases it twenty-fold by the time he reaches adult stature, it will be seen that food has much to accomplish. Naturally, if the food be poor, the growth and physique will be poor; and if good, the results will be good.

Informed by recent research into dietetics, doctors advised parents to feed their children varied diets which included as much fresh, vitamin-containing produce as possible. In a popular guide to feeding young children, The Nursery Cook Book (1929), the former nurse Mrs K. Jameson noted:

Many years ago, I knew a child who was taken ill at the age of eight years, and it was thought that one of her lungs was affected. She was taken to a children’s specialist in London. He could find nothing radically wrong, but wrote out a diet sheet. By following this…the child became well in a month or two. This shows how greatly the health is influenced by diet.

This diet, she believed, should be designed along scientific principles:

Since starting to write this book I have come across an excellent book on vitamins called ‘Food and Health’ (Professor Plimmer), and I have found it very helpful. I have endeavoured to arrange the meals to contain the necessary vitamins, as shown in the diagram of ‘A Square Meal’ at the beginning of the book.

Indeed, she went on to explain that children who were properly fed would never need medicine.

In 1925, advising mothers on how to wean their babies in the periodical Child Welfare, Dr J. Alexander Mitchell, the Secretary for Public Health in the Union of South Africa, counselled against boiling foodstuffs for too long as it ‘destroys most of the vitamins.’ He argued that children’s diets ‘should include a good proportion of proteins or fleshy foods and fats’, as well as plenty of fruit, fresh vegetables, milk, and ‘porridge…eggs, meat, juice, soups’.

What is so striking about the diets described by Mitchell, Jameson, and others is how similar they were to what adults would have eaten. Children were to eat the same as their parents, but in smaller quantities and in different proportions. For example, some doctors counselled against children being allowed coffee, while others believed that they should limit their intake of rich foods.

So what is the origin of the idea that children should be cajoled into eating healthily by making food ‘fun’? Mrs Jameson’s recipes might have cute names – she calls a baked apple ‘Mr Brownie with his coat on’ – but they’re the same food as would be served to adults. Now, our idea of ‘children’s food’ differs from that of the 1920s and 1930s. When we think of children’s food, we imagine sweets, soft white sandwich bread, pizza, hotdogs, and brightly coloured and oddly shaped foodstuffs designed to appeal to children.

As Steven Mintz argues in his excellent history of American childhood, Huck’s Raft (2004), the 1950s and 1960s were child-oriented decades. Not only were there more children as a result of the post-war baby boom, but with the growing prosperity of late twentieth-century America, more money was spent on children than ever before. Families tended to be smaller, and increasing pocket money transformed children into mini-consumers.

Children either bought, or had their parents buy for them, a range of consumer goods aimed at them: from clothes and toys, to ‘child-oriented convenience foods… Sugar Frosted Flakes (introduced in 1951), Sugar Smacks (in 1953), Tater Tots (in 1958), and Jiffy Pop, the stovetop popcorn (also in 1958)’.

The same period witnessed a shift in attitudes towards childrearing. Families became increasingly child-centred, with meals and routines designed around the needs of children, rather than parents. In many ways, this was a reaction against the orthodoxies of the pre-War period, which tended to emphasise raising children to be obedient, well-behaved, and self-disciplined.

So the definition of children’s food changed again. For the parents of Baby Boomers, food was made to be appealing to children. Fussiness was to be accommodated and negotiated, rather than ignored. And children’s desire for food products advertised on television was to be indulged.

I am exaggerating to make a point – in the US and the UK, children during the 1960s and 1970s certainly ate less junk than they do now, and this new understanding of children’s food emerged in different ways and at different times in other parts of the world – but this change represented a bonanza for the burgeoning food industry. Although the industry’s attempts to advertise to children are coming under greater scrutiny and regulation (and rightly so), it has a vested interest in encouraging children and their parents to believe that such products constitute good food for children.

I think that it’s partly this shift in thinking about children’s relationship with food – that they should eat only that which they find appealing, and that children will only eat food which is ‘fun’, brightly coloured, oddly shaped, and not particularly tasty – that allowed for the tolerance of such poor school food for so long in Britain.

Martha’s blog is a powerful corrective to this: she, her classmates, and contributors all have strong opinions about what they eat, and they like a huge variety of food – some of it sweets, but most of it pretty healthy. The irony is that in – apparently – pandering to what children are supposed to like, politicians and policy makers seem to find it fairly difficult to listen to what a child actually has to say. If we’re to persuade children to eat well, then not only should we encourage them to talk and to think about food, but we need to listen to what they have to say about it.

Further Reading

Linda Bryder, A Voice for Mothers: The Plunket Society and Infant Welfare, 1907-2000 (Auckland: Auckland University Press, 2003).

Deborah Dwork, War is Good for Babies and Other Young Children: A History of the Infant and Child Welfare Movement in England 1898-1918 (London and New York: Tavistock Publications, 1987).

Steven Mintz, Huck’s Raft: A History of American Childhood (Cambridge, Mass.: Belknap Press, 2004).

Creative Commons License
Tangerine and Cinnamon by Sarah Duff is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Aussie Rules?

A month ago I had the pleasing experience of packing for Perth. In South African slang, ‘packing for Perth’ means emigrating to Australia. In the decade that followed the transition to democracy, around 800,000 mainly white South Africans left – some for New Zealand, Britain, and the United States, but the bulk went to Australia.

Australia’s appeal to these South Africans was based on its political and economic stability, its relatively low crime rate, and also on its familiarity. Its landscape and cities feel similar to some parts of South Africa, and white, middle-class South Africans seemed to have little difficulty assimilating into life in white, middle-class Australia.

Shortly after I began university, my best friend’s family moved to Tasmania; and we knew of others who settled in Perth, where the majority of South Africans seeking permanent residence were directed. At the time, I was mystified by this enthusiasm for a country about which I knew relatively little. Neighbours and Home and Away having passed me by, when I thought of Australia I imagined the worlds of Picnic at Hanging Rock and My Brilliant Career – and also of The Castle and Strictly Ballroom. It was a rather confusing picture.

Then, more recently, I became aware of Australia as a country with an enthusiasm for good food: in television series like My Restaurant Rules and MasterChef, and in the recipe books and magazines of people like Maggie Beer, Stephanie Alexander, Bill Granger, and Donna Hay. Particularly on MasterChef, Australian cooks and chefs speak often – and approvingly – of something called ‘modern Australian cooking’. I went to Australia in the hope of identifying this new cuisine. But I returned none the wiser.

I ate extremely well in Australia. I am very lucky to have friends who not only let me stay with them, but who are also amazingly good cooks. The meals I had at cafes and restaurants were excellent, and even the conference food was the best I have ever eaten. (There were spring rolls for lunch and lamingtons for tea. Enough said.)

Yet in all this, I struggled to find something that was uniquely, and particularly ‘modern Australian’ about the food I ate. I did go out of my way to consume those delicacies and dishes which either originated there or have come to be associated with the country: lamingtons and Anzac biscuits (a revelation), friands (I ate my weight’s worth in them), burgers with beetroot (up to a point), and litres and litres of flat whites, especially in Melbourne. Fruit bread is a fantastic invention. I tried Vegemite in London and decided that once was enough. And, alas, I forgot to eat a pavlova, but given the amount I did manage to consume, it was probably just as well.

A flat white in Fremantle.

I also ate an incredible omelette at a Vietnamese restaurant in Marrickville in Sydney, and a pleasingly thin-crusted pizza at an Italian joint in Melbourne’s Yarraville. Australian food is also immigrant food: it’s comprised of the cuisines of the Greeks, Italians, Vietnamese, Chinese, and others who settled in the country over the past century or so.

But ‘modern Australian’? I’m not sure that I ate that – possibly it’s only to be found in high-end restaurants, none of which I could afford. One culinary tradition which I did not see – at restaurants or in the cookery sections of bookshops – was Aboriginal cooking. Although Colin Bannerman identifies a small resurgence of interest in ‘bush tucker’, it’s telling that this cuisine is not included in mainstream Australian recipe books or cookery programmes. It isn’t modern Australian.

I don’t want to draw the obvious – glib – conclusion that this is suggestive of how Aboriginals have been ostracised from Australian society. Aboriginals are socially and economically marginalised, and suffer disproportionately from appallingly high rates of alcoholism, domestic violence, drug abuse, and other social problems, but I don’t think that Australian cooks and chefs ignore their cuisine out of a desire to exclude them further (unless I’m being stunningly naïve).

I think that this unwillingness to explore Aboriginal cooking stems from ignorance and a wariness of the complicated politics of engaging with a different society’s culinary traditions. More importantly, it’s also the product of how a twenty-first century Australianness is being constructed in relation to food and cooking. It’s for this reason that I’m interested in this idea of modern Australian cuisine.

Australian cooking queen Maggie Beer is generous in her praise of Australia. In her recipe books, which tend to focus on her farm in South Australia’s Barossa Valley, she argues that fresh Australian produce is key to the success of not only her recipes, but also her restaurant and food business. Her understanding of an Australian culinary tradition does not include Aboriginal cuisine, but is, rather, rooted in an appreciation for the country’s landscape and agriculture.

Organic potatoes in Melbourne’s Victoria Market.

Although she may use ingredients which are unique to Australia – like yabbies – or which grow there in abundance – such as quinces – her cooking is overwhelmingly European in nature: it draws its inspiration from the culinary traditions of France and Italy. Adrian Peace sums up this rethinking of an Australian food heritage particularly well in an article about the Slow Food Movement’s popularity in the Barossa Valley:

Both ‘tradition’ and ‘heritage’ became intrinsic to Barossa Slow’s discourse: ‘The Barossa is the heart of Australian wine and home to the country’s oldest and richest food traditions. The combination of this rich European heritage and the fresh vitality of Australia is embodied in its lifestyle and landscape.’ Aboriginal settlement and indigenous food were thus instantly erased in favour of a historical perspective in which nothing of cultural consequence preceded the arrival of Europeans and their imported foodstuffs. With this historical baseline in place, an avalanche of terms and phrases could be unleashed to drive home the idea of a historically encompassing regional culture in which food had played a prominent part. ‘Oldest food traditions,’ ‘rich in food traditions,’ ‘the heritage of food,’ ‘rich European heritage,’ and (of particular note) ‘the preservation of culinary authenticity’ were some of the phrases that entered into circulation.

Younger, city-based food writers like Donna Hay and Bill Granger place just as much emphasis on buying local Australian produce, even if their recipes draw inspiration from more recent immigrant cuisines, primarily those of southeast Asia – Melbourne and Sydney have substantial Chinatowns – and the southern Mediterranean.

All of these writers claim that their cooking, which is drawn from the cuisines of the immigrants who’ve settled in Australia, is ‘authentically’ Australian partly because they use local produce and advocate seasonal eating.

Australian garlic at Victoria Market.

Ironically, if this is modern Australian cooking, then it is very similar to the Australian cuisine of the early twentieth century, during a period in which Australia was formulating a new, united identity after federation in 1901. The Anzac biscuit – a delicious combination of oats, golden syrup, butter, and desiccated coconut – can be seen as symbolic of this early Australian identity. Baked by the wives, sisters, and mothers of the members of the Australian and New Zealand Army Corps during the first world war, the biscuits became closely associated with the disaster at Gallipoli in 1915, when 8,141 Australian troops were killed in what was, in retrospect, a pointless battle. Sian Supski explains:

The biscuits have come to represent the courage of the soldiers at Gallipoli and to signify the importance of the role women played on the homefront. However, within this narrative is also a sleight of hand: Anzac biscuits link Australians to a time past, to a time that is regarded as ‘the birth of our nation’. In this sense, Anzac biscuits link Australians powerfully and instantly to a time and place that is regarded as the heart of Australian national identity. In the words of Graham Seal, ‘Anzac resonates of those things that most Australians have continued to hold dear about their communal sense of self.’

Anzac biscuits are a kind of culinary symbol of Australia – a foodstuff connected to the forging of the Australian nation. But for all their Australianness, they are also strongly suggestive of Australia’s immigrant roots and global connections: there is some evidence to suggest that they were based on Scottish recipes, and they were sent to soldiers fighting what was, in many ways, an imperial conflict.

Australian cooking during the nineteenth and early twentieth centuries emphasised the country’s position within the Empire: the cooking described in early recipe books was British cuisine adapted, to some extent, to Australian circumstances. Publications like Mina Rawson’s Queensland Cookery and Poultry Book (1878) did acknowledge the quality of local produce, and even included recipes for jams made from indigenous berries. Although, like elites all over the world, the Australian upper middle classes aspired to eat a rarefied French cuisine, everyone else cooked an approximation of what they ate at ‘home’ (or ‘Home’). The Sunday roast remained the highlight of the week’s eating; heavy puddings featured even in summer; and teatime was a significant moment in the day.

At the same time, Australia’s economy was becoming increasingly dependent on the export of food: innovations in refrigeration meant that fresh produce could be shipped around the world. Australia sent meat, fruit, and vegetables to Britain. The posters of the Empire Marketing Board – which was established in 1926 to promote trade within the British Empire – portrayed Australia as a land of abundance. The British children sent to Australia between the second world war and 1967 were told that they were going to a land of ‘oranges and sunshine’.

So this earlier Australian culinary tradition also mingled Australian produce with a foreign – this time British – culinary tradition in the name of producing something ‘authentically’ Australian.

In Sydney’s Chinatown.

For all its attempts to associate a modern Australianness with a cosmopolitan and sophisticated liking for, and knowledge of, the cooking of southeast Asia and other regions, modern Australian cooking is very similar to that of the Australian cuisine of the early twentieth century – of an Australia anxious to assert its position within the Empire and to prove its status as a ‘civilised’ nation through ‘civilised’ eating.

Both of these traditions ground themselves in an appreciation for an empty landscape: one that is devoid of human – particularly Aboriginal – life, but that is bursting with good quality fresh produce, most of which was, ironically, introduced from abroad.

Further Reading

I am very grateful to Alex Robinson who recommends two particularly good histories of food and cooking in Australia:

Barbara Santich, Bold Palates: Australia’s Gastronomic Heritage (Adelaide: Wakefield Press, 2012).

Michael Symons, One Continuous Picnic: A Gastronomic History of Australia (Melbourne: Melbourne University Press, 2007).

Sources cited here:

Colin Bannerman, ‘Indigenous Food and Cookery Books: Redefining Aboriginal Cuisine,’ Journal of Australian Studies, vol. 30, no. 87 (2006), pp. 19-36.

Adrian Peace, ‘Barossa Slow: The Representation and Rhetoric of Slow Food’s Regional Cooking,’ Gastronomica: The Journal of Food and Culture, vol. 6, no. 1 (Winter 2006), pp. 51-59.

Barbara Santich, ‘The High and the Low: Australian Cuisine in the Late Nineteenth and Early Twentieth Centuries,’ Journal of Australian Studies, vol. 30, no. 87 (2006), pp. 37-49.

Sian Supski, ‘Anzac Biscuits – A Culinary Memorial,’ Journal of Australian Studies, vol. 30, no. 87 (2006), pp. 51-59.


Gourmet Traveller

One of the perks of academia is being able to travel for research, study, and conferences. The odd side-effect of this is that academics become unwitting experts in the quality of travel food – by which I mean the meals available in airports and railway stations and on planes and trains.

I’ve never really understood the griping about airline meals: they’re certainly not the most inspired dinners – and, especially, breakfasts – I’ve ever eaten, and I’ve probably drunk the worst coffee in the world on long-haul flights between Cape Town and London, but I haven’t ever had anything that was actively offensive.

In fact, I rather liked the lamb biryani with cashew nuts and caramelised bits of onion I ate on a flight from Qatar to Joburg, and the macadamia and honey ice cream I had while flying from Perth to Melbourne. I’ve had considerably worse food on trains. On a nine-hour journey between Montrose in northern Scotland and London, the dining car was closed because the tea urn was broken. Which, although an interesting commentary on the centrality of tea to the British diet, was nevertheless unpleasant. A woman can subsist on crisps for only so long.

I wonder why there’s so much complaining about airline food. I think it has something to do with the overall unpleasantness of economy-class flying – the cramped seats, the mucky loos, and the dismaying misfortune of being stuck beside fellow passengers with strange personal habits – but it’s also connected, to some extent, with the ways in which we understand travel.

I’ve just returned from a month in Australia – it was amazing – and became particularly aware of how much I spend on food when I travel, because Australia is probably the most expensive country I’ve ever visited. But I still went out of my way to eat friands and Anzac biscuits and to drink fantastic coffee, to try to understand the cities I visited.

There are few non-fiction genres which blur so easily into each other as food and travel writing – as attested by the continuing popularity of magazines like the Australian Gourmet Traveller, and the legion of food-and-travel cookery books and blogs. The best food writing is a kind of inadvertent travel writing. Claudia Roden’s writing on the Middle East and North Africa, Fuchsia Dunlop’s on China, Madhur Jaffrey’s on India, and, to a lesser extent, Elizabeth David’s on France, are as much introductions to these countries and regions at particular moments in time as they are recipe books.

And it’s striking how much travel writing focusses on food. One of the most memorable sections of Robert Byron’s The Road to Oxiana (1937) – by far my favourite travel narrative ever – features a blue porcelain bowl of chicken mayonnaise.

It was in Isfahan I decided sandwiches were insupportable, and bought a blue bowl, which Ali Asgar used to fill with chicken mayonnaise before starting on a journey. Today there had been treachery in the Gastrell’s kitchen, and it was filled with mutton. Worse than that, we have run out of wine.

Later, stranded in the middle of the night and in the freezing cold on the road between Herat and Murghab, Byron and his travelling companions take refuge in a makeshift tent after their car breaks down:

Quilts and sheep-skins replaced our mud-soaked clothes. The hurricane lantern, suspended from a strut in the hood, cast an appropriate glow on our dinner of cold lamb and tomato ketchup out of the blue bowl, eggs, bread, cake, and hot tea. Afterwards we settled into our corners with two Charlie Chan detective stories.

Byron uses food to suggest his and his companions’ feelings at particular moments of the journey. Relieved to have reached Maimana – now on the Afghan border with Turkmenistan – he and Christopher Sykes are treated to a feast:

The Governor of Maimena was away at Andkhoi, but his deputy, after refreshing us with tea, Russian sweets, pistachios, and almonds, led us to a caravanserai off the main bazaar, a Tuscan-looking old place surrounded by wooden arches, where we have a room each, as many carpets as we want, copper basins to wash in, and a bearded factotum in high-heeled top-boots who laid down his rifle to help with the cooking.

It will be a special dinner. A sense of well-being has come over us in this land of plenty. Basins of milk, pilau with raisins, skewered kabob well salted and peppered, plum jam, and some new bread have already arrived from the bazaar; to which we have added treats of our own, patent soup, tomato ketchup, prunes in gin, chocolate, and Ovaltine. The whisky is lasting out well.

Byron is less interested in what the people around him are eating than in how food reflects his experiences of his journey through the Middle East and Central Asia. Writing in 1980, in an essay included in the collection What Am I Doing Here, Bruce Chatwin uses food to emphasise his sense of what was lost – culturally, socially – during the communist revolution in Afghanistan:

And we shall lose the tastes – the hot, coarse, bitter bread; the green tea flavoured with cardamoms; the grapes we cooled in the snow-melt; and the nuts and dried mulberries we munched for altitude sickness.

His elegy for Afghanistan is problematic on so many levels – his deliberate misunderstanding of Afghan politics, his romanticising of pre-1960s Afghanistan, and his own dubious reputation for factual accuracy – but it’s an evocative piece of writing which conjures up what feels like a realistic and layered portrayal of the regions of Afghanistan which Chatwin visited.

Describing food is absolutely integral to this: unlike foreign religious ceremonies or social customs, we can all sample – or imagine sampling – the cuisines of other societies. Food allows us some purchase on ways of living which are unfamiliar to us: we can use food to try to understand a different society, and also to judge it.

In her account of a journey through parts of West Africa in the mid-1890s, Mary Kingsley used food – this time cannibalism – to explain what she perceived to be the ‘backwardness’ of Fang society:

It is always highly interesting to observe the germ of any of our own institutions existing in the culture of a lower race. Nevertheless it is trying to be hauled out of one’s sleep in the middle of the night, and plunged into this study. Evidently this was a trace of an early form of the Bankruptcy Court; the court which clears a man of his debt, being here represented by the knife and the cooking pot; the whitewashing, as I believe it is termed with us, also shows, only it is not the debtor who is whitewashed, but the creditors doing themselves over with white clay to celebrate the removal of their enemy from his sphere of meretricious activity. This inversion may arise from the fact that whitewashing a creditor who was about to be cooked would be unwise, as the stuff would boil off the bits and spoil the gravy. There is always some fragment of sound sense underlying African institutions.

Uncivilised – in this case, taboo-breaking – food and eating habits suggest an uncivilised society.

When I was in Perth, I dropped into the fantastic New Edition bookshop in William Street. Having taken photographs of the incredible mural which covers the shop’s back wall, I was afflicted with guilt – and also the same desperate desire that I feel in most independent bookshops for it to survive and flourish (which makes visiting independent bookshops needlessly stressful) – so I bought a book: a small, light collection of Italo Calvino’s essays, Under the Jaguar Sun (1983).

The three essays which comprise the collection are the germ of a longer book which Calvino had planned to write on the five senses. He completed only these three before his death, and the titular essay, happily, focuses on the sensation of taste. It’s about a couple who visit Oaxaca in Mexico. Their interest in the country’s cuisine becomes, gradually, the purpose of the holiday itself:

From one locality to the next the gastronomic lexicon varied, always offering new terms to be recorded and new sensations to be defined. …we found guacamole, to be scooped up with crisp tortillas that snap into many shards and dip like spoons into the thick cream (the fat softness of the aguacate – the Mexican national fruit, known to the rest of the world under the distorted name of ‘avocado’ – is accompanied and underlined by the angular dryness of the tortilla, which, for its part, can have many flavours, pretending to have none); then guajolote con mole poblano – that is, turkey with Puebla-style mole sauce, one of the noblest among the many moles, and most laborious (the preparation never takes less than two days), and most complicated, because it requires several different varieties of chile, as well as garlic, onion, cinnamon, cloves, pepper, cumin, coriander, and sesame, almonds, raisins, and peanuts, with a touch of chocolate; and finally quesadillas (another kind of tortilla, really, for which cheese is incorporated in the dough, garnished with ground meat and refried beans).

This obsession with the country’s food coincides, unexpectedly, with their shared enthusiasm for Mexico’s Pre-Columbian past. After a visit to a ‘complex of ruins’ in Monte Albán, where their guide implies that the losers of a ballgame played at one of the ruined temples were not only ritually slaughtered, but also eaten by the temple’s priests and the victorious team, Olivia, the narrator’s partner, becomes preoccupied with discovering how these human remains were prepared. The story implies that her desire to eat ever-more exotic Mexican dishes stems from her belief – never articulated – that some remnant of these cannibalistic feasts must exist within contemporary Mexican cooking.

The narrator reflects:

the true journey, as the introjection of an ‘outside’ different from our normal one, implies a complete change of nutrition, a digesting of the visited country – its fauna and flora and its culture (not only the different culinary practices and condiments but the different implements used to grind the flour or stir the pot) – making it pass between the lips and down the oesophagus. This is the only kind of travel that has a meaning nowadays, when everything visible you can see on television without rising from your easy chair.

For Olivia, eating becomes a way, literally, to imbibe the culture, politics, and history of Mexico. If she can’t be Mexican, then she can, physically, become closer to Mexico – its land and people – itself.

I don’t, obviously, advocate cannibalism as part of the average tourist itinerary – it’s illegal in most countries, for one thing – but I think that this idea of ‘eating’ a country is a useful way of exploring how we use food to construct national identities.

In some ways, food stands in for a society: we eat piles of pancakes with bacon and maple syrup in the United States as a way of engaging with what many believe to be an excessive, consumerist society. Travellers who think of themselves as being in pursuit of the ‘real’ – unpredictable, utterly unfamiliar, occasionally dangerous – India eat the delicious, yet potentially diarrhoea-inducing, street food of the country: eating the more familiar offerings at hotels signifies a failure to leave the tourist bubble. Since the 1940s and 1950s, France has promoted its cuisine as a symbol of its national culture. (Something which Charles de Gaulle may have been thinking about when he wondered how he would govern a nation that has two hundred and forty-six different kinds of cheese.) French food is sophisticated, so French society is sophisticated.

There are grains of truth in all these stereotypes, but they remain just that – simplified and often clichéd understandings of complex societies. They are also, largely, not a real reflection of how most people eat: they exclude the ingredients bought at supermarkets and the meals eaten at fast food joints. So if we want, truly, to understand countries and societies through their food, we have to be willing to eat that which is, potentially, less interesting and, perhaps, less enticing than the exotic meals described in travel books.
