I don’t even know where to begin with this.
My expectations of the London Olympics’ opening ceremony were so low that, I suppose, I would have been impressed if it had featured Boris as Boudicca, driving a chariot over the prostrate figures of the Locog committee. (Actually, now that I think about it, that would have been fairly entertaining.)
Appalled by the organising committee’s slavishly sycophantic attitude towards its sponsors and their ‘rights’ – which caused them to ban home-knitted cushions from being distributed to the Olympic athletes, and to require shops and restaurants to remove Olympic-themed decorations and products – as well as by the rule that online articles and blog posts may not link to the official 2012 site if they’re critical of the games, the decision to make the official entrance of the Olympic site a shopping mall, and the creation of special lanes for VIP traffic, I wasn’t terribly impressed by the London Olympics.
But watching the opening ceremony last night, I was reduced to a pile of NHS-adoring, Tim Berners-Lee worshipping, British children’s literature-loving goo. Although a reference to the British Empire – other than the arrival of the Windrush – would have been nice, I think that Danny Boyle’s narrative of British history which emphasised the nation’s industrial heritage, its protest and trade union movements, and its pop culture, was fantastic.
As some commentators have noted, this was the opposite of the kind of kings-and-queens-and-great-men history curriculum which Michael Gove wishes schools would teach. Oh and the parachuting Queen and Daniel Craig were pretty damn amazing too.
There was even a fleeting, joking reference to the dire quality of British food during the third part of the ceremony. There was something both apt and deeply ironic about this. On the one hand, there has been extensive coverage of Locog’s ludicrous decision to allow manufacturers of junk food – Coke, Cadbury’s, McDonald’s – not only to be official sponsors of a sporting event, but to provide much of the catering. (McDonald’s even tried to ban other suppliers from selling chips on the Olympic site.)
But, on the other, Britain’s food scene has never been in better shape. It has excellent restaurants – and not only at the top end of the scale – and thriving and wonderful farmers’ markets and street food.
It’s this which makes the decision not to open up the catering of the event to London’s food trucks, restaurants, and caterers so tragic. It is true that meals for the athletes and officials staying in the Village have been locally sourced and made from ethically-produced ingredients, and this is really great. But why the rules and regulations which actually make it more difficult for fans and spectators to buy – or bring their own – healthy food?
Of course, the athletes themselves will all be eating carefully calibrated, optimally nutritious food. There’s been a lot of coverage of the difficulties of catering for so many people who eat such a variety of different things. The idea that athletes’ performance is enhanced by what they consume – supplements, food, and drugs (unfortunately) – has become commonplace.
Even my local gym’s café – an outpost of the Kauai health food chain – serves meals which are, apparently, suited for physically active people. I’ve never tried them, partly because the thought of me as an athlete is so utterly nuts. (I’m an enthusiastic, yet deeply appalling, swimmer.)
The notion that food and performance are linked in some way has a long pedigree. In Ancient Greece, where diets were largely vegetarian but supplemented occasionally with (usually goat) meat, evidence suggests that athletes at the early Olympics consumed more meat than usual to improve their performance. Ann C. Grandjean explains:
Perhaps the best accounts of athletic diet to survive from antiquity, however, relate to Milo of Croton, a wrestler whose feats of strength became legendary. He was an outstanding figure in the history of Greek athletics and won the wrestling event at five successive Olympics from 532 to 516 B.C. According to Athenaeus and Pausanius, his diet was 9 kg (20 pounds) of meat, 9 kg (20 pounds) of bread and 8.5 L (18 pints) of wine a day. The validity of these reports from antiquity, however, must be suspect. Although Milo was clearly a powerful, large man who possessed a prodigious appetite, basic estimations reveal that if he trained on such a volume of food, Milo would have consumed approximately 57,000 kcal (238,500 kJ) per day.
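Grandjean’s figure is easy to sanity-check. Using rough energy densities of my own choosing (they aren’t given in the article) – about 3,000 kcal per kilogram for fatty meat, 2,650 kcal per kilogram for bread, and 800 kcal per litre of wine – the reported daily intake lands in the same region:

```latex
% Back-of-envelope check of the ~57,000 kcal/day estimate,
% using assumed (unsourced) energy densities.
9\,\mathrm{kg} \times 3000\,\tfrac{\mathrm{kcal}}{\mathrm{kg}}
  + 9\,\mathrm{kg} \times 2650\,\tfrac{\mathrm{kcal}}{\mathrm{kg}}
  + 8.5\,\mathrm{L} \times 800\,\tfrac{\mathrm{kcal}}{\mathrm{L}}
  \approx 57{,}650\ \mathrm{kcal}
  \quad(\approx 241{,}000\ \mathrm{kJ},\ \text{since } 1\ \mathrm{kcal} = 4.184\ \mathrm{kJ})
```

That is well over twenty times a modern adult’s typical daily requirement, which is presumably why Grandjean treats the ancient reports with such suspicion.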
Eating more protein – although perhaps not quite as much as reported by Milo of Croton’s fans – helps to build muscle, and would have given athletes an advantage over other, leaner competitors.
Another ancient dietary supplement seems to have been alcohol. Trainers provided their athletes with alcoholic drinks before and after training – in much the same way that contemporary athletes may consume sports drinks. But some more recent sportsmen seem to have gone a little overboard, as Grandjean notes:
as recently as the 1908 Olympics, marathon runners drank cognac to enhance performance, and at least one German 100-km walker reportedly consumed 22 glasses of beer and half a bottle of wine during competition.
Drunken German walker: I salute you and your ability to walk in a straight line after that much beer.
The London Olympic Village is, though, dry. Even its pub only serves soft drinks. With the coming of the modern games – which coincided with the development of sport and exercise science in the early twentieth century – diets became the subject of scientific enquiry. The professionalization of sport – with athletes more reliant on doing well in order to make a living – only served to increase the significance of this research.
One of the first studies on the link between nutrition and the performance of Olympic athletes was conducted at the 1952 games in Helsinki. The scientist E. Jokl (about whom I know nothing – any help gratefully received) demonstrated that those athletes who consumed fewer carbohydrates tended to do worse than those who ate more. Grandjean comments:
His findings may have been the genesis of the oft-repeated statement that the only nutritional difference between athletes and nonathletes is the need for increased energy intake. Current knowledge of sports nutrition, however, would indicate a more complex relationship.
As research into athletes’ diets has progressed, so fashions for particular supplements and foods have emerged over the course of the twentieth century. Increasing consumption of protein and carbohydrates has become a common way of improving performance. Whereas during the 1950s and 1960s, athletes simply ate more meat, milk, bread, and pasta, since the 1970s, a growing selection of supplements has allowed sportsmen and -women to add more carefully calibrated and targeted forms of protein and carbohydrates to their diets.
Similarly, vitamin supplements have been part of athletes’ diets since the 1930s. Evidence from athletes competing at the 1972 games in Munich demonstrated widespread use of multivitamins, although now, participants tend to choose more carefully those vitamins which produce specific outcomes.
But this history of shifting ideas around athletes’ diets cannot be understood separately from the altogether more shadowy history of doping – of using illicit means of improving one’s performance. Even the ancient Greeks and Romans used stimulants – ranging from dried figs to animal testes – to suppress fatigue and boost performance.
More recently, some of the first examples of doping during the nineteenth century come from cycling (nice to see that some things don’t change), and, more specifically, from long-distance, week-long bicycle races which depended on cyclists’ reserves of strength and stamina. Richard IG Holt, Ioulietta Erotokritou-Mulligan, and Peter H. Sönksen explain:
A variety of performance enhancing mixtures were tried; there are reports of the French using mixtures with caffeine bases, the Belgians using sugar cubes dripped in ether, and others using alcohol-containing cordials, while the sprinters specialised in the use of nitroglycerine. As the race progressed, the athletes increased the amounts of strychnine and cocaine added to their caffeine mixtures. It is perhaps unsurprising that the first doping fatality occurred during such an event, when Arthur Linton, an English cyclist who is alleged to have overdosed on ‘tri-methyl’ (thought to be a compound containing either caffeine or ether), died in 1886 during a 600 km race between Bordeaux and Paris.
Before the introduction of doping regulations, the use of performance enhancing drugs was rife at the modern Olympics:
In 1904, Thomas Hicks, winner of the marathon, took strychnine and brandy several times during the race. At the Los Angeles Olympic Games in 1932, Japanese swimmers were said to be ‘pumped full of oxygen’. Anabolic steroids were referred to by the then editor of Track and Field News in 1969 as the ‘breakfast of champions’.
But regulation – the first anti-drugs tests were undertaken at the 1968 Mexico City games – didn’t stop athletes from doping: the practice simply went underground. The USSR and East Germany allowed their representatives to take performance enhancing drugs, and an investigation undertaken after Ben Johnson was disqualified for doping at the Seoul games revealed that at least half of the athletes who competed at the 1988 Olympics had taken anabolic steroids. In 1996, some athletes called the summer Olympics in Atlanta the ‘Growth Hormone Games’ and the 2000 Olympics were dubbed the ‘Dirty Games’ after the disqualification of Marion Jones for doping.
At the heart of the issue of doping and the use of supplements is the distinction between legitimate and illegitimate means of enhancing performance. The idea that it is unfair for athletes to take drugs to run, swim, or cycle faster, or to jump further and higher, is a relatively recent one. It’s worth noting that the World Anti-Doping Agency, which is responsible for establishing and maintaining standards for anti-doping work, was formed only in 1999.
What makes anabolic steroids different from consuming high doses of protein, amino acids, or vitamins? Why, indeed, was Caster Semenya deemed to have an unfair advantage at the 2009 IAAF World Championships, while the blade-running Oscar Pistorius was not?
I’m really pleased that both Semenya and Pistorius are participating in the 2012 games – I’m immensely proud that Semenya carried South Africa’s flag into the Olympic stadium – but their experiences, as well as the closely intertwined histories of food supplements and doping in sport, demonstrate that the idea of an ‘unfair advantage’ is a fairly nebulous one.
Elizabeth A. Applegate and Louis E. Grivetti, ‘Search for the Competitive Edge: A History of Dietary Fads and Supplements,’ The Journal of Nutrition, vol. 127, no. 5 (1997), pp. 869S-873S.
Ann C. Grandjean, ‘Diets of Elite Athletes: Has the Discipline of Sports Nutrition Made an Impact?’ The Journal of Nutrition, vol. 127, no. 5 (1997), pp. 874S-877S.
Richard IG Holt, Ioulietta Erotokritou-Mulligan, and Peter H. Sönksen, ‘The History of Doping and Growth Hormone Abuse in Sport,’ Growth Hormone & IGF Research, vol. 19 (2009), pp. 320-326.
Tangerine and Cinnamon by Sarah Duff is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.
Today’s City Press includes a fantastically interesting article about the increased incidence of obesity in post-1994 South Africa. The piece explores the links between the country’s transition to democracy and the fact that 61% of all South Africans – 70% of women over the age of 35, 55% of white men 15 years and older, and a quarter of all teenagers – are obese or overweight.
The reasons for these incredibly high levels of obesity are, as the article acknowledges, complex. In many ways, South Africa conforms to a pattern emerging throughout the developing world. In a report published a few months ago, the World Health Organisation noted that lifestyle-related diseases – like diabetes, high blood pressure, heart disease, and obesity – are now among the main causes of death and disease in developing nations. These diseases of affluence are no longer limited to the West.
For the new South African middle classes, fast food and branded processed products, like Coke, are markers of sophistication: of having ‘made it’ in this increasingly prosperous society. But, as in the rest of the world, those at the top of the social scale tend not to be overweight:
contrary to popular myth, obesity is not a ‘rich man’s disease’.
Indeed, the most affluent urbanites can get into their SUVs and drive to gym or to Woolies food hall where, for a price, they can load up their trolleys with fresh, top-quality groceries – from free-range chickens to organic lemons.
This means, says [Prof Salome] Kruger, that ‘the highest income earners are thinner’.
For urban dwellers who earn less, fresh food is usually more difficult, and expensive, to buy than processed non-food:
But for your average city dweller – earning money, but not necessarily enough to own a car to get them out to the major supermarket malls – food is where you find it.
Typically, this is in small corner shops selling a limited, and often more expensive, range of fresh foods. Fruit and veg can be hard to find among the toothpaste and toilet paper spaza staples.
‘R15!’ It’s taxi fare from Orlando to the Pick n Pay in Soweto’s Maponya Mall – and it was 25-year-old road worker Lindiwe Xorine’s reply when City Press asked her how far it was to the nearest supermarket.
We call these areas, where access to fresh food is limited, ‘food deserts’. It’s entirely possible to buy fruit, vegetables, and free-range meat in South African cities, but high prices and bad transport infrastructure limit people’s ability to purchase these products.
The migration of South Africans from rural to urban areas has been a key factor in the nation’s radical change of lifestyle habits.
Twenty years ago, restricted by apartheid laws, just 10% of black South Africans lived in urban areas. Today, more than 56% do.
Alison Feeley, a scientist at the Medical Research Council, says this massive shift to a fast-paced urban life has resulted in dietary patterns shifting just as dramatically from ‘traditional foods to fast foods’.
But this isn’t the first time that South Africa, or indeed other countries, has had to cope with the impact of urbanisation on people’s diets. During the nineteenth century, industrialisation caused agricultural workers to abandon farming in their droves, and to move to cities in search of employment, either in factories or in associated industries. In Britain, this caused a drop in the quality of urban diets. Food supplies to cities were inadequate, and the little food that the new proletariat could afford was monotonous, meagre, and lacking in protein and fresh fruit and vegetables.
One of the effects of this inadequate diet was a decrease in average height – one of the best indicators of childhood health and nutrition – among the urban poor in Victorian cities. In fact, British officers fighting the South African War (1899-1902) had to contend with soldiers who were physically incapable of fighting the generally fitter, stronger, and healthier Boer forces, most of whom had been raised on diets rich in animal protein.
This link between industrialisation, urbanisation, and a decline in the quality of city dwellers’ diets is not inevitable. For middle-class Europeans in cities like London, Paris, and Berlin, industrialised transport and food production actually increased the variety of food they could afford. In the United States, from the second half of the nineteenth century onwards, a burgeoning food industry benefitted poorer urbanites as well. Processed food was cheap and readily available. Impoverished (and hungry) immigrants from Eastern Europe, Ireland, and Italy were astonished by the variety and quantity of food they could buy in New York, Detroit, and San Francisco.
It’s difficult to identify similar patterns in South Africa. We know that the sudden growth of Kimberley and Johannesburg after the discovery of diamonds (1867) and gold (1886) stimulated agriculture in Griqualand West and the South African Republic. Farmers in these regions now supplied southern Africa’s fastest-growing cities with food. The expansion of Kimberley and Johannesburg as a result of the mineral revolution was different from that of London or New York because their new populations were overwhelmingly male – on the Witwatersrand, there were roughly ninety men for every woman – and highly mobile. These immigrants from the rest of Africa, Europe, Australia, and the United States had little intention of settling in South Africa. As a result of this, it’s likely that these urban dwellers weren’t as badly affected by poor diets as their compatriots in the industrialised cities of the north Atlantic.
Cape Town’s slums and squatter settlements were, though, populated by a new urban poor who migrated with their families to the city during the final three decades of the nineteenth century. Most factory workers were paid barely enough to cover their rent. Mr W. Dieterle, manager of J.H. Sturk & Co., a manufacturer of snuff and cigars, said of the young women he employed:
It would seem incredible how cheaply and sparsely they live. In the mornings they have a piece of bread with coffee, before work. We have no stop for breakfast, but I allow them to stand up when they wish to eat. Very few avail themselves of this privilege. They stay until one o’clock without anything, and then they have a piece of bread spread with lard, and perhaps with the addition of a piece of fish.
This diet – heavy on carbohydrates and cheap stimulants (like coffee), and relatively poor in protein and fresh produce – was typical of the city’s poor. It wasn’t the case that food was unavailable: it was just that urban workers couldn’t afford it.
In fact, visitors to the Cape during this period commented frequently on the abundance and variety of fruit, vegetables, and meat on the tables of the middle classes. White, middle-class girls at the elite Huguenot Seminary in Wellington – a town about 70km from Cape Town – drank tea and coffee, ate fruit, and smeared sheep fat and moskonfyt (syrupy grape jam) on their bread for breakfast and supper. A typical lunch consisted of soup, roasted, stewed, curried, or fried meat (usually mutton), three or four vegetables, rice, and pudding.
It’s also worth noting that the Seminary served its meals during the morning, the middle of the day, and in the evening – something which was relatively new. Industrialisation caused urban workers’ mealtimes to change. Breakfast moved earlier in the day – from the middle of the morning to seven or eight o’clock – lunch (or dinner) shifted to midday from the mid-afternoon, and dinner (or tea) emerged as a substantial meal at the end of the day.
Factory workers in Cape Town ate according to this new pattern as well. The difference was the quality of their diet. A fifteen-year-old white, middle-class girl in leafy Claremont who had eaten an ample, varied diet since early childhood was taller and heavier than her black contemporaries in Sturk’s cigar factory. In all likelihood, she would have begun menstruating earlier, and would have recovered from illness and, later, childbirth far more quickly than poorer young women of the same age. She would have lived for longer too.
Urbanisation changes the ways in which we eat: we eat at different times and, crucially, we eat new and different things. By looking at a range of examples from the nineteenth century, we can see that this change isn’t necessarily a bad thing. The industrial revolution contributed to the more varied and cheaper diets of the middle classes. Industrialised food production and transport caused the urban poor in the United States to eat better than many of those left behind in rural areas, for example. But it’s also clear that urbanisation exacerbates social inequality. In the 1800s, the poor had too little to eat, and what they did have was not particularly nutritious. Children raised on these diets were shorter and more prone to illness than those who ate more varied, plentiful, and protein-rich food. Now, the diets available to the poor in urbanising societies are as bad, even if the diseases they contribute to are caused by eating too much rather than too little.
Most importantly, we have an abundance of food in our growing cities. Just about everyone can afford to eat. The point is that only a minority can afford good, fresh food, and have the time, knowledge, and equipment to prepare it. Food mass produced in factories helped Europe and North America’s cities to feed their urban poor a hundred years ago. I’m not sure if that’s the best solution for the twenty-first century.
I love this video – it’s an overview of a century of fashion, music, and dance in London’s East End:
It’s not an art installation. It’s not part of a community project. It’s an ad. For a shopping mall. And this isn’t any mall – it’s Europe’s biggest, and one of the key developments in the Olympic site in Stratford. In fact, it seems that most of the spectators attending next year’s Summer Olympics will enter the games through Westfield Stratford City: its casino, 300 shops, 50 restaurants, three hotels, and 17 cinema screens.
I’m not a massive fan of shopping malls, and said as much when I posted this video on Facebook. And then my friend Jean-François, who’s an architect, made the point that the development will create a massive 10,000 jobs, and has funded literacy classes for the astonishingly high number of applicants who seemed to be illiterate. In an area as deprived as Stratford, surely this shopping centre could only be a Good Thing?
There has been a great deal of criticism of the way in which Stratford has been transformed by the Olympic site. I don’t want to romanticise life in a very poor borough of London, and I’m not sure that commentators like Iain Sinclair – who has been vociferous in his opposition to the 2012 Olympic bid – offer much in the way of ideas for providing jobs, decent housing, and education for the area. But I feel uncomfortable about the way that a temple to consumerism seems to be offered up as the only possible way of raising living standards in Stratford. As Suzanne Moore – not, admittedly, my favourite columnist – wrote in yesterday’s Guardian:
Next week a new Westfield opens. It’s not in west London, it’s in the east, in Stratford. It will cash in on the Olympics. Is this what this deprived area really needs? Another giant, weatherless mall that has exactly the same shops as everywhere else? Maybe this deliberately disorientating social space will be a place of connection and hope. Maybe it will offer the local youth something other than an expensive bowling alley, a multiplex and some minimum-wage jobs.
But is this just a case of lefty, middle-class squeamishness? When I buy a Margot Molyneux blouse from Mungo & Jemima, or even a dress from an upmarket chain like White Stuff or online store like Toast, it’s not any ‘better’ than purchasing a t-shirt from Mr Price. Both decisions support people who designed and made the garment. When I buy from small, local grocers and food shops, it’s partly because of a belief that this is good for our food system, but it also says something about me – about how I choose to constitute my identity in relation to a particular way of thinking about being an ‘ethical’ shopper. However critical I may be of consumerism, I am, inevitably, bound up in it.
I am interested in the shift from describing people who buy things from shops as ‘customers’ to describing them as ‘consumers’. A growing number of historians are interested in tracing and analysing this transition. One of the reasons why I’m so interested in it is the pivotal role played by the food industry in creating consumers.
Given the dire state of the average American diet, it probably comes as no surprise to learn that the United States was the first country to witness the rise of a food industry reliant on consumers who had begun to buy an increasing number of goods produced in factories by big food companies towards the end of the nineteenth century. Consumerism is inextricably linked to the industrialisation of food production.
The first people to benefit from the Industrial Revolution were the middle classes. In Britain, Europe, America and elsewhere, the newly-wealthy bourgeoisie could afford to buy more food, and employed more servants to prepare it. They had leisure in which to enjoy the eating of this food – and it became a way of marking newly-acquired middle-class status.
Until 1850 in Europe, and 1830 in the US, the diets of the urban poor actually deteriorated. The average height of working-class people living in the rapidly expanding cities of the industrialised world declined – one of the most potent indicators of the levels of deprivation experienced by this new proletariat. This was the first generation of workers to be disconnected from food production: these were people who no longer grew their own food, and were dependent on inadequate and expensive food systems to supply towns and cities. Poor diets were centred around starches and cheap, poor-quality food.
But from the mid-nineteenth century onwards, food became progressively cheaper, more plentiful, and varied – and this happened earlier and more quickly in the United States. So what caused this drop in price and greater availability in cities? A revolution in transport made it easier to take produce from farms to urban depots by rail, and shipping brought exotic fruit and vegetables from the rest of the world to Europe and the United States. When Europe’s grain harvest failed during the 1870s, the continent was fed with wheat imported by steamship from Canada. Farmers now began to cultivate land which had previously been believed to be inaccessible – and to grow market-oriented produce. The rise of the iceberg lettuce – which could cope with being transported over vast distances with little bruising – is directly attributable to this.
The agricultural revolution of the eighteenth century made farming more productive. New systems of crop rotation, the use of higher-yielding plant hybrids and improved implements, and the enclosure movement in Britain meant that fewer farmers were producing more food than ever before. And this produce was processed far more quickly, and cheaply. With innovations in the preservation of food through refrigeration, bottling, and canning, food could be transported over greater distances, but also, and crucially, manufactured in larger quantities and then kept before distribution on a mass scale.
Food companies began to control nearly every aspect of the newly industrialised food chain: businesses like Heinz formed alliances with farmers and transportation companies which supplied their factories with meat, fruit, and vegetables. Increasingly, they also began to advertise their products. The rise of these ‘food processors’, as they’re often called, caused a fundamental change in the way in which people ate. Most Americans began to eat similar diets based around processed food produced in factories.
Americans weren’t, of course, compelled to eat processed food. They did so for a number of reasons. Factory-baked bread, tinned vegetables, and processed meat were cheap, easy to prepare, and, importantly, believed to be free from contamination and disease. But with most people’s basic nutritional and calorific needs now met, food processors began to use advertising and brands to a far greater extent to encourage customers – dubbed ‘consumers’ – to buy more, and to buy things they didn’t need. Susan Strasser explains:
Formerly customers, purchasing the objects of daily life in face-to-face relationships with community-based craftspeople and storekeepers, Americans became consumers during the Progressive Era. They bought factory-produced goods as participants in a complex network of distribution – a national market that promoted individuals’ relationships with big, centrally organised, national-level companies. They got their information about products, not from the people who made or sold them, but from advertisements created by specialists in persuasion. These accelerating processes, though by no means universal, had taken firm hold of the American way of life.
Food processors needed to persuade consumers to buy their products, and in greater quantities:
People who had never bought cornflakes were taught to need them; those once content with oats scooped from the grocer’s bin were told why they should prefer Quaker Oats in a box. Advertising, when it was successful, created demand…. Advertising celebrated the new, but many people were content with the old. The most effective marketing campaigns encouraged new needs and desires…by linking the rapid appearance of new products with the rapid changes that were occurring in all areas of social and cultural life.
We have always attached a variety of meanings to food, but within a consumer society, the decisions we make about what to buy and eat are shaped to a large extent by the desires and needs manufactured by a massive advertising industry.
The industrialisation of food production has, as I noted last week, allowed more people to eat better than ever before. But this has come at a cost: we know that many food companies engage in ecologically unsustainable practices, mistreat their employees, hurt animals, and occasionally produce actively harmful food. Moreover, it was part of a process which transformed people from customers into consumers – into individuals whose happiness is linked to what and how much they buy. This does not make us happy – nor is it environmentally or economically sound. Justin Lewis writes:
the promise of advertising is entirely empty. We now have a voluminous body of work showing that past a certain point, there is no connection between the volume of consumer goods a society accumulates and the well-being of its people.
The research shows that a walk in the park, social interaction or volunteering – which cost nothing – will do more for our well-being than any amount of ‘retail therapy’. Advertising, in that sense, pushes us towards maximising our income rather than our free time. It pushes us away from activities that give pleasure and meaning to our lives towards an arena that cannot – what Sut Jhally calls ‘the dead world of things’.
Just as customers were made into consumers, so it is possible for us to change once again. How we are to achieve this, though, is difficult to imagine.
Texts quoted here:
Harvey A. Levenstein, Revolution at the Table: The Transformation of the American Diet (New York: Oxford University Press, 1988).
Susan Strasser, ‘Customer to Consumer: The New Consumption in the Progressive Era,’ OAH Magazine of History, vol. 13, no. 3, The Progressive Era (Spring, 1999), pp. 10-14.
Warren Belasco and Philip Scranton (eds.), Food Nations: Selling Taste in Consumer Societies (New York: Routledge, 2002).
Jack Goody, ‘Industrial Food: Towards the Development of a World Cuisine,’ in Cooking, Cuisine, and Class: A Study in Comparative Sociology (Cambridge: Cambridge University Press, 1982), pp. 154-174.
Roger Horowitz, Meat in America: Technology, Taste, Transformation (Baltimore: Johns Hopkins University Press, 2005).
Tim Jackson, Prosperity without Growth: Economics for a Finite Planet (London: Earthscan, 2009).
Nancy F. Koehn, ‘Henry Heinz and Brand Creation in the Late Nineteenth Century: Making Markets for Processed Food,’ The Business History Review, vol. 73, no. 3 (Autumn, 1999), pp. 349-393.
Rebecca L. Spang, The Invention of the Restaurant: Paris and Modern Gastronomic Culture (Cambridge: Harvard University Press, 2000).
Peter N. Stearns, ‘Stages of Consumerism: Recent Work on the Issues of Periodisation,’ The Journal of Modern History, vol. 69, no. 1 (Mar., 1997), pp. 102-117.
Susan Strasser, ‘Making Consumption Conspicuous: Transgressive Topics Go Mainstream,’ Technology and Culture, vol. 43, no. 4, Kitchen Technologies (Oct., 2002), pp. 755-770.
Lorine Swainston Goodwin, The Pure Food, Drink, and Drug Crusaders, 1879-1914 (Jefferson: McFarland & Co., 1999).
Frank Trentmann, ‘Beyond Consumerism: New Historical Perspectives on Consumption,’ Journal of Contemporary History, vol. 39, no. 3 (Jul., 2004), pp. 373-401.
On Saturday I was part of Cape Town’s SlutWalk. A local manifestation of a global movement which emerged in response to a Toronto policeman’s daft comments about rape and women’s ‘slutty’ choice of clothes in January this year, Cape Town’s SlutWalk was a resounding success. It was the most fun, friendly, and good natured march I’ve ever been on. According to the Mail & Guardian and – hurrah! – the Washington Post, about 2,000 people marched from Prestwich Memorial to Green Point stadium. I really was impressed by the numbers of men there, and by the range of ages represented by the marchers. (This is my report for FeministsSA.)
The posters were brilliant, and people came dressed in ball gowns, angel wings, bunny ears, leotards, jeans and t-shirts, fishnets and thigh-high boots, and (almost) nothing at all. In many ways, it was a typically Capetonian event: we gathered outside hip Truth Coffee beforehand, and the march began half an hour late. It was also overwhelmingly middle-class and, really, for an anti-rape protest to make any sense in Cape Town, it should have been in Khayelitsha or Manenberg.
But I don’t want to detract from the success of the event. In particular, I hope that it’ll prove to be the basis for a campaign against street harassment. SlutWalk is, inadvertently, a protest against the constant low-level harassment of women in public spaces. I was, though, deeply unsettled by the vitriol aimed at SlutWalk when it was announced that South African marches were in the offing. Commentators on SlutWalk Cape Town’s Facebook page accused the organisers of being irresponsible, stupid, and of contributing to – rather than solving – the problem of victim blaming.
If anything, those remarks demonstrated the extent to which women are still held responsible for rape. One particularly unpleasant contributor insisted that only one per cent of all reported rapes are ‘genuine’ – the rest, he alleged, are simply made up by women. What many of these angry men (and they were mainly men) had in common was a fear of a group of scantily-clad women marching together in public: a belief that the amount of naked flesh on display would have – alas undefined – catastrophic ramifications for the women on the march.
Another commentator explained that she opposed the event because she prefers women to ‘have a little mystery’ about them. Unfortunately, she didn’t specify if this was to be achieved by wearing false moustaches, speaking in strange foreign accents, or investing in trench coats.
Women’s bodies, argue the anti-Slutwalk brigade, need to be covered and contained. Because female nakedness is usually sexualised, it’s seen as excessive, dangerous, and disruptive. Clothing is, then, one way of controlling women in patriarchal societies. We are told to cover ourselves up for our own good – because our bodies exercise too powerful an influence over terminally suggestible, weak-willed men.
Food is another means of exercising control over women. As I’ve written in the past, the current vogue for cupcakes is partly the product of the fact that they are the acceptable face of feminine eating: they’re small, childlike (indeed, they’re children’s party food), and pretty – like the women who are supposed to eat them. (I should like to add, for the record, that after SlutWalk, my friends and I picnicked and feasted on cheesecake, samoosas, egg sandwiches, naartjies, as well as breast-shaped cupcakes.)
This link between women’s diet and the control of their bodies can be traced to the eighteenth century. A few weeks ago, I mentioned the influential Enlightenment physician George Cheyne (1671-1743), whose writing on health and eating was not only extraordinarily popular among the English upper classes, but was also partly responsible for a shift in the understanding of the ideal physical form during the 1750s. Partly as a result of his own obesity, Cheyne associated excess flesh with excessive behaviour and a kind of moral laxity. Whereas fleshiness had previously been a sign of good health, slimness was increasingly associated with physical and moral health, strength, and beauty.
Cheyne’s audience, and the patients whom he treated at his fashionable practice in even more fashionable Bath, were primarily female. In a society where eating meat had long been associated with masculinity – an association with even deeper roots in the ancient humoral system, which linked meat and spicy food with blood, the most ‘manly’ of the four humours – Cheyne advocated the renunciation of all meat, and the adoption of a dairy-rich, vegetarian diet. Men, in other words, needed to eat like women.
During this period, the female body was slowly being reconceptualised as being more delicate – more easily upset – than the male body, and also ruled by the unpredictable emotions, rather than the rational, sober intellect. Although gendered, this emotions-intellect binary did not necessarily privilege the one over the other: the Romantic cult of sensibility celebrated the emotional and irrational, for example. But male and female bodies – or, more accurately, middle-class male and female bodies – needed to be fed differently.
Cheyne was unusual in his implacable opposition to meat-eating, but he and other physicians were united in the belief that a moderate diet was essential for good health – and this was particularly important for women. Cheyne became interested in the ‘nervous’ complaints which seemed to plague his female patients, and connected their diet to their psychological well-being. Essentially, the less women ate, the better. Anita Guerrini explains:
Cheyne’s audience, the aristocracy and new merchant class that frequented Bath, was also the audience for William Law’s exhortations in his popular devotional work A Serious Call (1728). He provided contrasting models of female character in the ‘maiden sisters’ Flavia and Miranda, who ‘have each of them two hundred pounds a year,’ a comfortable middle-class income. While Flavia spent her income on clothes, luxurious foods, sweetmeats, and entertainment, the ascetic Miranda ate only enough to keep herself alive and spent her income on charity. Miranda, said Law, ‘will never have her eyes swell with fatness, or pant under a heavy load of flesh;’ such excess flesh was not only morally depraved, it was physically disgusting. Cheyne’s patients, like the doctor himself, grew in spirit as they wasted in flesh.
During the 1720s, Catherine, the adolescent daughter of British Prime Minister Robert Walpole, was referred to Cheyne because of his specialisation in nutrition and nervous diseases. She suffered from loss of appetite, fainting, and chronic pain, and died in 1722 aged eighteen. Cheyne tried his best to treat her, but could not find a way of making her eat more.
This association of femininity – of physical and moral beauty – and not eating persisted into the nineteenth century and, I would suggest, into the present. Even though we have records which indicate that people, and particularly young women, have purposefully starved themselves to death since the Middle Ages and usually for religious reasons, anorexia nervosa was isolated as a specific ailment by William Withey Gull (1816-1890) in a paper he presented to the Clinical Society of London on 24 October 1873. He argued that this ‘peculiar form of disease occurring mostly in young women, and characterised by extreme emaciation’ was not a symptom of the catch-all feminine disorder ‘hysteria’, but a separate condition with its own symptoms and treatment.
As Joan Jacobs Brumberg notes, this identification of anorexia nervosa occurred within a wider cultural concern about the phenomenon of ‘fasting girls’: young, adolescent women who denied themselves food on religious grounds. Sarah Jacob from Wales claimed that her piety was such that she was able to live without eating.
Some British doctors regarded Sarah Jacob’s claim to total abstinence as a simple fraud and, therefore, an affront to science… Consequently, they called for a watch, with empirical standards, which deprived the girl of all food and, not surprisingly, killed her within 10 days because she was already severely undernourished. Some British doctors attributed Sarah Jacob’s condition to girlhood hysteria, provoked by religious enthusiasm and her celebrity status.
In other words, girls’ decision to starve themselves moved from the realm of religion or mysticism to that of science and medicine. It was a disorder which could be described and treated. For example, the French psychiatrist Charles Lasegue (1816-1883) suggested that anorexia should be treated by examining the dynamics of middle-class family life. He
noted the difficult relation between anorectics and their parents but went on to elaborate how the girl obsessively pursued a peculiar and inadequate diet – such as pickled cucumbers in café au lait – despite the threats and entreaties of her anxious parents. ‘The family has but two methods at its service which it always exhausts,’ he wrote, ‘entreaties and menaces …. The delicacies of the table are multiplied in the hope of stimulating the appetite, but the more solicitude increases the more the appetite diminishes’.
This shift was due to the increasing medicalisation of the body, and also the secularisation of public life. By the 1870s, doctors exercised as much authority as ministers – or even more. But what had not changed over the course of the eighteenth and nineteenth centuries was the association of femininity with eating very little.
Anorexia is caused by a range of factors, but the connection of ideal femininities with eating a restricted diet only exacerbates the condition. As rape isn’t really about sex, so anorexia isn’t entirely about food: it’s a manifestation of (mainly, but not exclusively) women’s attempts to exercise control over their circumstances through their bodies. Because of the wider, cultural approval of feminine thinness and not eating, these starving young women receive a kind of affirmation for their self-denial.
It’s easy to talk glibly about encouraging a ‘positive attitude’ towards food and eating. We can only achieve this when we acknowledge that women’s bodies are still perceived as dangerous – as needing to be contained by their clothes, kept pure by a range of hygiene products, and made small through dieting and exercise. This is why we still need feminism. In South Africa – where the ANC Women’s League and Lulu Xingwana‘s Department of Women, Children, and Disabled Persons have shown a singular lack of enthusiasm for leading a feminist movement – I hope that SlutWalk represents the beginnings of a new, stronger feminism.
Texts cited here:
Joan Jacobs Brumberg, ‘“Fasting Girls”: Reflections on Writing the History of Anorexia Nervosa,’ Monographs of the Society for Research in Child Development, vol. 50, no. 4/5, History and Research in Child Development (1985), pp. 93-104.
Anne Charlton, ‘Catherine Walpole (1703-22), an Eighteenth-Century Teenaged Patient: A Case Study from the Letters of the Physician George Cheyne (1671 or 73-1743),’ Journal of Medical Biography, vol. 18, no. 2 (May 2010), pp. 108-114.
Anita Guerrini, ‘The Hungry Soul: George Cheyne and the Construction of Femininity,’ Eighteenth-Century Studies, vol. 32, no. 3, Constructions of Femininity (Spring, 1999), pp. 279-291.
Erin O’Connor, ‘Pictures of Health: Medical Photography and the Emergence of Anorexia Nervosa,’ Journal of the History of Sexuality, vol. 5, no. 4 (Apr., 1995), pp. 535-572.
Roy Porter, Flesh in the Age of Reason: How the Enlightenment Transformed the Way We See Our Bodies and Souls (London: Penguin,  2004).
Martha J. Reineke, ‘“This Is My Body”: Reflections on Abjection, Anorexia, and Medieval Women Mystics,’ Journal of the American Academy of Religion, vol. 58, no. 2 (Summer, 1990), pp. 245-265.
Edward Shorter, ‘The First Great Increase in Anorexia Nervosa,’ Journal of Social History, vol. 21, no. 1 (Autumn, 1987), pp. 69-96.
I. de Garine, Food, Diet, and Economic Change Past and Present (Leicester: Leicester University Press, 1993).
Sander L. Gilman, Fat: A Cultural History of Obesity (Cambridge: Polity, 2008).
Harvey A. Levenstein, ‘The Perils of Abundance: Food, Health, and Morality in American History,’ in Food: A Culinary History from Antiquity to the Present, eds. Jean-Louis Flandrin and Massimo Montanari, English ed. by Albert Sonnenfeld (New York: Columbia University Press, 1999), pp. 516-529.
Harvey A. Levenstein, Revolution at the Table: The Transformation of the American Diet (New York: Oxford University Press, 1988).
Susie Orbach, ‘Interpreting Starvation,’ in Consuming Passions: Food in the Age of Anxiety, eds. Sian Griffiths and Jennifer Wallace (Manchester: Mandolin, 1998), pp. 133-139.
Kerry Segrave, Obesity in America, 1850-1939: A History of Social Attitudes and Treatment (Jefferson, NC: McFarland, 2008).
Peter N. Stearns, Fat History: Bodies and Beauty in the Modern West (New York: New York University Press, 1997).
Doris Witt, Black Hunger: Food and the Politics of US Identity (New York and Oxford: Oxford University Press, 1999).
I’ve had an explosively sneezy cold this week, but with bed rest and pain killers to help me to sleep, I’m almost well again. (Unfortunately, my Head of Department remains unconvinced by my theory that I’ve been suffering from a bad allergy to undergraduate lecturing.) I really don’t see the point of taking anti-cold medication. It certainly won’t get rid of the bug, and the only time I’ve ever taken tablets for a cold – just before a long flight home from Paris – I hallucinated so badly that I thought it best never to repeat the experience. Taking it easy, avoiding dehydration, and being generally sensible seem to work every time. I’ve also had a range of advice about what I should eat: vitamin C supplements, garlic, zinc, lemon, and ginger. I’ve managed to consume nearly all of these over the past few days (although not at the same time), and – who knows? – maybe they made a difference.
We know that our diet influences our health. We know that the better we eat, the stronger our immune systems are and the longer we’ll live. It’s for this reason that many seem to believe that it’s possible to eat ourselves well: that we can both prevent and cure illnesses by eating some things, and avoiding others. I was struck forcibly by the strength of this thinking when I saw that Gwyneth Paltrow wrote a recipe book partly because she believed that her father’s eating habits caused the cancer which killed him. No, I am not completely mad, and, yes, I do realise that, at best, Paltrow can be described as a ray of ‘demented sunshine’, but this is an enormously popular and influential woman who really does think that had her father eaten more brown rice, he wouldn’t have had cancer – or, at least, wouldn’t have died from it.
There’s a logic to this thinking: if we eat pure, wholesome food, then, surely, we should be healthy and strong. The problem is that it’s difficult to define what is ‘pure’, ‘wholesome’, and ‘good’ food. However much nutritionists may dress up their work as ‘science’, we don’t know precisely what diet is best for our health. In the past few weeks, new studies have demonstrated that drinking eight glasses of water and eating five portions of fruit and vegetables per day…will have very little effect on us at all. Oh, and vitamin supplements and probiotics are of dubious value too. It’s certain that we should eat plenty of fruit and vegetables and lessen our intake of red meat and saturated fat, but everything else remains guesswork. That study about Omega 3 supplements and children’s brains? It was nonsense. As is the advice spouted by Patrick Holford. So, no, drinking green tea and eating mung beans and quinoa will not stave off cancer. (Sorry.) The amazing people at Information is Beautiful have provided a helpful visualisation of the relative benefits of dietary supplements (see here for a bigger and pleasingly animated version):
Our ideas around healthy diets have changed over time, and are inflected by a range of factors, including current debates in science and medicine, the interests of industry and food lobbies, and religious belief. In his magnificent study Flesh in the Age of Reason: How the Enlightenment Transformed the Way We See Our Bodies and Souls (2003), Roy Porter traces a shift in thinking about health and eating during the mid-eighteenth century. He argues that during the early modern period, stoutness and eating heartily – if not in excess – were seen as signs of good health. In Britain, a taste for roast beef was also connected to support for an incipient national ‘English’ consciousness.
But from the 1750s onwards, physical beauty was associated more frequently with slimness. (Compare, for example, portraits by Rubens and Constable.) Enlightenment bodies needed also to be fed in restrained, rational ways. One of the most popular prophets of the new eating orthodoxy was the physician George Cheyne (1673-1743) who based his views on plain, wholesome eating on his own experience of being morbidly obese. In The English Malady (1733) he argued that ‘corpulence produced derangements of the digestive and nervous systems which impaired not only health but mental stability. … Excess of the flesh bred infirmities of the mind.’ Porter explains:
Cheyne’s call to medical moderation was, however, also an expression of a mystical Christian Platonism trained at the emancipation of the spirit – he can thus be thought of as recasting traditional Christian bodily anxieties into physiological and medical idioms. For Cheyne, the flesh was indeed the spirit’s prison house. Excessive flesh encumbered the spirit; burning it off emancipated it.
Following the teachings of the German mystic Jakob Boehme, he imagined prelapsarian bodies innocently feeding on ‘Paradisiacal Fruits’. After the Fall, the flesh of the newly carnivorous humans had been subjected to the laws of the corruption of matter. …his works aimed at recovering the purity of the prelapsarian body.
Cheyne recommended a vegetarian diet on the grounds that it most closely resembled that eaten in the Garden of Eden. It was, in other words, the diet of spiritual perfection. Much of the success of his writing was due also to the rise of a vegetarian movement in Europe during the eighteenth century. These Enlightenment vegetarians argued that it was cruel to slaughter animals merely for food, and also believed that ‘greens, milk, seeds and water would temper the appetite and produce a better disciplined individual.’
There has long been an association between corpulence and moral or spiritual laxity, and thinness with (self-) discipline. But what Cheyne advocated went further than this: he argued that rational individuals were partly responsible for their own ill-health because they could choose what they ate. Moreover, because he connected eating meat with sinfulness, deciding what to eat was also a moral choice.
Cheyne’s thinking proved to be remarkably durable. In the late nineteenth century, left-leaning social reformers promoted vegetarianism as the best example of ethical consumerism. Vegetarianism was healthy and it did not – they believed – cause the needless sacrifice of animals (although they didn’t address what happened to the bull calves and billy goats produced by lactating cows and nanny goats). In her magnificent biography of the immensely influential socialist writer Edward Carpenter (1844-1929), Sheila Rowbotham describes how Carpenter’s dictum of simple living took hold among the members of the Fellowship of the New Life, the forerunner of the Fabian Society. Carpenter argued for simple clothing, simple houses, and simple food:
Carpenter combined his evangelical call for a new lifestyle with an alternative moral economy. This recycled, self-sufficient praxis involved growing your own vegetables, keeping hens and using local not imported grain – American produce was forcing down British farmers’ prices.
But this met with some resistance. The physician and social reformer Havelock Ellis
protested against Carpenter’s advocacy of vegetarianism on the grounds that meat was a ‘stimulant’. Ellis wanted to know why meat? Why not potatoes? Was not all food a stimulant?
I’m with Ellis on this one.
The food counterculture of the 1960s embraced vegetarianism and an enthusiasm for ‘whole foods’ as a manifestation of a way of living ethically and sustainably. Last week I discussed Melissa Coleman’s memoir of her childhood on her parents’ homestead in rural Maine during the early seventies. Her father, Eliot Coleman, is dubbed the father of the American organic movement, and he fed his growing family mainly from the garden he soon established. They supplemented their diet with bought-in grains, seeds, honey, nut butters, and oils, but were strictly vegetarian. Their role models, Helen and Scott Nearing, were highly critical of immoral ‘flesh eaters’. Their book, Living the Good Life (1954), which became the homesteading Bible, argued that it was possible to feed a family on produce grown organically. Again, the choice of what to eat was a moral one. Eliot and Sue Coleman believed that their diet guaranteed their good health:
Papa often quoted Scott’s sayings, ‘Health insurance is served with every meal.’ As Papa saw it, good food was the secret to longevity and well-being that would save him from the early death of his father. The healthily aging Nearings were living proof that a simple diet was the key.
But, as Melissa Coleman notes, this was not a diet that suited everyone. The family suffered from a lack of Vitamin B, and at times they simply didn’t eat enough. It also didn’t prevent Eliot from developing hyperthyroidism.
His heart seemed to beat too quickly in his chest, and he had a cold he couldn’t kick, despite gallons of rose-hip and raspberry juice. … He tried to make sense of things in his mind. Health insurance, he believed, was on the table at every meal. In other words, the best way to deal with illness was to invest in prevention – eating a good diet that kept the body healthy. … He’d read up on vitamins and minerals, learning which foods were highest in A, B, C, D, and minerals like calcium, magnesium, and zinc. He drank rose-hip juice for vitamin C, ate garlic and Echinacea to build immunity, used peppermint and lemon balm tea to soothe the stomach, and used chamomile to calm the nerves, but perhaps all this wasn’t enough.
She concludes: ‘He never thought to question the vegetarian diet espoused by the Nearings.’
I don’t – obviously – want to suggest that vegetarianism is deadly. Rather, my point is that the choices we make about our diets are influenced as much – or even more – by a set of assumptions about morality, our responsibility for our health, and other beliefs as they are by information about the nutritional benefits of food. I am concerned by two aspects of this belief that we are somehow able to eat ourselves better. We need to acknowledge that what we eat will not prevent us from falling ill. Sickness is caused by many things, and although important, diet is not an overriding factor.
Eat food. Not too much. Mostly plants. That, more or less, is the short answer to the supposedly incredibly complicated and confusing question of what we humans should eat in order to be maximally healthy.
This won’t make terribly much money for nutritionists or the food industry, hence their interest in promoting things which, they suggest, will do miraculous things for our health. They almost certainly won’t. Unless you suffer from an ailment which needs to be treated with a special diet, deciding what to eat is not a complicated, mysterious process. No amount of goji berries will make you a healthier, happier, or better person.
Texts quoted here:
Melissa Coleman, This Life Is in Your Hands: One Dream, Sixty Acres, and a Family Undone (New York: Harper, 2011).
Roy Porter, Flesh in the Age of Reason: How the Enlightenment Transformed the Way We See Our Bodies and Souls (London: Penguin, 2004).
Warren Belasco, Meals to Come: A History of the Future of Food (Berkeley: University of California Press, 2006).
Philip Conford, The Origins of the Organic Movement (Edinburgh: Floris Books, 2001).
Harvey Levenstein, Paradox of Plenty: A Social History of Eating in Modern America, revised ed. (Berkeley: University of California Press, 2003).
Colin Spencer, The Heretic’s Feast: A History of Vegetarianism (Lebanon: University Press of New England, 1996).
Tristram Stuart, The Bloodless Revolution: Radical Vegetarians and the Discovery of India (London: Harper Press, 2006).