Posts tagged ‘nutrition’

Ideal Conditions

Earlier this month it was announced that the sport scientist turned diet guru Tim Noakes is in talks with Derek Carstens, former FirstRand executive and now Karoo farmer, about improving the diets of farm workers. The Cape Times reported:

Once the project begins, the families on the farm will be monitored for five to 10 years. With a diet high in offal – which is readily available in the farmlands of the Karoo – the families will stop consuming carbohydrates, which Noakes says are of no benefit to the human body.

‘This is an ideal set-up,’ said Noakes. ‘And it would be much harder to do research of this nature in a place like Cape Town.’

Since the emergence of nutrition as a field of scientific enquiry in the early twentieth century, the poor, the hungry, and the socially and politically disenfranchised have often been the subjects of research into diet and malnutrition. Last year, University of Guelph-based food historian Ian Mosby published evidence that during the 1940s and 1950s, scientists working for the Canadian government conducted a series of experiments on malnourished residents of rural Aboriginal communities and residential schools.

Rural impoverishment in the 1930s – brought about by the decline in the fur trade and cuts to government provision of poor relief – meant that First Nations people struggled to find enough to eat. They could not, in other words, afford to eat properly, and this knowledge informed the advice they gave researchers for eradicating malnutrition. Mosby writes:

Representatives of the various First Nations visited by the research team proposed a number of practical suggestions for ending the hunger and malnutrition in their communities. In addition to more generous relief during times of extreme hardship, these included increased rations for the old and destitute, timber reserves to be set aside for the building and repairing of houses, and additional fur conservation efforts by the federal government, as well as a request that they be given fishing reserves ‘so that they could get fish both for themselves and for dog feed, free from competition with the large commercial fisheries.’

However, researchers decided to set up an experiment in which First Nations peoples were provided with vitamin supplements to gauge their relative effectiveness in combating the side effects of hunger. Crucially, researchers were well aware that ‘vitamin deficiencies constituted just one among many nutritional problems.’ In fact, they calculated that the average diet in these communities provided only 1,470 calories per person during much of the year. First Nations people needed food supplies, not vitamin supplements. Mosby concludes:

The experiment therefore seems to have been driven, at least in part, by the nutrition experts’ desire to test their theories on a ready-made ‘laboratory’ populated with already malnourished human ‘experimental subjects.’

In other areas, researchers regulated what kinds of food Aboriginals could purchase with their welfare grants (the Family Allowance):

These included canned tomatoes (or grapefruit juice), rolled oats, Pablum [baby food], pork luncheon meat (such as Spork, Klick, or Prem), dried prunes or apricots, and cheese or canned butter.

This experiment was also an attempt to persuade First Nations people to choose ‘country’ over ‘store’ foods: they were to hunt and to gather instead of relying on shops. To this end, some officials tried to prevent families from buying flour:

In Great Whale River, the consequence of this policy during late 1949 and early 1950 was that many Inuit families were forced to go on their annual winter hunt with insufficient flour to last for the entire season. Within a few months, some went hungry and were forced to resort to eating their sled dogs and boiled seal skin.

Perhaps unsurprisingly, there is little or no evidence to suggest that the subjects of these research projects consented to being part of them.

In South Africa, anxiety about the productivity of mine workers in the 1930s drove the publication of a series of reports into the health of the African population. Diana Wylie explains:

The Chamber of Mines in particular was alarmed at the 19 per cent rejection rate for Transkei mine recruits. Some of the researchers urged the government to concern itself with nutritional diseases ‘as an economic problem of first importance in which not merely the health but the financial interests of the dominant races are concerned.’ Another warned, ‘unless a proper food supply is assured, our biggest asset in the Union, next to the gold itself, our labour supply, will fail us in the years to come.’

In response to these findings, mining companies introduced supplements to miners’ diets to combat scurvy and generally boost immune systems. They did not, obviously, address the causes of miners’ ill health and poor diets – which were partly the impoverishment of rural areas and the system of migrant labour.

Mine workers in Kimberley. (From here.)

The Canadian experiments and South African research projects were products of a similar set of concerns: an interest in ‘civilising’ indigenous people, but also, in the case of Canada, the ‘belief that the Indian [sic] can become an economic asset to the nation.’ Africans, similarly, needed to be well fed and kept healthy for the benefit of the South African state.

Noakes is correct when he says that conducting the research he proposes to do on rural farm workers would be almost impossible in a city. Although he insists that he will seek ethics approval, I wonder how he and other researchers will go about winning the informed consent of a group of people who are dependent on their employer – Noakes’s collaborator – for their livelihoods, and who have, historically, very low levels of education.

Also, Noakes seems to believe that carbohydrates alone are at the root of farm labourers’ poor diets. As the First Nations people referred to above argued, malnutrition is caused by an inability to access good, nutritious food – usually because of low wages. Instead of feeding Carstens’s employees offal, it might be worth considering how much they are paid, and how easy it is for them to afford transport to shops selling healthy food.

Noakes argues that ‘We can’t build this nation in the absence of sufficient protein and fat.’ To what extent is this project purely for the benefit of Karoo farm workers? And to what extent is it an opportunity to prove a controversial theory proposed by a prominent researcher?

Sources

Ian Mosby, ‘Administering Colonial Science: Nutrition Research and Human Biomedical Experimentation in Aboriginal Communities and Residential Schools, 1942–1952,’ Histoire Sociale/Social History, vol. 46, no. 91 (May 2013), pp. 145-172.

Diana Wylie, ‘The Changing Face of Hunger in Southern African History, 1880-1980,’ Past and Present, no. 122 (Feb. 1989), pp. 159-199.


Body Knowledge

Over the past month I’ve helped to organise the first major conference on the medical humanities in southern Africa. Titled Body Knowledge, and hosted by the Wits Institute of Social and Economic Research earlier this week, the conference brought together a disparate group of academics from all over the world, specialising in a variety of disciplines: from anthropology to nursing, and from epidemiology to cultural studies.

I think we pulled it off too. It was certainly a fun conference: the food was excellent, I learned a great deal – about my own area of specialisation and others – and was surprised by how frequently papers from wildly different disciplines spoke to each other in interesting and quite thought-provoking ways. We’re keen for this new field of the medical humanities not only to encourage collaboration and contact between the arts, humanities, social sciences, and natural and medical sciences, but also to be communicated to those outside of the academy. The Mail & Guardian has published a collection of articles drawn from the conference:

Catherine Burns and Ashlee Masterson discuss art, science, the medical humanities and the Body Knowledge conference.

Susan Levine and Steve Reid argue for the significance of the arts in the practice of medicine.

Jane Taylor looks at the long history of collaboration between the arts and medical sciences.

Patrick Randolph-Quinney describes how art has enriched anatomy.

Raimi Gbadamosi examines the social and cultural construction of albinism.

Because I was part of the group keeping the conference running, I didn’t attend many sessions, but I did get to the two on medicine and nutrition (partly because I chaired one of them). The first included two papers: Kristen Ehrenberger (who’s enrolled for both a medical degree and a PhD in History at the University of Illinois) discussed how Germans’ ideas around ‘healthy’ food shifted during the Allied blockade of the First World War; and Thomas Cousins, an anthropologist at Stellenbosch University, argued for the necessity of a cultural and social study of the gut.

In the second session, Louise Vincent and Chantelle Malan from Rhodes University questioned the ways in which the obesity ‘crisis’ has been framed by both medical professionals and organisations with a vested interest in controlling people’s weight. Similarly, Michelle Pentecost – a medical doctor who’s completing an MA in anthropology at Oxford – pointed out that obesity is less the product of individual sloth and lack of self-control than the outcome of a complex set of social, political, and economic processes which shape people’s health over long periods of time.

Catherine Burns, my colleague at WiSER, spoke about the need to write histories of breastfeeding in South Africa (she’s written about histories of sex in South Africa too), and Vashna Jagarnath, also of Rhodes, presented a fascinating paper on Gandhi’s shifting attitudes towards diet after his move to London in 1888. She made the point that his embrace of vegetarianism was the product of an association with the Vegetarian Society and with the social and political radicals, many of them early supporters of Indian nationalism, who were part of the Society (people like Annie Besant, for instance). He went on to publish extensively on how best to eat, and, as Vashna noted, his views on diet were increasingly tangled with his politics.

The Body Knowledge conference poster.

Although they covered a wide range of subjects, these papers made a few key points: people’s decisions about what and how to eat are shaped by a variety of factors, only some of which they are aware of; food and nutrition are always political (they are implicated in the ways in which power functions); and our ideas about what is ‘good’ to eat have changed – and are changing – over time and place. In other words, there is no particular moment or place at which people ate, or are eating, a ‘perfect’ diet (whatever we may mean by that).

Although histories of food and nutrition have attracted scant attention from southern African scholars, the field is growing, both in size and prominence, internationally. I think the best indicator of its growing academic respectability is the fact that the theme of this year’s Anglo-American Conference at the Institute of Historical Research in London was Food in History.

Histories of food have the potential to descend into a kind of pedantic, irritating antiquarianism, but they are also crucial to understanding histories of consumerism, agriculture, the body, and medicine. Indeed, anyone interested in medical histories of childhood has to focus on the significance of nutrition in efforts to improve children’s health during the early twentieth century.

We are – and have been for a very long time – what, how, and why we eat.


Bread Lines

Most of my friends went slightly mad as they finished their PhD dissertations; some cried compulsively, another forgot to eat, and I knew a couple who never wore anything other than pyjamas for months on end. My lowest ebb came when I developed a mild addiction to The Archers, a daily, fifteen-minute soap on Radio 4, featuring the activities of a large, extended family in the fictional village of Ambridge.

Described by Sandi Toksvig as ‘a memorable theme tune, followed by fifteen minutes of ambient farm noise and sighing,’ The Archers was created in 1950 as a kind of public information service: the BBC collaborated with the Ministry of Agriculture, Fisheries, and Food to broadcast information about new technologies and methods to farmers during a period when Britain was trying to increase agricultural productivity.

The series still has an agricultural story editor, and there’s at least one fairly awkward moment in each episode when Ruth Archer discusses milking machines, or Adam Macy mulls over the relative benefits of crop rotation. But its appeal lies now in its human drama. It’s been criticised – rightly – for avoiding complex or uncomfortable social issues, but, recently, it’s featured an excellent storyline involving the series’ poorest family, the Grundys.

Struggling with cuts in benefits and reduced wages, Emma Grundy runs out of money and takes refuge in a food bank, where she and her daughter are given a free lunch. In a sense, this thread dramatises the Guardian’s excellent Breadline Britain Project, which tracks the ‘impact and consequences of recession on families and individuals across the UK.’ The project has demonstrated convincingly that British people are eating worse as they become less financially secure.

One of its most arresting reports argues that Britain is in a ‘nutrition recession’:

Detailed data compiled for the Guardian, which analysed the grocery buying habits of thousands of UK citizens, shows that consumption of fat, sugar and saturates has soared since 2010, particularly among the poorest households, despite the overall volume of food bought remaining almost static. Food experts and campaigners called for government action to address concerns the UK faces a sustained nutritional crisis triggered by food poverty, which is in turn storing up public health problems that threaten to widen inequalities between rich and poor households.

The data show consumption of high-fat and processed foods such as instant noodles, coated chicken, meat balls, tinned pies, baked beans, pizza and fried food has grown among households with an income of less than £25,000 a year as hard-pressed consumers increasingly choose products perceived to be cheaper and more ‘filling’.

Over the same period, fruit and vegetable consumption has dropped in all but the most well-off UK households, and most starkly among the poorest consumers, according to the data.

It’s no wonder that so many columnists have evoked George Orwell’s description of the very poor eating habits of Wigan’s most impoverished residents during the Great Depression in The Road to Wigan Pier (1937). But the use of the term ‘breadline’ harks back to an earlier, and arguably more influential, study: Seebohm Rowntree’s Poverty: A Study of Town Life (1901). Rowntree (1871-1954), the son of the philanthropist and chocolate tycoon Joseph (1836-1925), had studied chemistry in Manchester before beginning work as a scientist in the family business in York.

Benjamin Seebohm Rowntree*

But, like his father – whose awareness of poverty had apparently been awakened by a trip to Ireland during the potato famine – Rowntree was moved by his encounters with York’s poor, which led to the first of three studies he undertook into poverty in the city. Inspired partly by Charles Booth’s The Life and Labour of the People (1886), which analysed the lives of London’s poor, in 1899 Rowntree conducted a survey of the working-class population of York. His findings caused a national outcry, as Ian Packer explains:

Poverty: A Study of Town Life (1901)…became an important subject of debate because of its assertion that not only were 28 percent of the total households in York in poverty but nearly 10 percent had incomes so low that they could not keep the members of the family in what Seebohm termed ‘physical efficiency,’ that is, provided with sufficient nutritional food to maintain health.

Rowntree used access to food as a means of gauging poverty, and it is here that he originated the idea of the ‘breadline’. Diana Wylie writes:

Rowntree latched on to food, or, more precisely, its lack, as a convenient and revealing means of measuring socially unacceptable levels of deprivation. He drew an absolute poverty line; below it, people did not earn enough to buy the ‘minimum necessities for the maintenance of merely physical efficiency.’ If working men did not consume 3,500 calories of food energy daily, and women four-fifths that amount, their intelligence became dulled and their stature stunted. This quite pragmatic definition of hunger, the ‘underfeeding’ that would destroy a person’s stamina, served for Rowntree as the index for judging Britain’s social progress.

This and Rowntree’s two subsequent studies of poverty in York, published in 1936 and 1951, became some of the most significant evidence on which arguments for the creation of a British welfare state were based. Rowntree’s point was that unemployment and low wages – and not bad eating or spending habits – were at the root of working-class poverty. It became, then, the ethical duty of the state to provide the means of freeing the population from the threat of hunger.

There is a direct line between Poverty: A Study of Town Life and the 1942 Beveridge Report, one of the most important documents of the twentieth century, which provided the foundation for Britain’s welfare state. But the influence of Rowntree’s work was felt beyond Yorkshire and the UK. In Starving on a Full Stomach (2001), Diana Wylie demonstrates the impact of the idea of the breadline on social scientists in South Africa during the early twentieth century.

In 1935, Edward Batson, a graduate of the London School of Economics, Beveridge enthusiast, and professor of social science at the University of Cape Town, arrived in South Africa and began work on ‘the first systematic survey of black urban poverty in sub-Saharan Africa.’

By 1938, Batson had surveyed 808 Cape Town households to discover how much they spent on six essential food groups, and compared their diet with the…minimum daily standard recommended in 1933 by the British Medical Association. His figures revealed that half of Cape Town’s Coloured people lived below the poverty datum line.

Like Rowntree, Batson pointed to income, not ignorance, as the root of poor diets:

Batson refuted some common social scientific assumptions such as that ignorance determined the poor diets of poor Capetonians, a perspective that, he said, had recently become ‘fashionable.’ … On the contrary, Batson wrote, most people simply could not afford to eat better.

Batson’s research was undertaken in the midst of widespread debates around the founding of a South African welfare state, the underpinnings of which were put in place during the 1920s and 1930s with legislation such as the 1928 Old Age Pensions Act, and the 1937 Children’s Act. But although his work concentrated on black people, the South African welfare state was established largely to benefit whites. Indeed, Jeremy Seekings makes the point that pensions legislation in the 1920s emerged out of concerns about protecting the white (and, to some extent, coloured) ‘deserving’ poor from a perceived black ‘threat.’ This meant that evidence of significant hunger among black people was not a force in the formulation of South African welfare policy, at least before the Second World War.

So whereas Rowntree’s research contributed to the creation of a universal welfare state in Britain, where all people qualified for assistance from the state through the provision of social security payments, and free healthcare and education, in South Africa, welfare was raced: the welfare state was created to protect and to maintain white power, and to entrench racial segregation.

Understanding the origins of the term ‘breadline’ helps us to see the extent to which attitudes towards, and efforts to eradicate, hunger have changed over time, and the ways in which they’re influenced by thinking about race as well as class: being hungry and white meant – and means – something different to being hungry and black.

* This photograph is from the National Portrait Gallery’s collection.

Sources

William Beinart, Twentieth-Century South Africa, new ed. (Oxford: Oxford University Press, 2001).

Timothy J. Hatton and Roy E. Bailey, ‘Seebohm Rowntree and the Postwar Poverty Puzzle,’ The Economic History Review, vol. 53, no. 2 (Aug. 2000), pp. 517-543.

Ian Packer, ‘Religion and the New Liberalism: The Rowntree Family, Quakerism, and Social Reform,’ Journal of British Studies, vol. 42, no. 2 (April 2003), pp. 236-257.

Jeremy Seekings, ‘“Not a Single White Person Should be Allowed to Go Under”: Swartgevaar and the Origins of South Africa’s Welfare State, 1924-1929,’ Journal of African History, vol. 48, no. 3 (Nov. 2007), pp. 375-394.

Diana Wylie, Starving on a Full Stomach: Hunger and the Triumph of Cultural Racism in Modern South Africa (Charlottesville and London: University Press of Virginia, 2001).


Food Links, 21.11.2012

The lawyers who took on Big Tobacco take on Big Food.

Britain’s nutrition recession.

Pesticides are killing bumblebees.

Obama did best in those states which watch Top Chef.

Improving Kenyan children’s access to good nutrition.

The implications of buying more food from China.

Apple and pear farmers face increasing challenges in Britain.

The myth of breakfast, lunch, and dinner. (Thanks, Lindie and Milli!)

The success of roof-top gardening in Mexico City.

How to eat like the president of the US.

The history of the Jaffa orange.

The Twinkie: can it survive? And what are the alternatives?

The New Yorker takes on THAT review of Guy’s American Kitchen in Times Square.

Trish Deseine is excellent on chefs’ egos and why we should eat real.

Why we don’t have to drink eight glasses of water a day.

There are growing tensions around keeping chickens in Brooklyn.

The link between cooking and the evolution of the human brain.

Tan Twan Eng on street food in Penang.

This is incredible: sushi chefs battle sea monsters.

A cultural history of the spoon.

In praise of the English apple.

On Denis Papin.

What to do if your jam doesn’t set.

Nelson Mandela‘s favourite food.

Amazing anatomically-correct cakes.

When is a food truck more than a food truck?

The London restaurant Tube map.

Food-based idioms.

The history of toad-in-the-hole. (Thanks, Deva!)

A cheeseburger made out of leaves.

Fifty Shades of Chicken. (Thanks, Justin!)

Teabag tags.

An attempt to make cinnamon buns.

The chemistry behind food pairings. (Thanks, Raffaella!)

Stop de-seeding tomatoes.

Five $10 dinners.

Which are the best gins?

Cakes throughout American history.

Rothko paintings recreated with rice.

Exploding fraudulent ketchup.

Old Finnish drink labels.

Are food bloggers pushovers?

Are there any decent substitutes for truffles?

The slow spread of Vegemite.

These are courtesy of my mum:

An ancient recipe.

Is the food movement real?

The dinners of old London.

How are hot dogs made?

Toothbutter.

The vast scale of counterfeit food in Italy.

Forensic scientists battle food fraud.

The Story of the Teeth

I was born with comically bad teeth. I have only one wisdom tooth – welded firmly to my jaw – and had multiple permanent teeth for some of my milk teeth, and none for others. (I still have two milk teeth.) That I don’t look like a caricature of a Blackadder-ish wisewoman is down entirely to my parents’ swift removal of me to a brilliant orthodontist who – with the aid of braces, plates, and two operations – gave me a decent set of teeth.

I spent rather a lot of my childhood and adolescence in pain, as my teeth and jaw were cajoled and wired into place. (I must add, though, that my parents provided me with an endless supply of sympathy, and soft, delicious things to eat, as well as plenty to read.) It was partly for this reason that I never understood the outrage that greeted the news of Martin Amis’s decision to spend around £20,000 on fixing his teeth, ending decades of persistent toothache.

Of course, much of the anger about this amount was linked to his lucrative move, in 1995, from the late Pat Kavanagh, the literary agent who helped him to build his career, to Andrew Wylie, causing an acrimonious rift with Julian Barnes, Kavanagh’s husband. Indeed, AS Byatt later apologised to him for having criticised both his dental work and his acceptance of an extraordinarily high advance negotiated by Wylie, explaining that she had had toothache at the time.

In his memoir, Experience (2000), Amis writes evocatively of the hell of toothache: that it seems to be the only manifestation of dull pain which can’t be blocked out or ignored. It demands attention. (Apparently James Joyce and Vladimir Nabokov were fellow martyrs to tooth pain. There is, clearly, a link between toothache and stylistic experimentation.)

It’s no wonder that modern dentistry is usually cited as one of the best reasons against time travel. The dentist Horace Wells (1815-1848) originated the use of nitrous oxide (laughing gas) as an anaesthetic during dental surgery. Wells died – partly as a result of an addiction to chloroform, ironically – before nitrous oxide became the anaesthetic of choice among dentists, rather than ether, for example. In South Africa, I’ve found evidence to suggest that it was possible to have teeth extracted under anaesthetic from around the 1880s – although it’s likely that this was available to wealthier patients before then.

In fact, the state of one’s teeth has been a potent indicator of class difference since at least the nineteenth century. Access to dentists and to technology – powders, pastes – to prevent tooth decay meant that the middle and upper classes had better teeth than the poor, whose diets tended to feature substantial amounts of tooth-eroding sugar, and whose visits to dentists – who usually had little or no training – were made only in cases of dire emergency.

In the pub conversation described in TS Eliot’s The Waste Land (1922), the speaker refers to a friend, Lil, who worries that her recently demobbed husband will leave her, partly because she had aged so much during the recent Great War:

Now Albert’s coming back, make yourself a bit smart.
He’ll want to know what you done with that money he gave you
To get yourself some teeth. He did, I was there.
You have them all out, Lil, and get a nice set

As false teeth became cheaper and more widely available, it seemed to make better sense to have all one’s teeth out at once, rather than suffer a lifetime’s worth of dental pain.

We attach a wide range of meanings to teeth: from the elongated incisors of vampires, to the whiter-than-white rictus grins of celebrities. My friend Shahpar in Dhaka points out that in south Asia, some Muslims associate oral hygiene using the bark of the miswak tree with holiness, as they believe that the Prophet used the bark to clean his teeth. More generally, people in the region place an exceptionally high value on having a healthy, full mouth of teeth – reflected in some truly appalling jokes.

I’ve been reading about anxieties about oral hygiene and dentistry recently, hence this interest in shifting cultural and social constructions of teeth. During the early decades of the twentieth century, global anxieties about infant mortality and childhood health resulted in a heightened concern about the care of children’s teeth. This was part of an infant welfare movement which had emerged all over the world at the end of the nineteenth century, in response to unease about high rates of infant mortality (usually as a result of diarrhoea), the apparently failing health of urban working-class men, and eugenicist anxieties about maintaining white control over political, social, and economic power.

Denture Shop, India, 1946*

Although child welfare campaigners during the nineteenth century drew parents’ attention to the need to instil in their children good habits of dental hygiene, the discourse around the state of children’s teeth during the early twentieth century differed. To be fair, rotting teeth and gum disease are the cause of a range of health problems, and it makes sense to direct public health policy towards making dental services freely available.

But particularly during the 1920s and 1930s, preventing poor oral hygiene and tooth decay began to take on moral overtones. Doctors and child welfare activists increasingly understood bad oral health as a signifier of chaotic, ‘unscientific’ upbringings – which, they believed, tended to occur in working-class families. Writing about Major General Sir Frederick Barton Maurice’s influential 1903 study of the large numbers of volunteers who were deemed to be physically unfit to fight in the South African War (1899-1902), Anna Davin explains:

If, as it seemed, these puny young men were typical of their class (‘the class which necessarily supplies the ranks of our army’), the problem was to discover why [they suffered from so many physical ailments], and to change things. Proceeding to speculate on possible explanations, [Maurice] accounted for the prevalence of bad teeth among recruits by unsuitable food in childhood (‘the universal testimony that I have heard is that the parents give the children even in infancy the food from off their own plates’), and decided at once that ‘the great original cause’ (of bad teeth at this point, but subsequently, and with as little evidence, of all the ill-health) was ‘ignorance on the part of the mothers of the necessary conditions for the bringing up of healthy children’.

This was one of several essays and articles which argued that poor nutrition in childhood – most notably feeding babies food meant for adults – caused ‘bad teeth’ and, thus, compromised health in adulthood. The best means of remedying this situation was to encourage mothers (and in the minds of doctors, welfare campaigners, and policy makers, these mothers were inevitably working-class) to adhere to ‘scientific principles’ in raising their children, chief of which was providing babies and young children with a diet calibrated precisely to their needs. These principles and diets were formulated by health professionals – medical men – and they, as well as nurses, health visitors, and others, encouraged mothers to abandon ‘superstitious’ and ‘ignorant’ childrearing practice in favour of properly ‘scientific’ guidelines.

Those doctors and campaigners influenced by eugenics argued, though, that children’s moral character depended on good dental hygiene. (Susanne Klausen explains what we mean by ‘eugenics’: ‘in its broadest definition…eugenics was concerned with improving the qualities of the human race either through controlling reproduction or by changing the environment or both.’) In The Story of the Teeth and How to Save Them (1935), Dr Truby King, the extraordinarily influential founder of the global mothercraft movement, argued that the health and strength of babies’ and children’s teeth depended, firstly, on the health of the pregnant and lactating mother, and, secondly, on proper nutrition.

Breastfeeding – not on demand, but at regular intervals depending on the age of the baby – was, he believed, the foundation for the development of strong teeth and jaws. The introduction of nutritious food once the baby was six months old should, he wrote, encourage the child to chew, thus stimulating the nerves and blood vessels in the face, causing the milk and permanent teeth to emerge quickly and cleanly.

King had dire warnings for those parents – particularly mothers – who, he suggested, ‘gave in’ to the demands of their babies and children:

Decay of the teeth is not a mere chance unfortunate disability of the day – it is the most urgent and gravest of all diseases of our time – a more serious national scourge than Cancer or Consumption….

Why? Because oral hygiene and healthy teeth ensured that the citizens of the future would be morally good, productive, conscientious individuals:

‘Building the Teeth’ and ‘Forming a Character’ are parts of construction of the same edifice – standing in the relationship of the underground foundations of a building to the superstructure.

Our dentists tell us that nowadays when they insist on the eating of crusts and other hard food [necessary for encouraging the child to chew and, thus, in King’s view, develop its jaw], the mother often says ‘Our children simply won’t!’ Such children merely exemplify the ineptitude of their parents – parents too sentimental, weakly emotional, careless, or indifferent to train their children properly. The ‘can’t-be-so-cruel’ mother who cries half the night and frets all day on account of the mother’s failure to fulfil one of the first of maternal duties, should not blame Providence or Heredity because her progeny has turned out a ‘simply-won’t’ in infancy, and will become a selfish ‘simply-can’t’ in later childhood and adolescence. Power to obey the ‘Ten Commandments,’ or to conform to the temporal laws and usages of Society is not to be expected of ‘SPOILED’ babies when they reach adult life. …

Unselfishness and altruism are not the natural outcome of habitual self-indulgence. Damaged health and the absence of discipline and control in early life are the natural foundations of failure later on – failure through the lack of control which underlies all weakness of character, vice, and criminality.

Good teeth meant good citizens. Bizarre as this thinking may have been, it did – often – have positive outcomes. For instance, similar views held among South African doctors and child welfare campaigners were behind the establishment of a network of dental clinics for poor children – albeit mainly white children – during the 1920s and 1930s. Children whose parents could not afford private dental care could attend these clinics gratis.

One of the most striking characteristics of eugenicist thinking was its tendency to blame mothers’ ignorance, stupidity, or credulousness for the poor health of their babies and children, ignoring the environmental factors – the contexts – in which they raised their offspring. King’s implication was that mothers were ultimately responsible for the ‘vice and criminality’ of society: if they, he wrote, had simply disciplined their children, feeding them properly and ignoring their demands, then all adults would be productive, self-controlled citizens.

Although King’s reasoning is demonstrably bonkers, this tendency to blame (single) mothers for children’s anti-social behaviour persists, particularly within right-wing political and media circles. This is a strategy which absolves the state and other institutions of any responsibility for ensuring that children are adequately cared for.

The study of attitudes towards teeth and dentistry reveals a range of beliefs about parenting, childhood, and nutrition. It seems, then, that we are not only what we eat, but also how we eat.

Sources cited here:

Anna Davin, ‘Imperialism and Motherhood,’ History Workshop, no. 5 (Spring 1978), pp. 9-65.

Susanne Klausen, ‘“For the Sake of the Race”: Eugenic Discourses of Feeblemindedness and Motherhood in the South African Medical Record, 1903-1926,’ Journal of Southern African Studies, vol. 23, no. 1 (March 1997), pp. 27-50.

Antora Mahmud Khan and Syed Masud Ahmed, ‘“Why do I have to Clean Teeth Regularly?” Perceptions and State of Oral and Dental Health in a Low-income Rural Community in Bangladesh’ (Dhaka: BRAC, 2011).

Truby King, The Story of the Teeth and How to Save Them (Auckland: Whitcombe & Tombs, 1935).

Further Reading:

Naomi Murakawa, ‘Toothless: The Methamphetamine “Epidemic,” “Meth Mouth,” and the Racial Construction of Drug Scares,’ Du Bois Review, vol. 8, no. 1 (2011), pp. 219-228.

Alyssa Picard, Making the American Mouth: Dentists and Public Health in the Twentieth Century (New Brunswick: Rutgers University Press, 2009).

David Sonstrom, ‘Teeth in Victorian Art,’ Victorian Literature and Culture, vol. 29, no. 2 (2001), pp. 351-382.

* This photograph is from Retronaut.


No More Cakes or Biscuits

This year marks the 110th anniversary of the end of the South African War (1899-1902), a conflict which, it’s not too much to claim, produced modern South Africa, geographically, politically, economically, and, to some extent, socially. There is, unsurprisingly, a vast scholarship on the war, ranging from, for example, more recent sallies into the multiple ways in which it’s been commemorated, and medical histories of the concentration camps established for Boer and African refugees, to more old-fashioned accounts of its battles and sieges.

There are a few – interesting – lacunae in this research, and one of these is around food. For various reasons I’ve recently been doing some work on children in the war, and I’ve been struck by how many of the sources I’ve read are preoccupied with food. This isn’t really surprising. As Lizzie Collingham demonstrates in her recent book, The Taste of War: World War Two and the Battle for Food, it’s during war that the ways in which food is processed, distributed, sold, and valued become particularly significant to states. Food can be made a weapon of war.

Ironically, diets often improve during times of war, and this is particularly true of people who, in peacetime, can’t afford to feed themselves well. Italy during the First World War is an excellent example of this. Italian diets declined in the late nineteenth century because of exponential population growth and poor systems of distribution. The majority of Italians ate what was, essentially, a nutritionally inadequate pre-industrial diet based on cereals and legumes, supplemented occasionally with vegetables, and, even more rarely, with meat and dairy products.

What changed with the First World War was that the Italian state took control over the distribution of food. Carol Helstosky explains:

Wartime ministers were reluctant to take action, but their policies made a dramatic impact on food habits. Italy was ill prepared for war and survived on allied loans and wheat shipments. This situation benefitted consumers, who enjoyed cheap, subsidised bread and could afford to purchase foods like meat, milk, or fresh produce. Wheat bread and pasta became the foundation of diet for many Italians, replacing corn, chestnuts, and rice. … At the war’s end, public debate about the bread subsidy indicated that state intervention brought Italy to a political crossroads: should the government continue to foot the bill for a higher standard of food consumption? Would consumers be forced to choose between the necessity of bread and the luxury of meat as bread prices adjusted to the market?

Something similar occurred in Britain during the Second World War, where the strict system of rationing controlled by Lord Woolton’s Ministry of Food ensured not only that there was enough food to go around, but that most people ate fairly well. All adults received regular – if small – rations of butter, meat, sugar, and eggs. Everyone was encouraged to eat fruit, vegetables, and fish. For poor families who had subsisted on cheap white bread and sweet tea before the war, this represented a considerably healthier and more varied diet.

A combination of increased exercise and this standardised, if limited, diet – relatively low in saturated fat and sugar – meant that the health of the British population actually improved in the 1940s. This is not, though, to romanticise the effects of conflict on people’s diets. Millions of people died of starvation during the Second World War, as Timothy Snyder explains in his review of The Taste of War:

The Germans and the Japanese lost the war and returned to home territory and home islands. The Germans had hoped to supply themselves for eternity with grain from the rich black soil of Ukraine; but in fact they got very little. This is because, as Collingham demonstrates, war itself tends to disrupt labour, harvests and markets. Even if the intention of the Germans had not been to cause starvation, invasions tend to do so. Some two million people starved to death in French Indochina. At least 10 million starved in China, whose army was living from the land on its own territory. About three million starved in Bengal in British India.

This latter description of disrupted and destroyed food supplies seems to apply more accurately to the South African War. In fact, understanding how and why people were able to access food during the conflict helps us to create a more nuanced understanding of power within South African society during this period. There was enough food to go around – the tragedy was that it didn’t get to those people who needed it.

Indeed, the images most usually associated with the conflict are photographs of emaciated Boer children in concentration camps. These are both testimony to the war’s heavy toll on civilian lives – around 28,000 Boers died in the camps, 22,000 of them under the age of sixteen – as well as an indictment of British mismanagement of the concentration camps. And this, of course, is to say nothing of the even worse organised and provisioned camps for Africans, where both adults and children were used as free labour.

People went hungry in the camps because the British army hugely underestimated the logistical challenge of supplying around 110,000 Boer inhabitants with food and water. The first camps were established early in 1901, in response to the Boer decision to switch to guerrilla warfare after the British annexation of the Transvaal in October 1900. Boer commandos relied on the network of homesteads across South Africa’s rural interior for support, and it was these households – run overwhelmingly by Boer women – that the British targeted in their scorched earth tactics to end the guerrilla war.

Homesteads were burned or dynamited, crops and livestock were either commandeered or destroyed, and Boer women and children and their African servants were sent to camps. Rations were meagre. Emily Hobhouse, the British humanitarian who campaigned to bring the appalling mismanagement of the Boer – but not the African – camps to the attention of British politicians, wrote to her brother in March 1901:

Couldn’t you and your household try living for – say – a month on the rations given here in the camps? I want to find out whether it is the small amount of food the children suffer from so much, or its [sic] monotony or the other abnormal conditions under which they live. …

Coarse meal: 1lb a head daily

Meat (with bone): ½lb a head daily

Coffee: 1oz a head daily

Sugar: 2oz a head daily

Salt: ½oz a head daily

You must promise faithfully to abjure every other meat and drink – only adding for the children one-twelfth part of a tin of condensed milk a day.

Leonard Hobhouse did not do as his sister suggested, but her speculation that this inadequate diet, alongside the chaos and poor sanitation of the camps, left children particularly vulnerable to the epidemics of measles and typhoid which swept the camps, was correct.
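
As a very rough check on Hobhouse’s intuition, the ration list above can be costed in calories. The sketch below is mine, not Hobhouse’s – the energy densities and the edible fraction of ‘meat with bone’ are modern ballpark assumptions – so the total is an order-of-magnitude figure only.

```python
# Approximate energy content of the daily camp ration quoted above.
# Energy densities (kcal per 100 g) and the edible fraction of 'meat
# with bone' are modern ballpark assumptions, not figures from Hobhouse.

LB, OZ = 453.6, 28.35  # grams per pound / per ounce

ration = {
    # item: (grams per head per day, kcal per 100 g)
    "coarse meal": (1.0 * LB, 360),          # maize or wheat meal
    "meat, ex-bone": (0.5 * LB * 0.6, 250),  # assume 60% of weight is edible
    "sugar": (2.0 * OZ, 400),
    "coffee": (1.0 * OZ, 0),                 # negligible once brewed
    "salt": (0.5 * OZ, 0),
}

total_kcal = sum(grams * kcal_100g / 100 for grams, kcal_100g in ration.values())
print(f"~{total_kcal:,.0f} kcal per head per day")  # roughly 2,200 kcal
```

On these assumptions the ration supplies roughly 2,200 kcal a day – well short of the 3,500 kcal Rowntree used as his benchmark for working men – and, just as strikingly, it contains no fresh produce or milk at all, which bears out Hobhouse’s question about monotony as much as quantity.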

Because of Hobhouse’s campaigning, rations did improve in the camps for Boers. Race, clearly, determined which interned people had access to food: Africans received even smaller rations than did Boers, and these did not increase after the international outcry about the concentration camps – summed up, famously, in Henry Campbell-Bannerman’s ‘methods of barbarism’ speech in June 1901.

Even within the Boer camps, though, there were divisions between those women who were able to buy provisions from the British army, and those who had arrived without money or possessions – and a large proportion of the Boer families in the camps were very poor.

In Johannesburg, this link between class and access to food was particularly evident. Isabella Lipp, the wife of the manager of the African Banking Corporation, kept a diary between the outbreak of war in October 1899 and the capture of Johannesburg by the British in June the following year. Although she complained occasionally of certain foodstuffs – butter, eggs, meat – not being available, throughout this early phase of the war she and her husband were well fed. But this was not the case for the impoverished Boer women living in the city:

Thirty women, wives, etc. of the police (Zarps) now at the front ran ‘Amok’ as the newspaper heads it, poor things they and their children were starving so they made a desperate raid on some small provisions stores and in spite of the resistence [sic] of special police and constables, effected an entrance and helped themselves to food and who could blame them, certainly not their paternal Government who had neglected giving them their absent breadwinners wages which were due at the end of October.

The situation was considerably more desperate in the towns – Ladysmith, Mafeking, and Kimberley – to which the Boers laid siege during the first six months of the war. As food stocks ran low, Africans were either forced out or encouraged to leave – putting them at the mercy of Boer soldiers – to reduce the numbers of people dependent on rations.

In Kimberley, Lillian Hutton, the wife of a local minister, kept a diary over the course of the siege. The slow reduction of the food available to the inhabitants of the town – and rationing was introduced in December 1899 – signalled the ever more desperate state of Kimberley, as fresh supplies were halted by the Boers. While she noted with amusement in November that Colonel Robert Kekewich – under whose command Kimberley fell – had ordered that ‘No more cakes or biscuits to be made’, she became increasingly critical of the British army as the siege progressed.

As beef and mutton ran out, horses and donkeys were slaughtered for meat. Milk became scarce. She wrote in January 1900:

Mr Alec Hall’s cow, that was giving good milk, has been commandeered by the military to be killed, in spite of the fact that children and sick folk are dying in nos. for want of milk. … Mr Wilkinson had a splendid milk cow, which had just calved, when it was commandeered. These things are a scandal to the military rule of the town. The officers are living on the best of everything in the midst of widespread sickness and want and starvation.

White babies went short of fresh milk, but it’s unlikely that black babies received adequate nutrition at all: Africans in Kimberley were allotted only mealie meal. Of the 1,500 people who died during the siege – which was lifted in February 1900 – nearly all were African.

So although everyone – in Kimberley and the other siege towns, in Johannesburg, in the concentration camps, and in all the parts of South Africa under military command – experienced the effects of either government or army control of the food supply, access to food was still mediated by race and class.

The study of food in the South African War also sheds light on contemporary concerns about food. Firstly, as diaries and letters written during the conflict demonstrate, most middle-class and, indeed, poor inhabitants of South African towns and cities at the turn of the century were reliant on shops to buy their food. The idea that ‘we’ (whoever ‘we’ may be) once (whenever that was) grew all our own food is disproved fairly neatly by desperate Kimberley housewives unable to find eggs, milk, or fresh vegetables at the grocer. In fact, Lillian Hutton commented on the novelty of people in Kimberley giving over their flower gardens to vegetables.

Secondly, there has been a vogue recently for holding up Britain’s experience of rationing as a potential solution both to the country’s obesity epidemic and to the current global food crisis. While I agree that eating less meat and dairy, using up leftovers, and other wartime strategies are excellent means of encouraging healthy eating and reducing food waste, we need to be careful of fetishising austerity.

And, thirdly: we must acknowledge the significance of distribution systems to ensuring that all people receive an adequate supply of food. When shops in rural areas are badly provisioned; when social grants are not paid timeously; when officials steal food intended for the very poor, people go hungry.

Sources

Elizabeth Ann Cripps, ‘Provisioning Johannesburg, 1886-1906’ (MA thesis, Unisa, 2012).

Carol Helstosky, Garlic & Oil: Food and Politics in Italy (Oxford and New York: Berg, [2004] 2006).

Emily Hobhouse: Boer War Letters, ed. Rykie van Reenen (Cape Town and Pretoria: Human & Rousseau, 1984).

Bill Nasson, The Boer War: The Struggle for South Africa (Stroud: The History Press, [2010] 2011).


A Sporting Chance

My expectations of the London Olympics’ opening ceremony were so low that, I suppose, I would have been impressed if it had featured Boris as Boudicca, driving a chariot over the prostrate figures of the Locog committee. (Actually, now that I think about it, that would have been fairly entertaining.)

I wasn’t terribly impressed by the run-up to the London Olympics. I was appalled by the organising committee’s slavishly sycophantic attitude towards its sponsors and their ‘rights’ – which caused it to ban home-knitted cushions from being distributed to the Olympic athletes, and to require shops and restaurants to remove Olympic-themed decorations and products – as well as by the rule that online articles and blog posts may not link to the official 2012 site if they’re critical of the games, the decision to make the official entrance of the Olympic site a shopping mall, and the creation of special lanes for VIP traffic.

But watching the opening ceremony last night, I was reduced to a pile of NHS-adoring, Tim Berners-Lee worshipping, British children’s literature-loving goo. Although a reference to the British Empire – other than the arrival of the Windrush – would have been nice, I think that Danny Boyle’s narrative of British history which emphasised the nation’s industrial heritage, its protest and trade union movements, and its pop culture, was fantastic.

As some commentators have noted, this was the opposite of the kind of kings-and-queens-and-great-men history curriculum which Michael Gove wishes schools would teach. Oh and the parachuting Queen and Daniel Craig were pretty damn amazing too.

There was even a fleeting, joking reference to the dire quality of British food during the third part of the ceremony. There was something at once apt and deeply ironic about this. On the one hand, there has been extensive coverage of Locog’s ludicrous decision to allow manufacturers of junk food – Coke, Cadbury’s, McDonald’s – not only to be official sponsors of a sporting event, but to provide much of the catering. (McDonald’s even tried to ban other suppliers from selling chips on the Olympic site.)

But, on the other, Britain’s food scene has never been in better shape. It has excellent restaurants – and not only at the top end of the scale – and thriving and wonderful farmers’ markets and street food.

It’s this which makes the decision not to open up the catering of the event to London’s food trucks, restaurants, and caterers so tragic. It is true that meals for the athletes and officials staying in the Village have been locally sourced and made from ethically-produced ingredients, and this is really great. But why the rules and regulations which actually make it more difficult for fans and spectators to buy – or bring their own – healthy food?

Of course, the athletes themselves will all be eating carefully calibrated, optimally nutritious food. There’s been a lot of coverage of the difficulties of catering for so many people who eat such a variety of different things. The idea that athletes’ performance is enhanced by what they consume – supplements, food, and drugs (unfortunately) – has become commonplace.

Even my local gym’s café – an outpost of the Kauai health food chain – serves meals which are, apparently, suited for physically active people. I’ve never tried them, partly because the thought of me as an athlete is so utterly nuts. (I’m an enthusiastic, yet deeply appalling, swimmer.)

The notion that food and performance are linked in some way has a long pedigree. In Ancient Greece, where diets were largely vegetarian, but supplemented occasionally with (usually goat) meat, evidence suggests that athletes at the early Olympics consumed more meat than usual to improve their performance. Ann C. Grandjean explains:

Perhaps the best accounts of athletic diet to survive from antiquity, however, relate to Milo of Croton, a wrestler whose feats of strength became legendary. He was an outstanding figure in the history of Greek athletics and won the wrestling event at five successive Olympics from 532 to 516 B.C. According to Athenaeus and Pausanius, his diet was 9 kg (20 pounds) of meat, 9 kg (20 pounds) of bread and 8.5 L (18 pints) of wine a day. The validity of these reports from antiquity, however, must be suspect. Although Milo was clearly a powerful, large man who possessed a prodigious appetite, basic estimations reveal that if he trained on such a volume of food, Milo would have consumed approximately 57,000 kcal (238,500 kJ) per day.
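
Grandjean’s scepticism is easy to reproduce with back-of-the-envelope arithmetic. A minimal sketch – the energy densities are my own modern ballpark figures, not values from her paper:

```python
# Rough energy content of Milo of Croton's reported daily diet.
# Energy densities are modern approximations (my assumptions), so the
# result is an order-of-magnitude estimate only.

KCAL_PER_KG_MEAT = 2500    # cooked red meat, rough average
KCAL_PER_KG_BREAD = 2700   # coarse wheat bread
KCAL_PER_L_WINE = 850      # unfortified wine

meat_kg, bread_kg, wine_l = 9, 9, 8.5  # quantities from the quoted account

total_kcal = (meat_kg * KCAL_PER_KG_MEAT
              + bread_kg * KCAL_PER_KG_BREAD
              + wine_l * KCAL_PER_L_WINE)

print(f"~{total_kcal:,.0f} kcal per day")
# -> roughly 54,000 kcal: the same absurd order of magnitude as the
#    ~57,000 kcal (238,500 kJ) Grandjean cites, or well over twenty
#    times a modern adult's needs.
```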

Eating more protein – although perhaps not quite as much as reported by Milo of Croton’s fans – helps to build muscle, and would have given athletes an advantage over other, leaner competitors.

Another ancient dietary supplement seems to have been alcohol. Trainers provided their athletes with alcoholic drinks before and after training – in much the same way that contemporary athletes may consume sports drinks. But some more recent sportsmen seem to have gone a little overboard, as Grandjean notes:

as recently as the 1908 Olympics, marathon runners drank cognac to enhance performance, and at least one German 100-km walker reportedly consumed 22 glasses of beer and half a bottle of wine during competition.

Drunken German walker: I salute you and your ability to walk in a straight line after that much beer.

The London Olympic Village is, though, dry. Even its pub only serves soft drinks. With the coming of the modern games – which coincided with the development of sport and exercise science in the early twentieth century – diets became the subject of scientific enquiry. The professionalisation of sport – with athletes more reliant on doing well in order to make a living – only served to increase the significance of this research.

One of the first studies on the link between nutrition and the performance of Olympic athletes was conducted at the 1952 games in Helsinki. The scientist E. Jokl (about whom I know nothing – any help gratefully received) demonstrated that those athletes who consumed fewer carbohydrates tended to do worse than those who ate more. Grandjean comments:

His findings may have been the genesis of the oft-repeated statement that the only nutritional difference between athletes and nonathletes is the need for increased energy intake. Current knowledge of sports nutrition, however, would indicate a more complex relationship.

As research into athletes’ diets has progressed, so fashions for particular supplements and foods have emerged over the course of the twentieth century. Increasing consumption of protein and carbohydrates has become a common way of improving performance. Whereas during the 1950s and 1960s, athletes simply ate more meat, milk, bread, and pasta, since the 1970s, a growing selection of supplements has allowed sportsmen and -women to add more carefully calibrated and targeted forms of protein and carbohydrates to their diets.

Similarly, vitamin supplements have been part of athletes’ diets since the 1930s. Evidence from athletes competing at the 1972 games in Munich demonstrated widespread use of multivitamins, although now, participants tend to choose more carefully those vitamins which produce specific outcomes.

But this history of shifting ideas around athletes’ diets cannot be understood separately from the altogether more shadowy history of doping – of using illicit means of improving one’s performance. Even the ancient Greeks and Romans used stimulants – ranging from dried figs to animal testes – to suppress fatigue and boost performance.

More recently, some of the first examples of doping during the nineteenth century come from cycling (nice to see that some things don’t change), and, more specifically, from long-distance, week-long bicycle races which depended on cyclists’ reserves of strength and stamina. Richard IG Holt, Ioulietta Erotokritou-Mulligan, and Peter H. Sönksen explain:

A variety of performance enhancing mixtures were tried; there are reports of the French using mixtures with caffeine bases, the Belgians using sugar cubes dipped in ether, and others using alcohol-containing cordials, while the sprinters specialised in the use of nitroglycerine. As the race progressed, the athletes increased the amounts of strychnine and cocaine added to their caffeine mixtures. It is perhaps unsurprising that the first doping fatality occurred during such an event, when Arthur Linton, an English cyclist who is alleged to have overdosed on ‘tri-methyl’ (thought to be a compound containing either caffeine or ether), died in 1886 during a 600 km race between Bordeaux and Paris.

Before the introduction of doping regulations, the use of performance enhancing drugs was rife at the modern Olympics:

In 1904, Thomas Hicks, winner of the marathon, took strychnine and brandy several times during the race. At the Los Angeles Olympic Games in 1932, Japanese swimmers were said to be ‘pumped full of oxygen’. Anabolic steroids were referred to by the then editor of Track and Field News in 1969 as the ‘breakfast of champions’.

But regulation – the first anti-doping tests were undertaken at the 1968 Mexico games – didn’t stop athletes from doping: the practice simply went underground. The USSR and East Germany allowed their representatives to take performance enhancing drugs, and an investigation undertaken after Ben Johnson was disqualified for doping at the Seoul games revealed that at least half of the athletes who competed at the 1988 Olympics had taken anabolic steroids. In 1996, some athletes called the summer Olympics in Atlanta the ‘Growth Hormone Games’, and the 2000 Olympics were later dubbed the ‘Dirty Games’ after Marion Jones was retrospectively disqualified for doping.

At the heart of the issue of doping and the use of supplements is the problem of distinguishing between legitimate and illegitimate means of enhancing performance. The idea that it is unfair for athletes to take drugs to run, swim, or cycle faster, or to jump further and higher, is a relatively recent one. It’s worth noting that the World Anti-Doping Agency, which is responsible for establishing and maintaining standards for anti-doping work, was formed only in 1999.

What makes anabolic steroids different from consuming high doses of protein, amino acids, or vitamins? Why, indeed, was Caster Semenya deemed to have an unfair advantage at the 2009 IAAF World Championships, while the blade-running Oscar Pistorius was not?

I’m really pleased that both Semenya and Pistorius are participating in the 2012 games – I’m immensely proud that Semenya carried South Africa’s flag into the Olympic stadium – but their experiences, as well as the closely intertwined histories of food supplements and doping in sport, demonstrate that the idea of an ‘unfair advantage’ is a fairly nebulous one.

Further Reading

Elizabeth A. Applegate and Louis E. Grivetti, ‘Search for the Competitive Edge: A History of Dietary Fads and Supplements,’ The Journal of Nutrition, vol. 127, no. 5 (1997), pp. 869S-873S.

Ann C. Grandjean, ‘Diets of Elite Athletes: Has the Discipline of Sports Nutrition Made an Impact?’ The Journal of Nutrition, vol. 127, no. 5 (1997), pp. 874S-877S.

Richard IG Holt, Ioulietta Erotokritou-Mulligan, and Peter H. Sönksen, ‘The History of Doping and Growth Hormone Abuse in Sport,’ Growth Hormone & IGF Research, vol. 19 (2009), pp. 320-326.

Creative Commons License
Tangerine and Cinnamon by Sarah Duff is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Children’s Food

I’m writing this post while listening to this week’s podcast of BBC Radio 4’s Food Programme. The episode is about nine-year-old food writer Martha Payne, whose blog about the dinners served at her school became the cause of a strange and troubling controversy a month ago.

Martha uses her blog, NeverSeconds, to review the food she eats at school. As Jay Rayner points out, although she may criticise – rightly – much of what the school provides for lunch, NeverSeconds is not intended as a kind of school dinners hatchet job. She rates her meals according to a Food-o-Meter, taking into account how healthy, but also how delicious, they are.

As her blog has grown in popularity, children from all over the world have contributed photographs and reviews, and it’s partly this which makes NeverSeconds so wonderful: it’s a space in which children can discuss and debate food.

NeverSeconds came to wider – global – notice when the Argyll and Bute Council tried to shut it down in June, after the Daily Record published an article featuring Martha cooking with the chef Nick Nairn, headlined ‘Time to fire the dinner ladies.’ The blog’s honest descriptions and pictures of some of the food served to schoolchildren can’t have pleased councillors either.

As Private Eye (no. 1317) points out, the council’s bizarre – and futile – attempt to silence a blog probably had as much to do with internal politicking and minor corruption as anything else, but the furore which erupted after the ban also said a great deal about attitudes towards food and children.

What is really scandalous about the blog is that it reveals how bad – how unhealthy, how heavily processed – school meals can be. When Jamie Oliver launched a campaign in 2005 to improve the quality of school dinners in the UK, his most shocking revelation was not, I think, that children were being fed Turkey Twizzlers and chips for lunch, but, rather, that the British government was willing to spend so little on what children eat at school. Last year, the state spent an average of 67p per primary school pupil per meal. This rose to 88p for those in high school.

Michael Gove has recently announced another inquiry into the quality of school meals – this time headed up by the altogether posher-than-Jamie Henry Dimbleby, a co-founder of the Leon chain of restaurants, who also seems to spend the odd holiday with the Education Secretary in Marrakech. It’s a tough life.

But as Sheila Dillon comments during this episode of the Food Programme:

Martha Payne, a nine-year-old who seems to understand better than many adults that dinner ladies, or even individual school kitchens, are not the source of the school dinner problem. It has far deeper roots.

When did it become acceptable to serve schoolchildren junk food for lunch? The way we feed children tells us a great deal about how we conceptualise childhood. Or, put another way, what we define as ‘children’s food’ says as much about our attitudes towards food as it does about children.

The idea that children should be fed separately from adults has a relatively long pedigree. The Victorians argued that children – and women – should be fed bland, carbohydrate-heavy meals to avoid straining their delicate digestive systems. Fruit, meat, spices, and fresh vegetables were to be eaten only in strict moderation.

There is, of course, a disconnect between what experts – medical professionals, childrearing specialists – recommend, and what people actually eat. In the late nineteenth-century Cape Colony, for instance, the pupils at an elite girls’ school near Cape Town were fed a diet rich in red meat and fresh fruit and vegetables.

But the belief that children’s bodies are delicate and potentially vulnerable to disruption was an indicator of shifts in thinking about childhood during the mid and late nineteenth century. The notion that children need to be protected – from work, hunger, poverty, and exploitation and abuse from adults – emerged at around the same time. As children were to be shielded from potential danger, so they were to eat food which, it was believed, was ideally suited to digestive systems more susceptible to upset and illness than those of adults.

But as scientists became interested in the relationship between food and health – in nutrition, in other words – towards the end of the 1800s, paediatricians, demographers, and others concerned about high rates of child mortality during the early twentieth century began to look more closely at what children were being fed. For instance, in the 1920s and 1930s, scientists in Britain and the United States drew a connection between the consumption of unhealthy or diseased food – particularly rotten milk – and high rates of diarrhoea, then almost always fatal, among children in these countries.

They were also interested in what should constitute a healthy diet for a child. As childhood became increasingly medicalised in the early twentieth century – as pregnancy, infancy, and childhood became seen as periods of development which should be overseen and monitored by medical professionals – so children’s diets became the purview of doctors as well. As RJ Blackman, the Honorary Surgeon to the Viceroy of India (no, me neither), wrote in 1925:

Food, though it is no panacea for the multitudinous ills of mankind, can do much, both to make or mar the human body. This is particularly so with the young growing child. All the material from which his body is developed has to come from the food he eats. Seeing that he doubles or trebles his weight in the first year of life, and increases it twenty-fold by the time he reaches adult stature, it will be seen that food has much to accomplish. Naturally, if the food be poor, the growth and physique will be poor; and if good, the results will be good.

Informed by recent research into dietetics, doctors advised parents to feed their children varied diets which included as much fresh, vitamin-containing produce as possible. In a popular guide to feeding young children, The Nursery Cook Book (1929), the former nurse Mrs K. Jameson noted:

Many years ago, I knew a child who was taken ill at the age of eight years, and it was thought that one of her lungs was affected. She was taken to a children’s specialist in London. He could find nothing radically wrong, but wrote out a diet sheet. By following this…the child became well in a month or two. This shows how greatly the health is influenced by diet.

This diet, she believed, should be designed along scientific principles:

Since starting to write this book I have come across an excellent book on vitamins called ‘Food and Health’ (Professor Plimmer), and I have found it very helpful. I have endeavoured to arrange the meals to contain the necessary vitamins, as shown in the diagram of ‘A Square Meal’ at the beginning of the book.

Indeed, she went on to explain that children who were properly fed would never need medicine.

In 1925, advising mothers on how to wean their babies in the periodical Child Welfare, Dr J. Alexander Mitchell, the Secretary for Public Health in the Union of South Africa, counselled against boiling foodstuffs for too long as it ‘destroys most of the vitamins.’ He argued that children’s diets ‘should include a good proportion of proteins or fleshy foods and fats’, as well as plenty of fruit, fresh vegetables, milk, and ‘porridge…eggs, meat, juice, soups’.

What is so striking about the diets described by Mitchell, Jameson, and others is how similar they were to what adults would have eaten. Children were to eat the same as their parents, but in smaller quantities and in different proportions. For example, some doctors counselled against children being allowed coffee, while others believed that they should limit their intake of rich foods.

So what is the origin of the idea that children should be cajoled into eating healthily by making food ‘fun’? Mrs Jameson’s recipes might have cute names – she calls a baked apple ‘Mr Brownie with his coat on’ – but they’re the same food as would be served to adults. Now, our idea of ‘children’s food’ differs from that of the 1920s and 1930s. When we think of children’s food, we imagine sweets, soft white sandwich bread, pizza, hotdogs, and brightly coloured and oddly shaped foodstuffs designed to appeal to children.

As Steven Mintz argues in his excellent history of American childhood, Huck’s Raft (2004), the 1950s and 1960s were child-oriented decades. Not only were there more children as a result of the post-war baby boom, but with the growing prosperity of post-war America, more money was spent on children than ever before. Families tended to be smaller, and increasing pocket money transformed children into mini-consumers.

Children either bought, or had their parents buy for them, a range of consumer goods aimed at them: from clothes and toys, to ‘child-oriented convenience foods… Sugar Frosted Flakes (introduced in 1951), Sugar Smacks (in 1953), Tater Tots (in 1958), and Jiffy Pop, the stovetop popcorn (also in 1958).’

The same period witnessed a shift in attitudes towards childrearing. Families became increasingly child-centred, with meals and routines designed around the needs of children, rather than parents. In many ways, this was a reaction against the orthodoxies of the pre-war period, which tended to emphasise raising children to be obedient, well-behaved, and self-disciplined.

So the definition of children’s food changed again. For the parents of Baby Boomers, food was made to be appealing to children. Fussiness was to be accommodated and negotiated, rather than ignored. And children’s desire for food products advertised on television was to be indulged.

I am exaggerating to make a point – in the US and the UK children during the 1960s and 1970s certainly ate less junk than they do now, and this new understanding of children’s food emerged in different ways and at different times in other parts of the world – but this change represented a bonanza for the burgeoning food industry. Although the industry’s attempts to advertise to children are coming under greater scrutiny and regulation (and rightly so), it has a vested interest in encouraging children and their parents to believe that this is what good food for children looks like.

I think that it’s partly this shift in thinking about children’s relationship with food – that they should eat only what they find appealing, and that children will only eat food which is ‘fun’, brightly coloured, oddly shaped, and not particularly tasty – that allowed such poor school food to be tolerated for so long in Britain.

Martha’s blog is a powerful corrective to this: she, her classmates, and contributors all have strong opinions about what they eat, and they like a huge variety of food – some of it sweets, but most of it pretty healthy. The irony is that in – apparently – pandering to what children are supposed to like, politicians and policy makers seem to find it fairly difficult to listen to what a child actually has to say. If we’re to persuade children to eat well, then not only should we encourage them to talk and to think about food, but we need to listen to what they have to say about it.

Further Reading

Linda Bryder, A Voice for Mothers: The Plunket Society and Infant Welfare, 1907-2000 (Auckland: Auckland University Press, 2003).

Deborah Dwork, War is Good for Babies and Other Young Children: A History of the Infant and Child Welfare Movement in England 1898-1918 (London and New York: Tavistock Publications, 1987).

Steven Mintz, Huck’s Raft: A History of American Childhood (Cambridge, Mass.: Belknap Press, 2004).


Quackers

Patient and loyal readers – immense apologies for the absence of this week’s blog post. I have just emerged from this semester’s marking hell, so normal service will resume this weekend. (For colleagues currently contemplating the point of their existence and wondering why they didn’t become professional tennis players, I give you this, and this. They helped immeasurably.)

This is, then, just a short post to point you in the direction of a recent ruling by South Africa’s Advertising Standards Authority. Last year, the respected NGO Equal Education lodged an official complaint with the ASA about radio advertisements for a nutritional supplement called Smart Kids Brain Boost, developed by the quack nutritionist Patrick Holford. In the ads, Holford claimed that the product would improve ‘mental vitality’ (whatever that is) and better children’s performance at school.

As a submission from Harris Steinman demonstrates – in exhaustive detail – Holford’s claims are based on a clutch of peer-reviewed articles (good) whose research is outdated (not good) and which occasionally contradict him (really bad). This is not the first time that the ASA has ruled against Holford – an earlier complaint lodged by Steinman (who’s a real doctor) against the Mood Food nutritional supplement was upheld. Steinman proved that it was unlikely that Holford’s pills would make people feel happier or more motivated.

This most recent decision by the ASA pleases me enormously. Not only does it strike a blow against the nutrition industry which peddles the misinformation that all people need to take supplements in order to be healthy and happy, but it prevents a very wealthy man from benefitting from parents’ credulity. South Africa’s education system is dysfunctional, and it is likely that pupils’ poor performance is linked partly to bad diets. But these diets will not be improved by taking magic tablets. Only by alleviating poverty, and ensuring that parents are able to afford to buy the fruit, vegetables, protein, and carbohydrates that constitute healthy diets, will children’s performance at school improve.

This post owes a great deal to the excellent work done by the magnificently-named Quackdown! Do check it out.

Feed the Children

There has been some fuss recently around the publication of Charles Murray’s new book, Coming Apart: The State of White America, 1960-2010. Murray, who co-authored The Bell Curve: Intelligence and Class Structure in American Life in 1994, has a reputation for annoying left-leaning academics and public policy makers. The Bell Curve was accused of being blind to cultural and social influences on learning and childhood development, and his most recent polemic has been criticised for its rose-tinted view of the American white working class during the mid-twentieth century.

One of the best criticisms of the book which I’ve come across is Nell Irvin Painter’s article for the New York Times, ‘When Poverty was White.’ Painter, whose The History of White People (2010) I urge you to read, makes the point that America has a well-hidden and very recent history of white poverty. She accuses Murray of ‘historical blindness’ caused by his

narrow focus on the cultural and policy changes of the 1960s as the root of white America’s decline. The story of white poverty…is much longer and more complex than he and his admirers realise or want to admit.

Her point is that to understand the nature of poverty – why some families seem incapable of escaping it, why certain members of society seem to be particularly susceptible to it – we need to historicise it.

There is a similar argument to be made about white poverty in South Africa. One of the reasons why photographs of poor whites in South Africa draw such attention is because South Africans tend to think of poverty as being black. Poor whites are a strange anomaly in the economic and racial politics of post-1994 South Africa.

But ‘poor whiteism’ as a social and political phenomenon only disappeared during the economic boom of the early 1960s. From at least the 1920s, South African governments were preoccupied with the ‘poor white problem’ – with the existence of a substantial group of people who, as the popular author Sarah Gertrude Millin wrote in 1926, could not support themselves ‘according to a European standard of civilisation’ and who could not ‘keep clear the line of demarcation between black and white.’

South Africa’s earliest soup kitchens were not for black, but, rather, for white children. The first child welfare organisations aimed their work not at black families, but, rather, at white families who were poor. South Africa’s attempts to introduce compulsory elementary education in the 1910s and 1920s pertained only to white, not to black, children. This isn’t to suggest that black poverty was somehow less acute or widespread than white poverty. Far from it. State concern about poor whiteism was born of a eugenicist belief that, as Millin suggested, white poverty signalled a decline in white power.

The first attempts to eradicate white poverty were directed at families and children. Although we tend to associate the poor white problem with the 1920s and 1930s, there had been a large group of impoverished white farmers in the country’s rural interior since at least the middle of the nineteenth century. By the 1880s and 1890s, colonial politicians, and particularly those in the Cape, were increasingly anxious about this class of whites. This was partly because the numbers of impoverished whites – both in rural and urban areas – had increased during the region’s industrialisation after the discovery of diamonds and gold, but it was also the result of decades of poor education which had produced at least two generations of unemployable whites.

Both in South Africa and in the rest of the world, poverty was racialised during the 1880s and 1890s. The existence of unemployed and unemployable poor whites challenged the association of ‘natural’ supremacy and the exercise of power with whiteness. The term ‘poor white’ no longer simply referred to white people who lived in poverty, but, rather, invoked a set of fears around racial mixing and white superiority.

Impoverished white adults were believed to be beyond saving, as one Cape industrialist argued in 1895: ‘the adults are irreclaimable. You must let them die off, and teach the young ones to work.’ The Cape government poured money into schools for poor white children. In 1905, education became compulsory for all white children in the Cape between the ages of seven and fourteen. Politicians also passed legislation to allow these children to be removed from parents deemed to be unable to care for them appropriately. After the declaration of the Union of South Africa in 1910, government spending on education grew from 14 per cent of the national budget to 28 per cent in 1930.

But the problem did not go away. Industrialisation and economic expansion, as well as the effects of the Great War, two depressions, and urbanisation in the 1920s and 1930s, increased the numbers of impoverished whites. By the end of the 1920s, it was estimated that out of a total of 1,800,000 whites, 300,000 were ‘very poor’, and nearly all of these were Afrikaans-speaking. The Carnegie Commission of Investigation on the Poor White Question (1929-1932) concluded that an inability to adapt to a changing economic climate, outdated farming methods, and poor education were to blame for the existence of such a large population of impoverished whites.

In 1929, the South African government devoted 13 per cent of its budget to the eradication of white poverty. Much of this went to education, social welfare, and housing. The introduction of more stringent segregationist legislation progressively disenfranchised blacks, and reserved skilled work for whites.

There was also a shift in emphasis in how child welfare societies – the numbers of which had mushroomed during the 1920s – dealt with poor white children. No longer did they only work to ensure that white children were sent to school and adequately cared for by their parents, but they began to focus on how these children were fed.

I’m still trying to account for this new concern about the effects of malnutrition on white children. I think that it was due largely to an international scientific debate about the significance of nutrition in raising both physically and intellectually strong children. Louis Leipoldt – Medical Inspector for Schools in the Transvaal, food writer, Buddhist, poet, and Afrikaner culture broker – was particularly aware of this new thinking about childhood development and nutrition, and wrote about it extensively in publications on child health and welfare in South Africa.

In a report of a survey of the health of children in the Cape published in 1922, the province’s Medical Inspector of Schools, Elsie Chubb, argued that malnutrition was widespread in the Cape’s schools for white children. In most schools, around 10 per cent of the pupils were malnourished. In one school in the rural Karoo, 79 per cent of children were found to be severely malnourished.

Chubb recognised that malnutrition was not purely the result of an inadequate supply of food – although it was certainly the case that many poor parents simply couldn’t afford to buy enough food to feed their children – but of poor diet. Some child welfare volunteers wrote of children sent to school on coffee and biltong, who returned home at the end of the day to a basic supper of maize meal and cheap meat. Chubb wrote that far too many children were fed on a diet heavy in carbohydrates and animal protein. Children did not eat enough fresh fruit, vegetables, or milk. She recommended that feeding schemes be established to supplement children’s diets with these foodstuffs.

Helen Murray, the headmistress of a girls’ school in Graaff-Reinet and an active member of the town’s child welfare society, explained the contemporary understanding of the link between malnutrition and poor whiteism particularly well in 1925:

In the winter of 1918 our schools had regular medical inspection for the first time. The doctor who inspected told some of us that he had found some fifty children in our poor school suffering from malnutrition and spoke strongly of the results of such a condition. The children were not in danger of dying of starvation, they had dry bread and black coffee enough to prevent that, but they were in danger of growing up to be ‘poor whites’ of the most hopeless type. The body insufficiently nourished during the years of growth would develop physically weak, and the brain as a result would be unfit for real mental effort. The child suffering from years of wrong feeding could not be expected to grow into the strong, healthy, clearheaded man or woman our country needs today, and will need ten and twenty years hence. To see that the underfed child is well fed is not a matter of charity, but must be undertaken in self-defence.

As a result of the inspection, the child welfare society found a room in the town where between fifty and ninety children could be provided with ‘a good, hot meal’ on every school day:

We had been told that these children could be saved from growing up weaklings if they could have one good meal of fat meat, vegetables or fruit, on every school day of the year….

We have the satisfaction of knowing that there has been a marked improvement in the health of the children and of hearing from a Medical Inspector that she has found the condition of the children here better than in many other schools of the same class.

Murray’s experience in Graaff-Reinet was not unique. As child welfare societies were established in the towns and villages of South Africa’s vast interior, their first work was usually to establish soup kitchens, either in schools or in central locations where schoolchildren could be sent before the school day – for porridge and milk – and at lunchtime, for soup or a more substantial meal, depending on the resources of the local society.

In Pietersburg (now Polokwane), to eliminate the stigma of free meals for poor children, all white children were provided with a mug of soup at lunchtime. Better-off parents paid for the soup, thus subsidising those children whose parents could not contribute. In Reitz, local farmers, butchers, and grocers donated meat and vegetables to the soup kitchen, and in Oudtshoorn children were encouraged to bring a contribution – onions, carrots, or cabbage – to their daily meal.

The National Council for Child Welfare, the umbrella body established in 1924 which oversaw the activities of local child welfare societies, liked to emphasise the fact that it was concerned for the welfare of all children, regardless of class or race. Some welfare societies, and particularly those in areas which had large ‘locations’ for black residents, did establish clinics and crèches for black children. But most of the NCCW’s work was aimed at white children in the 1920s and 1930s, and the same was true of the South African state. By the 1920s, most municipalities in towns and cities made free milk available to poor white mothers with babies and very young children.

Increasing state involvement in child welfare, alongside the work of independent societies, had a significant impact on the health of white children in South Africa during the early twentieth century. But it was only because of the growing prosperity and better education of the majority of white South Africans after World War II that white poverty and malnutrition were gradually eradicated in the 1950s and 1960s.

By historicising poverty – by understanding that white prosperity in South Africa is a relatively recent phenomenon – we can understand it as a phenomenon which is not only eradicable, but which is also the product of a range of social, economic, and political forces. As South African governments and welfare organisations were able to reduce white poverty and malnutrition dramatically during the early twentieth century, so it is possible for contemporary governments to do the same.

But charity and soup kitchens were not the sole cause of the disappearance of white poverty and malnutrition. Jobs, education, and better living conditions were as – if not more – significant in ensuring that white children no longer went hungry.

Further Reading

Texts cited here:

SE Duff, ‘“Education for Every Son and Daughter of South Africa”: Race, Class, and the Compulsory Education Debate in the Cape Colony,’ in Citizenship, Modernisation, and Nationhood: The Cultural Role of Mass Education, 1870-1930, eds. Lawrence Brockliss and Nicola Sheldon (Basingstoke: Palgrave Macmillan, 2011).

E.G. Malherbe, Education in South Africa, vol. I (Cape Town: Juta, 1925).

E.G. Malherbe, Education in South Africa, vol. II (Cape Town: Juta, 1977).

E.G. Malherbe, Report of the Carnegie Commission of Investigation on the Poor White Question in South Africa, vol. III (Stellenbosch: Pro Ecclesia-Drukkery, 1932).

Sarah Gertrude Millin, The South Africans (London: Constable, 1926).

Jennifer Muirhead, ‘“The children of today make the nation of tomorrow”: A Social History of Child Welfare in Twentieth Century South Africa’ (MA thesis, Stellenbosch University, 2012).

Other sources:

Vivian Bickford-Smith, Ethnic Pride and Racial Prejudice in Victorian Cape Town (Johannesburg: Wits University Press, 1995).

Colin Bundy, ‘Vagabond Hollanders and Runaway Englishmen: White Poverty in the Cape Before Poor Whiteism,’ in Putting a Plough to the Ground: Accumulation and Dispossession in Rural South Africa 1880-1930, eds. William Beinart, Peter Delius, and Stanley Trapido (Johannesburg: Ravan Press, 1986).

J.M. Coetzee, White Writing: On the Culture of Letters in South Africa (New Haven: Yale University Press, 1988).

Saul Dubow, A Commonwealth of Knowledge: Science, Sensibility and White South Africa 1820-2000 (Oxford: Oxford University Press, 2006).

Marijke du Toit, ‘Women, Welfare and the Nurturing of Afrikaner Nationalism: A Social History of the Afrikaanse Christelike Vroue Vereniging, c.1870-1939’ (D.Phil. thesis, University of Cape Town, 1996).

Hermann Giliomee, The Afrikaners: Biography of a People (Cape Town: Tafelberg, 2003).

Isabel Hofmeyr, ‘Building a Nation from Words: Afrikaans Language, Literature and Ethnic Identity, 1902-1924,’ in The Politics of Race, Class and Nationalism in Twentieth-Century South Africa, eds. Shula Marks and Stanley Trapido (London: Longman, 1987).
