2.5 Food and nutrition

Prakash S. Shetty

Introduction
Intrauterine growth retardation and low birth weight
Protein–energy malnutrition
Diarrhoeal disease and malnutrition
Assessment of undernutrition in children in the community
Cognitive and mental development and malnutrition
Iron deficiency
Iodine deficiency
Vitamin A deficiency
Adult undernutrition
Diet and chronic non-communicable diseases
Diet and cardiovascular diseases

Coronary heart disease

Hypertension and stroke
Diet and cancers
Diet, lifestyles, and obesity
Non-insulin-dependent diabetes mellitus
Diet and osteoporosis
Diet and dental caries
Diet and non-cancerous conditions of the large bowel
Emerging food and nutrition issues of public health concern

Food safety

Food, the bovine spongiform encephalopathy epidemic, and Creutzfeldt–Jakob disease

Genetically modified foods

Food labelling

Functional foods

Emerging epidemic of diet-related chronic diseases and obesity in developing societies
Food and nutrition in the prevention of diseases of public health importance

The development of food-based dietary guidelines
Chapter References

Introduction
The general notion that the study of nutrition is merely aimed at providing a balanced diet for the populace is no longer adequate. Concepts of what constitutes a ‘balanced diet’ have changed markedly, and the issue is no longer simply one of achieving the ‘recommended levels’ of nutrients in the diet. Nutrition is a complex subject that biologically relates to nutrient–gene interactions and the induction of such diseases as diabetes mellitus, coronary heart disease (CHD), and cancers, and even to such conditions as asthma and impaired brain development. Nutrition also deals with the social, economic, and cultural issues related to making the right food choices and to purchasing and eating the ‘correct’ types of food in the ‘appropriate’ quantities, as well as the factors that determine this aspect of essential daily human activity and behaviour. Thus, just as our gradual acquisition of the knowledge of microbiology influenced our understanding of infectious diseases, which in turn led to preventive measures for the population at large, so the historical advances in the field of nutrition have led to a more coherent understanding of the patterns and prevention of food- and nutrition-related diseases of public health importance throughout the world.
Fluctuations in disease rates depend on environmental factors, among which food and nutrition are primary determinants. Nutrition is now recognized as a major determinant of a wide range of diseases of public health importance worldwide. In the developing world, numerous deficiency diseases persist, especially in the rural areas, which are the result of essential nutrient deficiencies in the daily diet. These now coexist with the increasing presence of diet-related chronic diseases in adults of the kind typically seen only in industrialized, developed countries. Significant changes in the patterns of disease and the causes of premature death within a population have little to do with advances in curative medicine and therapeutics. The changes in health depend largely on environmental changes, which include changes in social and economic conditions, the implementation of immunization programmes, improvements in women’s social and educational status within society, and changes in agriculture, food systems, and the availability of food. These changes are, in turn, influenced by governmental regulations and global trade that affect agricultural practices and the industrial sector and thus affect individual lifestyles. Short-term national policies that seek to maximize economic activity and promote international trade and foreign exchange earnings often ignore the impact of these measures on health. Most of these environmental influences operate through changes in the provision of and access to hygienic and nutritious food; the availability of uncontaminated water; clean housing; sanitary surroundings; and lack of exposure to environmental toxins. Economic development is thus normally accompanied by improvements in the quantity and quality of a nation’s food supply. An increasingly recognized feature of these changes involves a nutritionally mediated improvement in the body’s resistance to infections, and the mutual interdependence of the immune and nutritional state of the population probably explains the remarkable gains in public health in Britain in the twentieth century (McKeown 1976).
These quantitative and qualitative changes in our food patterns, which lead to such dramatic changes in life expectancy, often also result in the problems of diet-related non-communicable diseases, but these problems are not inevitable. The diet-related chronic diseases occur typically in middle and later adult life and can, as in central and eastern Europe, undermine the gains in life expectancy. These chronic diseases of the developed populations are traditionally regarded as manifestations of excess intake and self-indulgence in an ‘affluent’ society. In practice, some of these chronic diseases may be compounded by relatively deficient intakes of some nutrients. Thus, a ‘balanced’ diet has to be viewed in a more sophisticated way when considering the disease patterns of adult life.
Nutrition has re-emerged as being of fundamental importance in public health after having been in the doldrums for many decades. Nutritional issues were seen in industrialized and developed societies as relating to deficiency diseases that were conquered in the early part of the twentieth century while continuing to persist in poor, developing countries. Now, however, food and nutrition are recognized as often the principal environmental component affecting a wide range of diseases of public health importance throughout the world. These diseases reflect the cumulative impact of subtle pathophysiological processes developing over a lifetime. These interactions within a society are often seen as reflecting individual genetic susceptibility, but the different disease patterns of groups living on different diets are manifestly a societal reflection of the impact of dietary factors. Nutrient–gene interactions are evident, for example, in obesity, alcoholism, cardiovascular disease, non-insulin-dependent diabetes mellitus (NIDDM), many gastrointestinal disorders, neural-tube defects, and the most prevalent cancers. As clinical studies and molecular epidemiology unravel the basis for genetic susceptibility to these disorders, physicians interested in metabolic medicine are eventually forced to look for the gene inducers or repressors, which then prove to be of dietary or environmental origin. Societal features that determine human behaviour and economic well-being, as well as climate, tradition, culture, and the role of women, all affect food patterns and dietary practices. These are the features that need to be recognized when considering public health rather than simply the epidemiological aspects of dietary disease. This chapter seeks to take a global view of diet in public health terms, and so nutritional deficiency disorders as well as other diet-related diseases of public health importance will be considered. This is particularly important because deficiency diseases are rampant in several parts of the world and yet co-exist in the same country with chronic adult diseases usually found in affluent, developed societies. Vitamin deficiencies continue to manifest themselves in refugee camps, and the threat of starvation and dietary inadequacy resulting in malnutrition rapidly emerges in wartime and during conflict situations.
Intrauterine growth retardation and low birth weight
Intrauterine growth retardation (IUGR) resulting in low birth weights constitutes a major public health problem in developing countries. The recent WHO Technical Report (1995) recommended that the 10th percentile of a sex-specific, birth weight-for-gestational-age distribution be designated for the classification of small-for-gestational-age (SGA) infants. It is difficult to establish with certainty in all cases whether the reduced weight at birth is the result of in utero growth restriction. However, in populations in developing countries with a high incidence of SGA the likelihood is that this is largely the result of IUGR. Hence in this context, IUGR is defined as infants born at term (i.e. ≥ 37 weeks of gestation) with a low birth weight (i.e. < 2500 g). The causes of IUGR are multiple and involve many different factors. The most important determinant of infant weight at birth is the maternal environment, of which nutrition is the single most important factor. Poor maternal nutritional status at conception and inadequate maternal nutrition during pregnancy can result in IUGR. Short maternal stature, low maternal body weight and body mass index (BMI) at conception, and inadequate weight gain during pregnancy are factors that are associated with IUGR. In developing countries IUGR is closely related to conditions of poverty and chronic undernutrition of economically disadvantaged mothers. According to current WHO estimates (de Onis et al. 1998), 16.4 per cent of infants born in developing countries have birth weights below 2500 g, of which 11.0 per cent are low birth weight due to IUGR. South Asia (i.e. the countries of the Indian subcontinent) has the highest incidence of low birth weights (< 2500 g) at 28.3 per cent, of which 20.9 per cent is attributable to IUGR.
Low birth weights and IUGR are associated with increased morbidity and mortality in infancy. It is estimated that term infants weighing less than 2500 g at birth have a fourfold increased risk of neonatal death compared with infants weighing between 2500 and 3000 g, and a 10 times higher risk than those weighing between 3000 and 3500 g. The risk of morbidity and mortality in later infancy is also considerably higher in these low birth weight infants. In developing countries this is largely due to the increased risk of diarrhoeal disease and respiratory infections. Barker’s studies (1995) have consistently demonstrated a relationship between low birth weight and later adult disease, pointing to an important aetiological role for fetal undernutrition, which amplifies the effect of risk factors in later life in the development of chronic diseases such as heart disease and diabetes mellitus in adult life.
Protein–energy malnutrition
The clinical conditions of childhood malnutrition are widely recognized as kwashiorkor, marasmus, and the mixed condition of marasmic kwashiorkor. These severe forms of malnutrition are, however, the tip of an iceberg of widespread mild and moderate childhood undernutrition within the community. The dominance of the three forms varies from country to country, and this seems to reflect the weaning practices and staple foods of the area as well as the likelihood of recurrent infections. Kwashiorkor presents a variety of clinical features depending on the region of the world. Children are characteristically oedematous with a moon face, a scaling crazy-pavement pigmentation, and ulceration of the skin with sparse thin reddish hair. Clinically, they are morose and lethargic, and they have a large liver and often appreciable amounts of truncal and limb fat, which obscures an atrophied muscle mass. The condition is associated with high mortality, is often accompanied by infections, and was originally classified as the outcome of selective protein deficiency (Waterlow 1992). It can occur in epidemic form once a measles outbreak affects a society living on a low-protein staple diet (e.g. cassava, yam, plantain, or banana), but additional features may be the exposure to toxins such as aflatoxin. The impact of iron overload from the ready absorption of bacterial iron, leading to marked free-radical damage in a liver that is limited in free-radical scavengers derived from the diet (e.g. glutathione and vitamins E and C), also seems important (Briend and Golden 1993). Measles infection can not only induce diarrhoea with protein loss, but further protein can be lost through the skin as amino acids or transferred to boost lymphocyte and antibody production. Thus, a relative deficiency of amino acids for glutathione or protein production may still be important, and the immune and inflammatory capacity of the malnourished child is impaired.
The marasmic form of protein–energy malnutrition is that of a wizened, shrivelled, growth-retarded, and skeletal child who is often alert and with normal-coloured but shrivelled skin. Mortality rates in marasmus are lower than in kwashiorkor but if overfed early with too high a sodium intake the marasmic child may become oedematous and simulate the mixed syndrome of marasmic kwashiorkor.
Diarrhoeal disease and malnutrition
Apart from these classic and extreme conditions there are millions of children with chronic diarrhoea who fail to respond completely to the usual treatment for acute gastroenteritis. The 5 million or more deaths per year from diarrhoeal diseases have been reduced by about half, but it is now becoming clear that the residual problem is essentially nutritional in origin. A vicious circle is established whereby an intestinal infection in a young child leads to anorexia, intestinal damage with malabsorption, and secretory diarrhoea, which then does not remit because the poor nutritional state of the child maintains the immunological deficit and this impairs the recovery of the intestine. These malnourished children with diarrhoea typically have more pronounced potassium depletion and are very sensitive to sodium retention.
Traditionally, children, particularly in Africa and Asia, fail to thrive once they have succumbed to an infectious disease, and they then languish, responding poorly to standard therapy and failing to grow even when presented with supposedly adequate amounts of food. New treatment schemes are now being developed based on the recognition that intestinal bacterial overgrowth is common and that for recovery to take place it is necessary to provide the full complement of minerals and vitamins (Golden et al. 1995). Once the child has re-equilibrated its fluid and electrolyte balance, the need is not only for the full supply of minerals and vitamins but also for adequate amounts of readily swallowed energy-dense feeds. This is often best met by adding oil, which is energy dense, to the diet. The manufacture of oil-rich products for rehabilitating children in Third World countries is hampered by their tendency to oxidize, but locally produced food oils can make an extremely valuable contribution to increasing the energy density of the diet. New nutrient-to-energy ratios for the supplementary feeding of children and adults have recently been devised (Golden et al. 1995), and preliminary observations suggest that these new formulations can boost growth rates. Shifting the strategy of rehabilitation towards caring for malnourished children at home rather than in rehabilitation centres clearly shows the cost–benefit advantages of home care to both mother and child (Khanum et al. 1994).
Assessment of undernutrition in children in the community
Undernutrition in childhood is characterized by growth failure, resulting in a body weight that is less than ideal for the child’s age. Hence, in children, assessment of growth has been the single most important measurement that best defines their health and nutritional status. Measures of height and weight are, therefore, the commonly used indicators of the nutritional status of the child. Classification of childhood malnutrition based on height, weight, and age thus continues to be the backbone of nutritional assessment methods for both population and individual assessments. The set of guidelines for expressing children’s nutritional status in a community has recently been revised by the World Health Organization (WHO 1995). There is increasing evidence that children throughout the world when well fed and free of infection tend to grow at similar rates whatever their ethnic or racial origin, and healthy children everywhere can, when fed appropriately, be expected to grow on average along the 50th percentile of a reference population’s weight and height for age. Table 1 summarizes the methods for assessing a child’s nutritional status using anthropometry by expressing both height and weight as standard deviations or Z-scores from the median reference value for the child’s age. Thus a normal range corresponds to the third and 97th percentile (i.e. ±2 SD or ±2 Z-scores). By expressing data in this way, using relatively simple computer programs, it is possible to express the weight and height data for all children across a wide age range in similar Z-score units and thereby produce a readily understandable comparison of the extent of growth retardation at different ages and in different societies.
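The calculation itself is straightforward, as the minimal sketch below illustrates; the reference median and standard deviation used here are hypothetical placeholder values, not actual WHO reference data, which must be looked up by the child’s age and sex.

```python
# Sketch of the Z-score calculation described above. The reference
# median and SD are hypothetical placeholders, not WHO reference data.

def z_score(measurement: float, ref_median: float, ref_sd: float) -> float:
    """Express a child's measurement as SDs from the reference median."""
    return (measurement - ref_median) / ref_sd

def classify(z: float) -> str:
    """Apply the conventional +/-2 Z-score 'normal range' cut-offs."""
    if z < -2:
        return "below the normal range (possible undernutrition)"
    if z > 2:
        return "above the normal range"
    return "within the normal range"

# Example: a 24-month-old child weighing 9.6 kg, against hypothetical
# reference values (median 11.5 kg, SD 1.2 kg) for that age and sex.
z = z_score(9.6, ref_median=11.5, ref_sd=1.2)
print(f"weight-for-age Z-score = {z:.1f}: {classify(z)}")
```

The same computation applied to height-for-age and weight-for-height yields the stunting and wasting indicators discussed below.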

Table 1 Diagnostic criteria for undernutrition in children and adults

A deficit in height is referred to as ‘stunting’, whereas a deficit in weight-for-height is referred to as ‘wasting’. These two measures are subsumed in the less helpful original designation of a child’s failure to grow in terms of weight-for-age. Clearly this measure includes both the wasting and stunting features but fails to distinguish the important differences between the two. Wasting can occur on a short-term basis in response to illness with anorexia or malabsorption, or because the child goes hungry for several weeks. Therefore changes in weight-for-height reflect the impact of short-term changes in nutritional state. Growth in height, however, is much more of a cumulative index of long-term health because growth in length or height stops as soon as a child develops an infection, and subsequent growth may be slow during the recovery period. Increasing evidence suggests that a child normally grows in spurts, for reasons that remain obscure. Energy intake is not the crucial determinant of height that it is popularly considered to be: the energy cost of growth and weight gain is only 2 to 5 per cent of total energy intake once the child is 1 year of age. Impairment or slowing of height growth classically occurs in many communities at the time of weaning and up to about 2 years of age, and affects a large proportion of children in many developing countries. Once children have failed to maintain their proper growth trajectory they tend to remain on the lower percentile and ‘track’ at this low level for many years.
Surveys throughout the world (Table 2) demonstrate the very high prevalence of stunting in Asian children, who begin to fall behind the reference height percentiles by 6 months of age. Their continued stunting may reflect not only their lower nutrient intakes from local diets but also the constant impact of infection, mostly from infected water and food. The dramatic effect of ridding a child of a gastrointestinal infection is shown by the remarkable spurt in height when Jamaican children with a Trichuris worm infection are dewormed (Callender et al. 1993). Within 2 weeks of treatment, sensitive measures of skeletal growth demonstrate a surge in long bone formation. Thus, the stunting of children may prove to be the result of a complex interaction between the quality and quantity of the child’s food intake and the burden of chronic low-grade intestinal, respiratory, and other infections. The persistence of infections is in part a reflection of the immune deficit in malnutrition, and deficiencies of many vitamins (e.g. vitamin A) and minerals (e.g. zinc) are clearly associated, experimentally and clinically, with an excess susceptibility to infection. The infection itself, however, places increased demands on nutrient requirements and often leads to anorexia (Stephenson et al. 1993); thus, mechanistically, nutrition and infection are intertwined, as was demonstrated several decades ago. Wasting and stunting are thus the two types of nutritionally mediated growth deficit in childhood. Wasting, indicated by a low weight-for-height, can occur at any age. Stunting is seen when a child’s height is low for their age and reflects poor nutrition during the early growth period, often the result of infectious episodes during this period. Physical stunting is associated with poor mental development and socio-economic deprivation. Stunting is more common in South Asia and seems to be related to the high incidence of low birth weight (i.e. < 2500 g) in this region.

Table 2 Prevalence and numbers of children with malnutrition in developing countries of Latin America, Asia, and Africa

Cognitive and mental development and malnutrition
A largely neglected issue of immense societal and public health importance is that of brain development and learning in undernourished children in developing societies. Grantham-McGregor (1995) has demonstrated that children who are stunted and living in deprived circumstances in Jamaica develop poorly and have major deficits in intellectual and cognitive development and social behaviour. In a series of controlled studies she has shown that children’s scholastic ability in their teens can be strongly influenced by interventions in the second and third year of life. Children who are involved in simple parent-assisted play with the use of primitively constructed blocks, which can be assembled in a stack or sequence, learn early to improve their mental processing in non-verbal skills. By supplementing their diet with extra food rich in nutrients, the children’s mental development is again improved, but the combination of extra food and mental stimulation has so remarkable an effect in boosting mental development that the combined ‘therapy’ allows children almost to catch up with the mental development of those reared in a very advantageous environment. Furthermore, imposing this combined ‘therapy’ on children for a single year when they are 6 months to 3 years of age leads to a long-term gain discernible in children’s scholastic performance many years later.
The dependence of brain development on good nutrition has been recognized for decades, with iodine deficiency in its extreme form leading to fetal retardation and the syndrome of cretinism. Even if cretinism is avoided, postnatal iodine deficiency can lead to such a slowing of mental processing that lethargy results, with permanent impairment of mental development, because adequate nutrition is needed during the critical periods of brain development. Similarly, Pollitt (1991a, b) has now clearly demonstrated that iron deficiency can permanently handicap children at a crucial time in their development, even when the deficiency is not severe enough to produce demonstrable anaemia. Experimentally, however, iron is required for the synthesis of many cerebral enzymes, so a biochemical basis for selective nutritional effects on mental processing is clear. Thus, selective nutrient deficiencies can lead to impaired brain function; this raises the issue of whether food-deprived, stunted children in the Third World need a complete diet for rehabilitation or only two or three selected nutrients. Grantham-McGregor’s (1995) studies show that food that stimulates longitudinal bone growth also stimulates brain development, thus implying a more generic demand for a range of nutrients if mental function is to improve.
These studies of childhood nutrition and brain development have profound public health significance. Until now one has hesitated to infer that many groups in Third World societies are mentally less able as a result of nutritional insults. Yet 20 to 60 per cent of the children living in Third World countries are physically stunted. Anaemia in childhood is extremely common and iodine deficiency threatens 10 per cent of the world’s children. In this context, therefore, we need to recognize that the human capital of Third World societies is limited by inadequate provision for the upbringing of children. This then becomes of huge economic and public health significance as adults who are slow and less able to adapt may become a societal burden rather than an asset. These examples illustrate the importance of food and nutrition in the development of human capital in developing societies of the Third World.
Iron deficiency
Iron deficiency is probably the most common nutritional deficiency disorder in the world. The highest prevalence figures for iron deficiency are found in infants, children, teenagers, and women of childbearing age. Hence it is a major public health problem with adverse consequences especially for women of reproductive age and for young children. The predominant cause of iron deficiency worldwide is nutritional, the diet failing to provide for the body’s requirements of iron. In tropical countries, intestinal parasitosis exacerbates iron deficiency by increasing the loss of blood from the gastrointestinal tract. The increase in malaria in these countries further contributes to the anaemia. A low intake of iron and/or its poor absorption then fails to meet the enhanced demands for iron and anaemia results.
The consequences of iron deficiency are numerous as iron plays a central part in the transport of oxygen in the body and is also essential in many enzyme systems. Not only does iron deficiency affect neurotransmitter systems in the brain with changes in behaviour, such as attention, memory, and learning in infants and small children, but iron deficiency also negatively influences the normal defence systems against infection. T-lymphocyte function, phagocytosis, and the killing of bacteria by neutrophilic leucocytes are affected. In pregnant women iron deficiency contributes to maternal morbidity and mortality, and increases risk of fetal morbidity, mortality, and low birth weight (Viteri 1997). Long-standing iron deficiency in general terms also results in a reduction in physical working capacity and productivity of adults both in agricultural and industrial work situations. These functional impairments are economically important.
Iron deficiency disorders encompass a range of body iron depletion states. The least severe is diminished iron stores, diagnosed by decreased serum ferritin levels and not usually associated with adverse physiological consequences. Iron deficiency without anaemia is severe enough to affect the production of haemoglobin without haemoglobin levels falling below clinical criteria indicative of anaemia. This condition is characterized by decreased transferrin saturation levels and increased erythrocyte protoporphyrin. The major clinical manifestation of iron deficiency is iron deficiency anaemia (IDA). IDA seems to be a particular problem of Africa and South and Southeast Asia (Table 3), and the dominant cause is nutritional iron deficiency. Even in industrialized, developed countries the prevalence of iron deficiency anaemia varies between 2 and 8 per cent. In Africa, Asia, and South America, the trend in the availability of iron in the diet has been deteriorating, so it is not surprising that iron deficiency anaemia continues to be a massive public health problem throughout the world. The availability of iron in the diet for absorption is affected by both the form of iron and the nature of foods concurrently ingested. Iron exists in the diet in two forms: ‘haem iron’, which is found only in animal sources, is readily available for absorption and is not influenced by other constituents of the diet; ‘inorganic iron’ is not readily available and is strongly influenced by factors present in foods ingested at the same time. Both animal foods and ascorbic acid (vitamin C) promote the absorption of inorganic iron. Diets that are primarily cereal- and legume-based may contain much iron but, in the absence of co-factors such as ascorbic acid, they may provide only a low level of available iron. Concern about iron deficiency is an important nutritional reason for recommending the consumption of at least some meat, as well as foods with a substantial content of ascorbic acid, for populations who eat a predominantly cereal-based diet.

Table 3 Prevalence of anaemia and numbers affected (in millions) in different regions of the world

WHO is active in developing strategies to combat iron deficiency. These strategies include iron supplementation, iron fortification of certain foods, dietary modification to improve the bioavailability of dietary iron by modifying the composition of meals, and parasitic disease control. Iron and folate supplementation programmes for pregnant women are currently widely implemented in several countries; 49 countries have a universal preventive supplementation policy during pregnancy. Iron supplementation programmes aimed at pre-school or school-aged children are being carried out in 23 countries. Fortification of foods with iron is a preventive measure aimed at improving and sustaining adequate iron nutrition over the longer term. Many industrialized countries such as Canada, the United Kingdom, and the United States have fortified foods with iron, and more recently five large studies in developing countries have demonstrated the effectiveness of iron fortification of foods provided these programmes are based on careful planning and follow well established guidelines (Viteri 1997). Improvement in the supply, consumption, and bioavailability of iron in food is an important strategy to improve the iron status of populations. The bioavailability of iron in foods is influenced by the composition of the meal and by food preparation methods. The consumption of ascorbate-rich foods enhances iron absorption, while limiting the phytate content of the diet, which inhibits iron absorption, improves iron bioavailability. Malaria and intestinal parasites (especially hookworm) are important contributors to IDA in endemic areas. In populations where hookworm is prevalent, effective treatment of this helminth infection has reduced IDA in school-age children (Stoltzfus et al. 1997).
Iodine deficiency
The term ‘iodine deficiency disorder’ (IDD) refers to a complex of effects arising from iodine deficiency. The mountainous areas of the world are likely to be iodine deficient because rain leaches the iodine from the rocks and soils. The most severely deficient areas are the Himalayas, the Andes, the European Alps, and the vast mountainous regions of China. Although iodine deficiency is likely to occur in all those elevated regions subject to glaciation and high rainfall, with run-off into rivers, it also occurs in the flooded river valleys of eastern India, Bangladesh, and Burma. The Great Lakes basins of North America are also iodine deficient. Excessive intakes of goitrogens in food (due to the excessive consumption of cassava or the brassica group of vegetables) and in water (water-borne goitrogens in Latin America), as well as the deficiency of certain trace elements in the soil or food chain (such as selenium), may interfere with the uptake and metabolism of iodine in the body and can thus cause or amplify the effects of iodine deficiency.
The prevalence of IDD varies globally and at present is largely confined to developing countries, because public health initiatives such as iodization of salt have been introduced in the developed, industrialized Western world. Iodine deficiency and goitre are still prevalent, however, in central and eastern Europe. According to a recent WHO Report (1990), in developing countries alone about 1000 million (1 billion) people are at risk of IDD, of whom 200 million suffer from goitre, 5 million have gross cretinism with mental retardation, while another 15 million suffer from lesser degrees of mental defect. The estimated proportions of the population at risk of IDD, and recent trends in the different regions of the world, are summarized in Table 4.

Table 4 Proportion of total population at risk of iodine deficiency disorders by region in the 1990s

IDD in humans is predominantly the result of a primary deficiency of iodine in the diet. Both water and foods are sources of iodine, with marine fish being the richest source. Milk and meat are also rich sources of iodine. Fruits, legumes, and vegetables, as well as freshwater fish, are important additional sources. Plant foods are likely to show a reduced content of iodine if the iodine content of the soils in which they are grown is low. Goitrogens in the diet are of secondary importance as aetiological factors in IDD. More recently it has been shown that staple foods consumed largely by rural populations of developing societies, such as cassava, maize, sweet potatoes, and lima beans, contain cyanogenic glucosides that release the goitrogen thiocyanate. Cassava is now implicated as an important contributor to iodine deficiency and results in endemic goitre and cretinism in non-mountainous Zaire and in Sarawak in Malaysia. There is also increasing evidence that selenium deficiency in the soil can result in manifestations of IDD in the presence of modest iodine deficiency, as selenium is a trace element essential for thyroid metabolism. Selenium deficiency is now increasingly recognized as an aetiological factor in IDD in several regions of China.
Iodine is readily absorbed from the diet and is an essential element in the synthesis of thyroid hormones in the body. Thyroid hormones are essential for normal growth and development. Just prior to birth, the levels of the biologically active tri-iodothyronine (T3) increase and prepare the organism for the transition from intrauterine to extrauterine life. Failure to synthesize sufficient T3 as a result of iodine deficiency may be a factor in the stillbirths that occur as a part of the spectrum of IDD in humans. Thyroid hormone deficiency leads to severe retardation of growth and maturation of all organs. The brain is particularly susceptible to damage during the fetal and early postnatal periods. It is now confirmed that the thyroidal control of brain development is more important in the neonatal period than in fetal life, as early and optimal thyroid hormone treatment after birth can lead to substantial improvement in brain development. The spectrum of IDDs in humans, from the fetus to the adult, has been outlined by Hetzel (1987).
The public health initiatives for correcting iodine deficiency require the provision of adequate iodine to the individual. This has been achieved by one of several methods:
Iodization of salt has been the most favoured method and has greatly reduced the prevalence of IDDs in Switzerland, the United States, Yugoslavia, and New Zealand. Since its first successful introduction in Switzerland in the 1920s (Burgi et al. 1990), successful programmes have more recently been reported in Central and South America, in Europe (Finland), and in Asia (China and Taiwan). However, several developing countries have encountered problems with their salt iodization programmes because it is difficult to produce and maintain enough high-quality iodized salt for large populations, such as that of India. The extra cost of iodized salt and problems with its availability and distribution to remote regions can also be a problem. These issues may be compounded by cultural prejudices about the use of iodized salt and by the loss of iodine during cooking if salt is not added after cooking.
Iodized oil injections have been used to prevent goitre and cretinism in New Guinea (Pharoah and Connolly 1987). Iodized oil is suitable for mass programmes and can be administered alongside mass immunization programmes. These methods have been successful in China, Indonesia, and Nepal. The major problems with iodized oil injections are the cost, the initial discomfort, and the potential risk of transmitting hepatitis B and human immunodeficiency virus through the use of needles. The need for a primary health care team to inject the iodized oil can be a further disadvantage.
Iodized oil by mouth may be an effective alternative and has been tried as a single oral dose for children in South America and Burma. Oral iodized oil has also been tried in women, and primary health centres can readily administer this scheme. However, the effects of oral iodized oil seem to last only about half as long as those of a similar dose of injected iodized oil. Oral administration does not, however, suffer from the disadvantages of injections, so it is the preferred method for use in remote areas.
The IDDs are excellent examples of nutritional deficiency disorders of public health importance that can readily be abolished if mass community programmes are undertaken.
Vitamin A deficiency
Vitamin A deficiency leads to night blindness and xerosis (dryness) of the conjunctiva and cornea, disrupts the integrity of their surface, causes corneal clouding and ulceration, and may lead to blindness in children. Xerophthalmia continues to be a major cause of childhood blindness despite the intensive prevention programmes of the last two decades. It is a widespread problem, and the parts of the world most seriously affected include South and East Asia and many countries in Africa, Latin America, and the Near East (Fig. 1). The WHO (1991) estimated that between 6 and 7 million new cases of xerophthalmia occur every year, with about 1 in 10 of these children suffering corneal damage. Of these, 60 per cent are dead within a year, and of the survivors 25 per cent are totally blind, whereas 50 to 60 per cent are partially blind. It is thus estimated that 3 million children under the age of 10 years are blind from deficiency of vitamin A at any one time. More recent estimates of the prevalence of clinical vitamin A deficiency indicate a considerable improvement, with reductions of between 31 and 57 per cent in prevalence in the various regions of the world (UNICEF 1997).

Fig. 1 The geographical distribution of xerophthalmia (vitamin A deficiency) in 1987. (Source: WHO Study Group 1990.)

An additional 20 to 40 million people suffer from mild or subclinical deficiency of vitamin A, which is now recognized as having serious consequences for survival, since vitamin A deficiency (VAD) is known to decrease the child’s resistance to infections and increase mortality. Even before eye signs of VAD are detectable, changes in the surface linings of the gastrointestinal and respiratory tracts occur, along with changes in cell-mediated immunity, and these can increase the risk of morbidity and even mortality associated with infections in children. More recent evidence suggests that VAD may be associated with increased maternal morbidity and even mortality. Vitamin A is also now known to be involved in fetal development, haematopoiesis, spermatogenesis, appetite, and physical growth.
Vitamin A is the parent of a class of compounds called retinoids. Provitamin A carotenoids, chiefly β-carotene, are also included in the vitamin A family. Preformed vitamin A is chiefly found in dairy products such as milk, butter, cheese, egg yolk, in some fatty fish, and in the livers of farm animals and fish. Carotenes are generally abundant in yellow fruits (papayas, mangoes, apricots, peaches) and vegetables (carrots). Absorption of vitamin A is about 80 per cent complete in the presence of an adequate fat intake, while the absorption of carotenoids is highly bile salt dependent. Vitamin A (retinol and retinoic acid) plays a very important part in the body in cellular development and differentiation. Retinol also plays a vital part in normal vision, particularly by the rods in the retina. Thus, one of the earliest manifestations of vitamin A deficiency is night blindness.
There is now increasing evidence that vitamin A supplements in deficient populations can reduce morbidity, mortality, and blindness. Xerophthalmia has become less prevalent in recent years in hyperendemic areas such as Indonesia and India. Intervention strategies that may have contributed to this include periodic megadose vitamin A supplementation either in the form of capsules, syrup, or as an injected dose. However, this method of intervention should be intended only as a short-term measure to save sight and lives on a large scale (West and Sommer 1985). The fortification of dietary items that are universally consumed, such as sugar in Central America (Arroyave et al. 1981) and monosodium glutamate in Indonesia (Muhilal et al. 1988), has had a favourable impact on the vitamin A status of whole populations. The problems with food fortification are essentially logistical and technological. Food supplies from different regions of the world show limited vitamin A availability, but the problem is exacerbated by a tendency to withhold vegetables and fruits from children and from pregnant and lactating women for cultural and other reasons. In Asia, there is a particular problem because the average availability of vitamin A is less than that required by the population; the problem is also exacerbated by the maldistribution within the population of foods rich in vitamin A. Nutrition education is the only answer when vitamin A deficiency develops despite vegetable sources of the vitamin being in plentiful supply. These foods are not incorporated into the diets of young children and mothers because of either lack of knowledge or cultural biases. Nutrition education, together with practical advice and help with growing cheap nutritious vegetables in home kitchen gardens, may help eradicate vitamin A deficiency. Horticultural approaches are increasingly recognized for their effectiveness and potential sustainability in improving not only vitamin A status but micronutrient status generally. The importance of combining increased vitamin A levels in the food supply with nutrition education and appropriate social marketing that promotes consumption by vulnerable groups within populations cannot be overestimated. Economic development and poverty reduction programmes are likely to improve socio-economic status and may indirectly contribute to reducing the problem of VAD.
So far this chapter has dealt with only some of the more important nutritionally determined deficiency disorders of public health importance. It is important to recognize that segments of populations in the world suffer from other nutritional disorders, such as those due to deficiencies of fluoride, zinc, selenium, B group vitamins, and ascorbic acid. Some of these occur during seasonal shortfalls in the availability of particular foods, and others accompany famine and conflict situations, when they are seen in refugee camps. Apart from these specific situations, in all regions of the world there are still some populations affected by one or more of these deficiencies, despite the significant advances that have been made in controlling nutritional deficiency disorders. In some regions of the world, largely as a result of increasing population size, the numbers of undernourished are increasing even if the population prevalence is declining. In many there is a shift in the severity of the deficiency diseases, with decreasing numbers with severe deficiency and increasing numbers with mild to moderate deficiencies. For a majority of these countries there is still the need to pursue vigorous policies and targeted action to combat the various nutritional deficiency disorders as a part of comprehensive health-oriented national food and nutrition policies.
Adult undernutrition
The nutritional world has concentrated for the last 50 years on the vulnerable groups in society (i.e. children, pregnant and nursing mothers, and the elderly) because they were considered to be particularly susceptible to nutritional deficiencies. Now, however, it is becoming apparent that the condition of adult undernutrition has been neglected, and this may have profound significance for Third World development. One simple measure of undernutrition in adults is adult weight in relation to height. A United Nations Working Party recommended that BMI, defined as body weight (in kilograms) divided by the square of the height (in metres), may be the most suitable index (James et al. 1988). The choice of BMI as the likely objective index for the assessment of nutritional status of adults was based on the observation that BMI was consistently highly correlated with body weight (a proxy for the available energy stores within the body) and was relatively independent of the stature or height of the individual. Adults with a BMI < 18.5 are considered to be chronically undernourished, and the same BMI cut-offs apply to both males and females (Table 1). These criteria for different degrees of undernutrition, based on BMI, are not only a sensitive index of adult nutritional status but also allow variations in weight in relation to socio-economic status, dietary intakes, and seasonal fluctuations in the availability of food in the community to be demonstrated (Shetty and James 1994). Below a BMI of 17.0, considerable impairment of physical well-being and exercise capacity is evident, with individuals becoming more lethargic and susceptible to illness. The ability to promote and sustain effective agricultural productivity, particularly in rural societies, may therefore be limited by the vigour and well-being of the adults on whom the vulnerable in society depend. Thus, it is now important to consider adult undernutrition as well as the problems of childhood.
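As an illustration only (the full grading of adult undernutrition follows Table 1, which is not reproduced here), the BMI calculation and the two cut-offs mentioned above can be sketched as follows:

```python
# Sketch of the BMI calculation and the adult undernutrition cut-offs
# discussed above (BMI < 18.5 for chronic undernutrition; below 17.0,
# marked impairment of physical capacity). Finer grades follow Table 1.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by the square of height (m)."""
    return weight_kg / height_m ** 2

def adult_status(b: float) -> str:
    if b < 17.0:
        return "undernourished, with likely impairment of physical capacity"
    if b < 18.5:
        return "chronically undernourished"
    return "not undernourished by the BMI criterion"

# Example: an adult weighing 48 kg at a height of 1.70 m.
b = bmi(48, 1.70)
print(f"BMI = {b:.1f}: {adult_status(b)}")  # BMI = 16.6
```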
These anthropometric measures of adult undernutrition as well as childhood malnutrition now provide an opportunity for objective measures of the prevalence of undernutrition worldwide. In practice, children and adults may adapt to a shortage of food by reducing their physical activity without changing their body weight. Thus, measures of the prevalence of low weight-for-height provide only a minimum index of underfeeding; physical activity is fundamental to children’s play and exploration and therefore to their mental development. Similarly, in adults physical activity is desirable not only for physiological well-being and for limiting the development of chronic disease but also to allow societies to prosper through physically demanding economic activity and those sound developments that rely on an energetic and enterprising population.
For many years the United Nations has attempted to assess the global prevalence of undernutrition by relating complex measures of food supply, and its variable distribution between households, to estimates of the population’s energy needs. The total number of individuals afflicted with inadequate energy intakes in 87 developing countries of Africa, Latin America, Asia, and the Far East was estimated as 512 million in 1983 to 1985, or 21.5 per cent of the population (FAO 1995). However, estimates of the numbers of undernourished in Africa alone vary between 70 million (19 per cent of the population) and 99 million (25 per cent of the population), depending on the criteria and the cut-off point chosen for adult undernutrition by the same Food and Agriculture Organization (FAO), Rome. This was the result of the FAO estimating adult undernutrition for its Fifth World Food Survey (FAO 1985) as those consuming less than either 1.4 or 1.2 times their basal energy needs. Yet it is known that 1.55 times the basal needs is required by an individual whose lifestyle involves only light physical activity. Using a more generous criterion for undernutrition, namely ‘not enough calories for an active working life’, the World Bank arrived at a staggering estimate of 730 million, or 34 per cent of the population: they supposed that everybody should be moderately physically active. Reliable global estimates of adult undernutrition are now based on objective anthropometric indicators. However, anthropometric surveys of the nutritional status of adults, with objective assessments of the numbers of undernourished adults, are rare because the issue of adult undernutrition in the developing regions of the world has largely been ignored until recently. Table 5 provides some data compiled recently by the FAO for some countries in Asia, Africa, and South America based on the use of BMI (Shetty and James 1994).
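The sensitivity of such prevalence estimates to the chosen cut-off is easy to demonstrate; in the sketch below the basal metabolic rate is a hypothetical round figure used purely for illustration, not an FAO value.

```python
# Shows how the cut-off criteria discussed above translate into daily
# energy thresholds. The BMR of 1400 kcal/day is a hypothetical round
# figure for illustration, not an FAO estimate.

BMR_KCAL_PER_DAY = 1400  # hypothetical adult basal metabolic rate

for multiplier, label in [
    (1.2, "FAO lower criterion"),
    (1.4, "FAO upper criterion"),
    (1.55, "requirement for light physical activity"),
]:
    cutoff = multiplier * BMR_KCAL_PER_DAY
    print(f"{label}: intakes below {cutoff:.0f} kcal/day count as inadequate")
```

Shifting the multiplier from 1.2 to 1.4 raises the threshold by 280 kcal/day in this example, enough to reclassify tens of millions of people, hence the widely divergent estimates quoted above.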

Table 5 Percentage distribution of undernourished adult population

Public health initiatives that deal with this malnutrition worldwide have to recognize that the basic causes of malnutrition are clearly political and socio-economic factors. Revolutions in the twentieth century in Russia and China resulted in great improvements in food supplies for the whole population. Agricultural revolutions such as the Green Revolution have also increased food availability and helped meet the food needs of the population. However, poverty is often the basis of a failure to obtain food even when it is available. A recent review by the World Bank (Reutlinger 1982) has emphasized that the most important determinants of hunger in developing countries are personal levels of income and the prices individuals must pay for food. Accelerated food production will alleviate hunger only to the extent that the scarce resources used in the process reduce poverty and lower food prices more than they would if used in other ways. Thus food entitlement decline is a more important force in sustaining poverty and undernutrition than a decline in the availability of food in poor developing societies.
Diet and chronic non-communicable diseases
The evidence relating diet to chronic non-communicable diseases such as cardiovascular disease, NIDDM, and cancers comes from population-based epidemiological investigations and from controlled trials. Animal experiments and in vitro tests on tissues have also contributed to our understanding of the relationship between diet and disease. Descriptive population-based epidemiological investigations yield valuable data that lead to important hypotheses, but they cannot be used alone to establish the causal links between diet and disease. The most consistent correlations between diet and chronic diseases have emerged from comparisons of populations, or segments of populations, with substantially different dietary habits. Analytical epidemiological studies, such as cohort studies and case–control studies, that compare information from groups of individuals within a population usually provide more accurate estimates of such associations. It is important to recognize when examining population-based epidemiological data relating diet to disease that every population consists of individuals who vary in their susceptibility to each disease. Part of this difference in susceptibility is genetic. As the diet within a population changes in the direction that increases the risk of the specific disease, an increasing proportion of individuals, particularly those most susceptible to the risk, develop the disease. As a result of this interindividual variability in the interaction of diet with an individual’s genetic make-up, and therefore in the individual’s susceptibility to disease, some diet–disease relationships are difficult to identify within a single population. In experimental clinical studies and controlled trials, long exposures may be required for the effect of the diet as a risk factor to be manifest. Strict inclusion criteria for participants may have to be adopted in order to show the effect with small numbers in a reasonable length of time. These in turn may restrict the study to homogeneous samples and thus may limit the applicability of results to the population at large. Despite these limitations, when carefully designed studies show repeated and consistent findings of an association between specific dietary factors and a chronic disease, they generally indicate a cause-and-effect relationship.
Diet and cardiovascular diseases
The most common cardiovascular diseases that are diet-related are CHD and hypertension.
Coronary heart disease
CHD emerged as a burgeoning public health problem in Europe and North America after the Second World War and by the end of the 1950s had become the single major cause of adult death. Although the earliest observation relating diet, plasma cholesterol level, and CHD was made in 1916, it was the approximately fivefold difference in CHD rates among the various developed countries, together with the intrapopulation variations in rates by socio-economic class, ethnicity, and geographical location, that brought the dietary basis of CHD to attention. The marked changes in CHD rates in migrant populations that moved across a geographical gradient in CHD risk provided further evidence of the environmental nature of the causative factor, i.e. a change in diet and lifestyle. As the evidence began to mount, the WHO Expert Committee on Prevention of CHD (1982) concluded, after reviewing the existing knowledge, that the data on the relationship between blood cholesterol and the risk of CHD, and on the relationship between lipids in the diet and in the blood, met the criteria for an epidemiological association to be termed causal. These data were, of course, backed by a plethora of intervention trials in volunteers, clinical studies, and a wide range of animal experiments demonstrating the effects of diet on coronary artery atherosclerosis.
The relationship between dietary factors and CHD was supported by the results of the Seven Country Study (Keys 1980). The saturated fat intake varied between 3 per cent of total energy in Japan and 22 per cent in eastern Finland, while the 15-year CHD incidence rates varied between 144 per 10 000 in Japan and 1202 per 10 000 in eastern Finland. The annual incidence of CHD among 40- to 59-year-old men initially free of CHD was 15 per 100 000 in Japan and 198 per 100 000 in Finland (Keys 1980). Measurement of food consumption by the people in 16 well-defined cohorts in seven countries, and its correlation with the 10-year incidence rate of CHD deaths, provided further support for this causal association. The strongest correlation was noted between CHD and the percentage of energy derived from saturated fat (Fig. 2(a)), whereas total fat was not significantly correlated with CHD.

Fig. 2 (a) The association between the saturated fat content of food supply and CHD. More than one data point is shown for some countries because data from more than one year have been used to construct the diagram. (b) The association between total food supply fat content and breast cancer. (Source: WHO Study Group 1990.)

In the Seven Country Study, the serum total cholesterol values were 165 mg/dl in Japan and 270 mg/dl in eastern Finland, suggesting that the variation in serum total cholesterol levels between populations, and hence in CHD incidence, could be largely explained by differences in saturated fat intake. On a population basis, the risk of CHD seems to rise progressively within the same population with increases in plasma total cholesterol (Fig. 3). Observational studies suggest that one population with an average total cholesterol level 10 per cent lower than another will have one-third less CHD, and a 30 per cent difference in total cholesterol predicts a fourfold difference in CHD (WHO Study Group 1990). The Seven Country Study showed a strong positive relationship between saturated fat intake and total cholesterol level; populations with an average saturated fat intake between 3 and 10 per cent of energy intake were characterized by serum total cholesterol levels below 200 mg/dl and by low mortality rates from CHD. As saturated fat intakes increased to greater than 10 per cent of energy intake, a marked and progressive increase in CHD mortality was noticed.

Fig. 3 Within-population relationship between plasma cholesterol and CHD mortality, and between plasma cholesterol and total mortality. (Source: Martin et al. 1986.)

Several prospective studies have shown an inverse relation between high-density lipoprotein (HDL) cholesterol and CHD incidence. However, HDL cholesterol levels are influenced by several non-dietary factors, and HDL levels do not help to explain differences in CHD mortality between populations; dietary influences on HDL levels are poorly recognized but include the terpenes, such as menthol, found in spices. Their influence on population differences in HDL levels seems, however, to be small. HDL levels are increased by alcohol, by slimming, and by physical activity. The roles of the different unsaturated fatty acids (e.g. monounsaturated and n-3 and n-6 polyunsaturated fatty acids) in the prevention of CHD are unclear. Populations who have high intakes of monounsaturated fatty acids (from olive oil) or have diets rich in n-3 polyunsaturates of marine origin (such as Eskimos) also have low CHD rates. There is emerging evidence that some isomers of fatty acids, such as trans-fatty acids, may contribute to increasing the incidence of CHD by increasing low-density lipoprotein (LDL) cholesterol levels, by interfering with essential fatty acid metabolism, and by enhancing the concentration of the lipoprotein Lp(a), which, in genetically susceptible people, seems to be an additional risk factor through mechanisms that include an antiplasminogen effect that limits fibrinolysis.
Other dietary components (e.g. dietary fibre or complex carbohydrates) seem to influence serum cholesterol levels and the incidence of CHD through complex mechanisms. Population subgroups consuming diets rich in plant foods with a high content of complex carbohydrates have lower rates of CHD; vegetarians have a 30 per cent lower rate of CHD mortality than non-vegetarians, and their serum cholesterol levels are significantly lower than those of both lacto-ovo-vegetarians and non-vegetarians. Alcohol consumption also appears to reduce the incidence of CHD: a number of observational studies suggest that light-to-moderate drinkers have a slightly lower risk of CHD than abstainers. However, the relationship between alcohol intake and CHD is complicated by changes in blood pressure and by the nature of the alcoholic drink. The presence of phenolic compounds in red wine may contribute to the benefits of drinking red wine, as compared with alcohol consumption per se, in reducing the incidence of CHD.
Of the various risk factors shown in Fig. 4, the risk of CHD in individuals is dominated by three major factors: high serum total cholesterol, high blood pressure, and cigarette smoking (WHO 1982). There is also synergism between risk factors: the Japanese, notable for their high smoking rates and hypertension but very low cholesterol levels, have low CHD rates, and smoking and hypertension are particularly dangerous in societies and individuals with high cholesterol levels. Body-weight changes induced by dietary and lifestyle changes (e.g. in levels of physical activity) are strongly related to changes in serum total cholesterol and blood pressure and to obesity. Obesity in turn is strongly related to diabetes mellitus, and both are risk factors for CHD.

Fig. 4 The multiple dietary factors responsible for the pathophysiological changes leading to CHD. (Source: Scottish Office Home and Health Department 1993.)

There is now general agreement on the strategies that need to be adopted to reduce both the frequency and the extent of the risk factors for CHD. The nutritional approach aims at reducing obesity, lowering blood pressure, lowering total and LDL cholesterol, and increasing HDL cholesterol. It is possible to adopt a population strategy recommending a range of dietary principles likely to facilitate attaining one or more of these objectives. Current recommendations take into consideration the entire spectrum of cardiovascular risk, including effects on thrombosis, and provide a holistic approach to recommending a healthy diet that will reduce all chronic non-communicable diseases, including cancers. These recommendations include lowering total fat intake to between 30 and 35 per cent of total calories, restricting saturated fat intake to a maximum of 10 per cent of total calories, and increasing intakes of complex carbohydrates or dietary fibre. Translated into food components, this would mean reducing in particular the intake of animal fat and of hydrogenated and hardened vegetable oils, and increasing the consumption of cereals, vegetables, and fruits.
Considerable controversy continues about the advisability of reducing cholesterol levels either in populations or in individuals, and much harm has been done by public health specialists and cardiologists who adopt a public stance questioning the benefits and highlighting the dangers of dietary change. Yet repeated expert government and WHO reports have consistently advocated such a change in Western societies. Law et al. (1994) have undertaken a further set of meticulous analyses and suggest that dietary change is beneficial even for people in their seventh and eighth decades. Much of the controversy arises because drug trials of cholesterol lowering have been included in the analyses; these obscure the picture because of increased rates of non-cardiovascular deaths linked to the drugs used.
Hypertension and stroke
The risk of CHD and stroke increases progressively throughout the observed range of blood pressure (Fig. 5), based on nine major studies conducted in a number of different countries (MacMahon et al. 1990). From the combined data it appears that there is a fivefold difference in the risk of CHD and a 10-fold difference in the risk of stroke over a range of diastolic blood pressure of only 40 mmHg. Appropriate statistical analysis indicates that a sustained difference of only 7.5 mmHg in diastolic blood pressure confers a 28 per cent difference in the risk of CHD and a 44 per cent difference in the risk of stroke.

Fig. 5 Association between the usual diastolic blood pressure and the risk of stroke and CHD. The size of the boxes is proportional to the amount of information in each DBP category. The vertical bars denote 95 per cent confidence limits. Values of the mean usual DBP were estimated from later measurements in the Framingham Study. (Source: MacMahon et al. 1990.)
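Under a multiplicative (log-linear) assumption, the per-7.5-mmHg differences quoted above compound across the 40-mmHg range of diastolic pressure. The sketch below is illustrative only; the extrapolation is an assumption of convenience, not MacMahon et al.'s own analysis.

```python
# A simplified log-linear extrapolation of the per-7.5-mmHg risk
# differences quoted from MacMahon et al. (1990); the multiplicative
# model is an illustrative assumption, not the original analysis.
CHD_RR_PER_7_5_MMHG = 1.28     # 28% difference in CHD risk per 7.5 mmHg
STROKE_RR_PER_7_5_MMHG = 1.44  # 44% difference in stroke risk per 7.5 mmHg

def risk_ratio(rr_per_7_5: float, dbp_difference_mmhg: float) -> float:
    """Relative risk across a sustained diastolic pressure difference,
    assuming the proportional effect compounds multiplicatively."""
    return rr_per_7_5 ** (dbp_difference_mmhg / 7.5)

print(f"CHD over 40 mmHg:    {risk_ratio(CHD_RR_PER_7_5_MMHG, 40):.1f}-fold")
print(f"Stroke over 40 mmHg: {risk_ratio(STROKE_RR_PER_7_5_MMHG, 40):.1f}-fold")
# Prints roughly 3.7-fold and 7.0-fold: the same order of magnitude as
# the fivefold (CHD) and 10-fold (stroke) gradients in the pooled data.
```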

Obesity and alcohol intake are related to hypertension, as weight reduction and restriction of alcohol intake can lower blood pressure. The dietary factors that are implicated (in addition to alcohol and caffeine intakes) are excessive sodium and saturated fat intakes and low potassium and calcium intakes. The role of dietary sodium in hypertension has been a subject of considerable debate. A critical review of 27 published studies concluded that there was a relationship between salt intake and the prevalence of hypertension (Glieberman 1973); however, in the majority of studies the methods for assessing both dietary sodium and blood pressure were inadequate. The Intersalt study (Intersalt Cooperative Research Group 1988) compared standardized blood pressure measurements with 24-h urinary sodium excretion in 10 000 individuals aged 20 to 59 years in 32 countries, and showed that populations with very low sodium excretion (implying low sodium intakes) had low median blood pressures, a low prevalence of hypertension, and no increase in blood pressure with age. Although sodium intake was related to blood pressure levels and also influenced the extent to which blood pressure increased with age, the overall association between sodium, median blood pressure, and the prevalence of hypertension was not statistically significant.
A number of explanations have been put forward for why meticulous studies such as the Intersalt study (Intersalt Cooperative Research Group 1988) underestimate the relationship between dietary sodium and blood pressure. These include, among others, the unreliability of assessing dietary sodium intake accurately, genetic variability, and the contribution of other factors such as obesity and alcohol intake. Recent meta-analyses of published studies have correlated blood pressure recordings in individuals with measurements of their 24-h sodium intake (Law et al. 1991) and suggest that this association increases with age and with the initial blood pressure. The results of intervention trials of sodium restriction also tend to support this relationship. Aggregation of the results of 68 cross-over trials and 10 randomized controlled trials of dietary salt reduction has shown that moderate dietary salt reduction over a period of a few weeks lowers systolic and diastolic blood pressure in individuals with high blood pressure (Law et al. 1991). It was estimated that such reductions in salt intake by populations in Western countries would reduce the incidence of stroke by 26 per cent and that of CHD by 15 per cent. Reduction of the amount of salt in processed food would lower blood pressure even further and might prevent as many as 70 000 deaths per year in the United Kingdom. The results of trials of drug therapy also support the fact that the incidence of stroke can be reduced if blood pressure is lowered, although the beneficial effect on the incidence of CHD is smaller than expected.
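Population estimates of this kind follow from simple attributable-burden arithmetic: events averted equal baseline events multiplied by the proportional reduction. A minimal sketch follows; the annual event counts in it are hypothetical placeholders, not the figures used by Law et al. (1991).

```python
# Attributable-burden arithmetic: events averted = baseline events x
# proportional reduction. The annual event counts below are hypothetical
# placeholders, not the figures used by Law et al. (1991).
STROKE_REDUCTION = 0.26  # pooled-trial estimate quoted above
CHD_REDUCTION = 0.15

def events_averted(baseline_events_per_year: float, reduction: float) -> int:
    """Expected annual events averted for a given proportional reduction."""
    return round(baseline_events_per_year * reduction)

annual_strokes = 100_000     # hypothetical annual strokes in a Western population
annual_chd_events = 300_000  # hypothetical annual CHD events

print("Strokes averted per year:", events_averted(annual_strokes, STROKE_REDUCTION))
print("CHD events averted per year:", events_averted(annual_chd_events, CHD_REDUCTION))
```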
The other dietary component investigated in the Intersalt study (Intersalt Cooperative Research Group 1988) was potassium. Urinary potassium excretion, an assumed indicator of intake, was negatively related to blood pressure, while the urinary sodium-to-potassium concentration ratio was positively related. It has also been observed that potassium supplementation reduces blood pressure in both normotensive and hypertensive subjects (Cappuccio and MacGregor 1991). Some, but not all, cross-sectional and intervention studies suggest a beneficial effect of calcium intake on blood pressure. Epidemiological studies also consistently show lower blood pressures among vegetarians than non-vegetarians, independent of age and body weight. These studies may also support a role for other dietary components: vegetarian diets rich in complex carbohydrates are also rich in potassium and other minerals, while some unknown components of animal products, possibly protein or fat, may influence blood pressure adversely in well-nourished populations.
Nutritional intervention is likely to reduce the occurrence of hypertension and its consequent complications of stroke and CHD in the community, as vividly demonstrated in Finland, where the average blood pressure has fallen by nearly 10 mmHg and the prevalence of hypertension is only a quarter of what it was. In association with population falls in average cholesterol levels, CHD and stroke rates in Finland have fallen dramatically as the population's diet was transformed to change its fat content and to more than double the average vegetable and fruit intakes. The declines in CHD and stroke rates were predominantly dependent on the falls in cholesterol and blood pressure levels, respectively, and these changes occurred despite increasing obesity rates.
Diet and cancers
Although the relationships between specific dietary components and cancers are less well established than those between diet and cardiovascular disease, the overall impact of diet on cancer incidence appears to be significant. It is now widely accepted that one-third of human cancers could relate directly to some dietary component (Doll and Peto 1981), and it is probable that diet plays an important part in influencing the permissive role of carcinogens in the development of many cancers. Thus up to 80 per cent of all cancers may have a link with nutrition.
Evidence that diet is a determinant of cancer risk comes from several sources. These include correlations between national and regional food consumption data and the incidence of cancers in the population. Studies of the changing rates of cancer in populations migrating from a region or country of one dietary culture to another have contributed many important hypotheses. Case–control studies of the dietary habits of individuals with and without a cancer, prospective studies, and intervention studies have all provided evidence for the effects of diet on cancer.
The sections below summarize only those cancers for which the role of diet or of a specific nutrient is reasonably well established. Many other cancers in which diet may play a part are not discussed at length, as the aim is not to make this section exhaustive and all-inclusive.
Cancers of the gastrointestinal tract may be influenced by the diet. The intake of alcohol appears to be an independent risk factor for oral, pharyngeal, and oesophageal cancer. Consumption of salted fish and of preserved and fermented foods containing nitrosamines, as weaning foods or from early childhood, may introduce a substantial risk of nasopharyngeal cancer. Several studies have demonstrated a positive association between oesophageal cancer and several dietary factors, including low intakes of vitamins A and C, riboflavin, nicotinic acid, calcium, and zinc. In dietary terms the associations are with low intakes of lentils, green vegetables, and fresh fruits. Like nasopharyngeal cancers, the risk of oesophageal cancers is also positively related to increased intakes of highly salted foods and of fermented, mouldy foods containing N-nitroso compounds. Stomach cancer is also associated with diets comprising large amounts of smoked and salt-preserved foods, which may contain precursors of nitrosamines, and low levels of fresh fruit and vegetables, which may contain nutrients that inhibit the formation of nitrosamines.

Colon cancer is the third most common form of cancer, and incidence rates are high in western Europe and North America, whereas they are low in sub-Saharan Africa (Boyle et al. 1985). Almost all the specific risk factors for colon cancer are of dietary origin. International comparisons indicate that diets low in dietary fibre or complex carbohydrates and high in animal fat and animal protein increase the risk of colon cancer. The original hypothesis of the protective effect of dietary fibre was based on clinical observations: in southern Africa, for instance, the consumption of large amounts of plant-based foods was associated with large faecal weights and the virtual absence of large bowel cancer. A hypothesis was then proposed suggesting that increasing intakes of dietary fibre increased faecal bulk and reduced transit time. It has since been argued that this mechanism may not be relevant to colorectal carcinogenesis (Kritchevsky 1986). The epidemiological data relating dietary fibre intakes to colorectal cancer are largely equivocal, despite the demonstration by several studies of an inverse relationship between the intake of foods rich in dietary fibre and colon cancer risk. The large majority of studies in humans have found no protective effect of fibre from cereals but have found a protective effect of fibre from vegetables and possibly fruits (Willett 1989). Diets rich in fibre are also rich sources of nutrients such as antioxidant vitamins and minerals with potential cancer-inhibiting properties. Vegetarian diets seem to protect against the risk of colon cancer, and the effects may be mediated by intakes of vitamin A and its precursor β-carotene.
Epidemiological studies consistently show that fat intake is positively related to colorectal cancer risk. Energy intakes also seem to be consistently higher in cases of colorectal cancer than in comparison groups, and many studies had hitherto failed to show an energy-independent effect of fat intake. More recently, Willett et al. (1990) showed that, after adjustment for total energy intake, the consumption of animal fat was associated with increased colon cancer risk. They also demonstrated a highly significant trend in risk when the relative risk across quintiles of fat intake was related to the risk of large bowel cancer. The risk was not, however, associated with vegetable fat, but with the consumption of saturated animal fat. The study by Willett et al. (1990) thus provided good epidemiological evidence implicating saturated fat and meat intake as risk factors for colon cancer. Case–control studies of Chinese people in North America and China have confirmed that increasing total energy intake, and specifically energy from saturated fat, increases the risk of colon cancer (Whittemore et al. 1990).
In summary, dietary factors seem to be important determinants of colon cancer risk. An effect of saturated fat intake appears to exist independently of energy intake. Meat intake also increases risk, an effect that may not be independent of saturated fat intake, although it is likely that the products of cooking meat are themselves mutagenic. Dietary fibre from vegetable sources may be protective and reduce risk, whereas possible protection from the consumption of cereal fibres remains uncertain.
Primary liver cancers have been correlated worldwide with mycotoxin (aflatoxin) contamination of foodstuffs. The primary causal factor for lung cancer, a leading cause of death among men, is cigarette smoking. However, several studies have shown an interactive effect between cigarette smoking and a low frequency of intake of green and yellow vegetables rich in β-carotene. In prospective studies, the frequency of consumption of foods rich in β-carotene has been inversely associated with lung cancer risk. Dietary fats and dietary cholesterol have also been positively associated with lung cancer risk.
Breast cancer is a common cause of death among women in both the United States and the United Kingdom. Correlation studies have provided evidence of a direct association between breast cancer mortality and the intakes of calories and dietary fat, and of specific sources of dietary fat such as milk and meat. The evidence for the role of dietary factors in the causation of female breast cancer comes both from animal experiments and from a range of epidemiological studies. Cross-cultural ecological studies have provided evidence of the association of fat intake with breast cancer risk (Fig. 2(b)). Neither case–control studies nor prospective studies reported to date provide unequivocal support for an association between fat intake and risk of breast cancer in postmenopausal women, and the most recent analysis found no association between intakes of fat or dietary fibre and breast cancer risk in either pre- or postmenopausal women (Willett et al. 1992). Among other dietary factors, vegetable consumption may be protective, whereas a modest increase in risk has been consistently seen with increased alcohol intake in women.
International incidence and mortality data generally show a positive correlation of prostate cancer with the incidence of other diet-related cancers. Inter- and intracountry analyses show a positive correlation with total fat intake, whereas vitamin A and β-carotene emerge as protective factors. The strength of the associations between selected foods, food processing, and nutrients in the diet and cancers is summarized in Table 6. The evidence suggests that a high intake of total fat, and more specifically of saturated fats of animal origin, is associated with an increased risk of cancers of the colon and breast. Diets high in plant foods, especially green and yellow vegetables and fruits, are strongly associated with a lower incidence of a wide range of cancers, including those of the lung, colon, oesophagus, and stomach. Such diets tend to be low in saturated fat, high in complex carbohydrate and fibre, and rich in several antioxidant vitamins, including vitamin A and β-carotene. Sustained and consistent intake of alcohol is also associated with cancers, in particular those of the upper alimentary tract. Dietary factors thus seem to be important in the causation of cancers at many sites, and dietary modification may reduce cancer risk. However, at present it is difficult to quantify the contribution of diet to total cancer incidence and mortality.

Table 6 Associations of foods, food processing, and nutrients with the risk of cancer

Although the associations between diet and the risk of cancers at various sites are not always conclusive, and the underlying mechanisms are poorly understood, the available evidence can be used to initiate public health strategies to prevent cancer. It seems quite clear that there is a link between fat, particularly saturated fat intake, and colon cancer. This is a consistent finding, and it can be concluded that if a population were to reduce its intake of fat (both total and saturated), then a reduction in colon cancer, and possibly breast cancer, could be expected. The recommendation is to lower total fat intake to between 30 and 35 per cent of total energy, a recommendation consistent with the dietary goals for reducing cardiovascular risk. Another likely preventive measure is to increase the consumption of fruits and vegetables. Numerous studies support the conclusion that higher consumption of vegetables and fruits is consistently associated with a reduced risk of cancer at most sites. Vegetables and fruits are exceptional sources of a large number of potentially anticarcinogenic agents, including vitamin A and the carotenoids, vitamins C and E, selenium, dietary fibre, and a whole range of chemical compounds such as flavonoids, phenols, and plant sterols. The basis of much of their protective effect is unknown or poorly understood. The WHO and the more recent World Cancer Research Fund report (1997) recommend an intake of 400 g/day of fruit and vegetables (excluding potatoes) as a reasonable dietary goal.
Diet, lifestyles, and obesity
Obesity is one of the most important public health problems, and its prevalence is increasing in the developed, industrialized world. Table 8 summarizes the prevalence rates of obesity among adults in several countries. Even in developing countries, relatively affluent and urbanized communities are showing a rapidly increasing prevalence of obesity among adults.

Table 8 Prevalence rates of obesity (BMI > 30.0) among adults

Being overweight or obese is normally assumed to indicate an excess of body fat. As with adult undernutrition, the BMI is the indicator of choice for diagnosing obesity in adults, and Table 7 outlines the diagnostic criteria for overweight and obesity in infants, children, adolescents, and adults (WHO 1995, 1998). Recent recommendations of the WHO consultation include the suggestion that a BMI of between 18.5 and 24.9 in adults be considered the appropriate weight for height. A BMI between 25 and 29.9 indicates overweight and possibly a pre-obese state, whereas obesity is diagnosed at a BMI of 30.0 or above. The main health risk of obesity is premature death due to heart disease, hypertension, and other chronic diseases. In the presence of other risk factors (both dietary and non-dietary), obesity increases the risk of CHD, hypertension, and stroke. In women, obesity seems to be one of the best predictors of cardiovascular disease. Longitudinal studies have demonstrated that weight gain, in both men and women, is significantly related to increases in cardiovascular risk factors: weight gain is strongly associated with increased blood pressure, elevated plasma cholesterol and triglycerides, and hyperglycaemia (fasting and postprandial). The distribution of body fat in obesity may also contribute to increased risk. Swedish studies have shown that high waist-to-hip ratios (i.e. fat deposited predominantly in the abdomen rather than subcutaneously) increase the risk of heart disease and diabetes, although more recent longer-term follow-up has not always supported this claim. The coexistence of diabetes is also an important contributor to morbidity and mortality in obese individuals. Obesity also carries an increased risk of gallbladder stones, of breast and uterine cancer in females, and possibly of prostate and renal cancer in males. Body-weight increase (with increased BMI) is also associated with increasing mortality in both smokers and non-smokers.

Table 7 Diagnostic criteria for overweight and obesity in infants, children, adolescents, and adults
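The adult cut-offs just described are easily expressed as a simple classification rule. A minimal sketch follows; the under-18.5 category reflects the adult undernutrition criteria discussed earlier in the chapter, and the age-specific criteria for infants, children, and adolescents in Table 7 are not captured here.

```python
def classify_bmi(weight_kg: float, height_m: float) -> str:
    """Classify adult nutritional status from the body mass index
    (weight divided by height squared), using the WHO adult cut-offs
    described in the text."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return f"BMI {bmi:.1f}: underweight (chronic energy deficiency)"
    if bmi < 25.0:
        return f"BMI {bmi:.1f}: appropriate weight for height"
    if bmi < 30.0:
        return f"BMI {bmi:.1f}: overweight (pre-obese)"
    return f"BMI {bmi:.1f}: obese"

print(classify_bmi(95.0, 1.75))  # BMI 31.0: obese
```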

Several environmental factors, both dietary and lifestyle related, contribute to increasing obesity in communities. Social and environmental factors that increase energy intake and/or reduce physical activity are of primary interest. Changes in the environment that affect the levels of physical activity among children and adults may contribute to the development of obesity, and changes both in the foods consumed and in patterns of eating behaviour may contribute to intakes of energy well beyond one's requirements. Increased intake of dietary fat provides energy-dense food, which may not contribute to the efficient regulation of appetite and food intake; excess fat is more readily stored, while fibre-rich complex carbohydrates tend to bulk the meal and limit intakes. International and national comparisons reveal that obesity increases as the percentage of dietary energy derived from fat increases (Fig. 6). Patterns of eating, particularly frequent snacking between meals, may also contribute to increased intakes. However, the weight of evidence supports the view that much of the energy imbalance responsible for the epidemic of obesity in modern societies is largely the result of dramatic reductions in physical activity levels (both occupational and leisure time) at a time when food availability is more than adequate.

Fig. 6 Percentage of standard adult body weights in relation to national fat consumption figures in 10 countries: 1, Uruguay; 2, Venezuela; 3, Costa Rica; 4, Nicaragua; 5, Honduras; 6, Panama; 7, Malaya; 8, Guatemala; 9, El Salvador; 10, East Pakistan. (Source: Lissner and Heitmann 1995.)

Preventive measures to deal with the increasing prevalence of obesity worldwide have to start very early. Primary prevention may have to be aimed at primary-school children, and includes nutrition education of children and parents and attention to school meals, snacking, levels of physical activity, and related issues. Public health initiatives should attempt to tackle all the social and environmental factors that may contribute to increasing energy and fat intakes and declining physical activity levels. As the issues are complex, attempts have to be made to address transport policies, worksite facilities for exercise, and the several other factors that contribute to this problem.
Non-insulin-dependent diabetes mellitus
NIDDM is a chronic metabolic disorder that occurs in middle adulthood and is strongly associated with an increased risk of CHD. It has to be distinguished from insulin-dependent diabetes as well as from gestational diabetes. Obesity is a major risk factor for NIDDM, the risk being related both to the duration and to the degree of obesity. The occurrence of NIDDM in a community appears to be triggered by a number of environmental factors, such as a sedentary lifestyle, dietary factors, stress, urbanization, and socio-economic factors. Certain ethnic groups seem to have a higher incidence of NIDDM; these include the Pima Indians, Nauruans, and South Asians (i.e. Indians, Pakistanis, and Bangladeshis). NIDDM also seems to occur when the food ecosystem changes rapidly, for example on the urbanization of Australian Aborigines or the adoption of Western dietary patterns by the Pima Indians.
The cause of NIDDM is unclear, but it seems to involve both impaired pancreatic secretion of insulin and the development of tissue resistance to insulin. Excess weight and obesity, particularly a central or truncal distribution of fat accompanied by a high waist-to-hip ratio and a high waist circumference, seem to be invariably present in NIDDM. Hence the most rational and promising approach to preventing NIDDM is to prevent obesity. Weight control is therefore of fundamental importance, both in a population strategy for the primary prevention of this disorder and in tackling high-risk individuals. Physical activity also helps to improve glucose tolerance, through weight reduction and through its beneficial effects on insulin resistance. Diets high in plant foods are associated with a lower incidence of diabetes mellitus, and vegetarians have a substantially lower risk of NIDDM than non-vegetarians.
Expert groups have provided dietary recommendations for the primary prevention of NIDDM, the management of diabetes, and the reduction of secondary complications, which include CHD risk and the renal, ocular, and neurological complications of diabetes. Prevention of weight gain and reduction of obesity are the keys to minimizing the prevalence of NIDDM and its attendant complications and risks; increasing levels of physical activity also help. The specific dietary recommendations include diets with carbohydrates providing 55 to 60 per cent of energy, maximizing the content of complex carbohydrates and dietary fibre (up to 40 g of fibre per day), and reducing simple sugar intakes. In addition, the general recommendations for fat (total fat no more than 30 per cent of calories, and saturated fat less than 10 per cent of calories) are also emphasized because of the associated high risk of CHD in individuals with NIDDM. The factor of prime importance is to achieve and maintain a desirable body weight and, if possible, to prevent weight gain in the first place.
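To see how such targets translate into practice, the percentage of energy derived from each macronutrient can be computed from the standard Atwater conversion factors (4 kcal/g for carbohydrate and protein, 9 kcal/g for fat). The sketch below uses hypothetical intake figures for a single day; it is an illustration of the arithmetic, not a clinical tool.

```python
# Energy shares from macronutrient intakes, assuming the standard
# Atwater factors (4 kcal/g carbohydrate and protein, 9 kcal/g fat).
# The example intake figures below are hypothetical.
def energy_shares(carb_g: float, protein_g: float, fat_g: float, sat_fat_g: float):
    energy = 4 * carb_g + 4 * protein_g + 9 * fat_g
    return {
        "energy_kcal": energy,
        "carb_pct": 100 * 4 * carb_g / energy,
        "fat_pct": 100 * 9 * fat_g / energy,
        "sat_fat_pct": 100 * 9 * sat_fat_g / energy,
    }

day = energy_shares(carb_g=330, protein_g=75, fat_g=70, sat_fat_g=22)
print(f"Carbohydrate: {day['carb_pct']:.0f}% of energy (target 55-60%)")
print(f"Total fat:    {day['fat_pct']:.0f}% of energy (target <=30%)")
print(f"Saturated:    {day['sat_fat_pct']:.0f}% of energy (target <10%)")
```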
Diet and osteoporosis
The growing number of elderly people in the populations of the developed world has been accompanied by an increase in health problems that affect their quality of life. Fracture of the hip is an important health problem, particularly among postmenopausal women. Fractures occur in the elderly after what appear to be relatively trivial falls when there is osteoporosis and the density of the bone is reduced. Bone density increases in childhood and adolescence and reaches a peak at about 20 years of age; it falls from the menopause in women and from about the age of 55 years in men. The variation in bone density between individuals is large, of the order of ±20 per cent. As bone density declines with increasing age, those who attain high peak bone mass at the end of adolescence and retain higher bone density during adulthood become osteoporotic with advancing age much more slowly than those who start with lower bone densities. Hence the range of factors that influence the attainment of peak bone density may play a crucial part in the development of osteoporosis and the occurrence of fractures as age advances.
Several factors determine the onset of osteoporosis; they include the lack of oestrogen in postmenopausal women, the degree of mobility, smoking, and alcohol intake. Calcium intake is a likely dietary determinant that may contribute to the onset and degree of osteoporosis. Evidence from some countries, such as Yugoslavia, indicates that the osteoporosis that predisposes to fractures may be diet related, as the fracture rate is halved among individuals in the higher calcium intake range compared with those on low-calcium diets. However, there are regions where lower rates of osteoporotic fracture are associated with lower calcium intakes; the rates are lower in Singapore than in the United States, for example, although calcium intakes there are also lower. The traditional emphasis on calcium intake possibly reflects the recognition of its importance in contributing to the density of bone during growth and the need to attain dense bones at the peak of adult life. High-protein and high-salt diets are known to increase bone loss, whereas calcium supplements, well above what may be considered physiological, may help to reduce the rate of bone loss in postmenopausal women and slow the development of osteoporosis.
It is generally believed that populations in developing countries are at lower risk of developing osteoporosis. This is in spite of low calcium intakes, and may be related to the fact that these populations do more physical work, smoke less, drink less alcohol, and have diets that are generally not high in protein or salt. However, osteoporosis is seen in developing countries in regions where low intakes of dietary calcium are associated with high fluoride intakes; it does not occur where high intakes of fluoride are accompanied by high dietary intakes of calcium.
Diet and dental caries
Dental caries is a common disease of the teeth that results in decay of the tooth surface, usually beginning in the enamel. An essential feature in the causation of dental caries is dental plaque, which is largely made up of microorganisms. Dietary sugars diffuse into the dental plaque where they are metabolized by the microorganisms to acids, which can dissolve the mineral phase of the enamel causing dental decay. The process is, however, much more complex and is related to the quantity and quality of saliva produced in the mouth among other factors.
The evidence relating diet to dental caries is vast and has been well reviewed (Rugg-Gunn 1993). The overwhelming evidence indicates that sugars are cariogenic. There is a good correlation between the sugar supply (in grams per person per day) and the occurrence of dental caries in 12-year-olds in compiled data from 47 countries (Sreebny 1982). The consumption of refined sugar is a recent phenomenon in many parts of the world and seems to have been accompanied by an increase in dental caries in communities that were hitherto free of the problem. Fifty-five cross-sectional studies correlating individuals' sugar consumption with the incidence of dental caries have demonstrated significant correlations between the two, particularly among young children. It also appears that the consumption of sugars between meals is associated with a marked increase in caries, while consumption of sugars with meals is associated with only a small increase. Sucrose seems to be the predominant cariogenic dietary agent, although the current emphasis is on the consumption of all free sugars, particularly between meals. Despite suggestions that starch is also cariogenic, careful analysis of epidemiological data from several countries suggests that a much closer relationship exists between dental caries and free sugars than between caries and starchy cereal foods. Fresh fruit, although it contains intrinsic sugars, has a lower cariogenic potential, while fruit juices are cariogenic, which may be related to their added sugars or to the lack of adequate salivary stimulation. Food may also contain protective factors that help prevent dental caries; these include a sufficient daily ingestion of fluoride, and inorganic phosphates in the diet also seem to protect against dental caries.
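Ecological correlations of the kind reported by Sreebny (1982) relate per-capita sugar supply to the mean number of decayed, missing, and filled teeth (DMFT) in 12-year-olds across countries. The sketch below shows how such a correlation is computed; the five country-level data points are invented for illustration and do not reproduce Sreebny's figures.

```python
# Illustrative ecological correlation: per-capita sugar supply versus
# mean DMFT in 12-year-olds. The data points are invented.
from statistics import correlation  # requires Python 3.10 or later

sugar_g_per_person_day = [20, 50, 90, 120, 150]
mean_dmft_12_year_olds = [0.8, 1.9, 3.1, 4.2, 4.6]

r = correlation(sugar_g_per_person_day, mean_dmft_12_year_olds)
print(f"Pearson r = {r:.2f}")  # strongly positive for these invented data
```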
Prevention of dental caries can be achieved by health education aimed at the individual, beginning in infancy. Avoidance of the addition of free sugars to bottle feeds and to milk and fruit drinks is essential, and an adequate intake of fluoride is desirable from early in life. During childhood and adolescence, restriction of the three major sources that contribute two-thirds of our intake of free sugars (i.e. confectionery, table sugar, and soft drinks) will help reduce the increment of caries in childhood. At local and national levels, the main public health interventions should include fluoridation of water supplies, labelling of foods, and possibly changes in the agricultural policies that promote the production of free sugars.
Diet and non-cancerous conditions of the large bowel
Several chronic disorders of the large bowel are frequently associated with a typical 'affluent' diet that is low in dietary fibre; these include diverticular disease, haemorrhoids, and constipation. Constipation occurs when the daily faecal weight falls below 100 g and is associated with slower intestinal transit times. A linear relationship has been demonstrated between dietary non-starch polysaccharide intake and mean daily stool weight. There is now evidence that the starch content of the diet, particularly if the starch is cooked and then refrigerated after cooking, also influences faecal weight and transit times. Thus increasing both non-starch polysaccharide and starch intakes reduces the chances of constipation.
Emerging food and nutrition issues of public health concern
Over the last decade several issues of public health concern related to food and nutrition have emerged both in the developed, industrialized West and in developing societies of the world. These include the problems related to the microbiological safety of foods, the frightening prospect of an epidemic of spongiform encephalopathies, concerns related to genetically modified (GM) foods, issues related to labelling of processed foods, and the emerging epidemic of diet-related chronic diseases and obesity in developing societies. Some of these issues will be dealt with briefly in this section.
Food safety
Food safety refers to whether food is safe for human consumption, and hence free of biological and chemical contaminants that have the potential to cause illness. The increasing concern over the safety of foods in the developed world is a paradox, in that the epidemiological evidence on the safety of foods is quite contrary to the perception of the public and the media that the food available now is less safe than it used to be. Improvements in public health have virtually eradicated the primarily food-borne infections that were until recently associated with considerable morbidity and mortality, and the common food-borne diseases currently encountered are usually associated with mild, self-limiting gastroenteritis. Studies of risk perception suggest that the public becomes alarmed by health threats to a degree disproportionate to the actual risk associated with the disease, and this concern is fuelled by the media, which turn health issues into health scares depending on the newsworthiness of the incidents.
Several food-borne epidemics in the developed world have raised concerns about food safety in recent years. These include, for instance, the Salmonella enteritidis phage type 4 (Se4) epidemic, which was attributed to the ability of Se4 to invade the oviduct of poultry and become deposited in the albumen of the egg; at the consumer level, outbreaks of the infection were linked to the use of raw egg in recipes or to cross-contamination from raw to cooked foods. Campylobacter infection is the most common food-borne disease in the United Kingdom, and the increase in its incidence may partly be explained by better ascertainment and reporting of cases. A more recent food scare was the emergence of Escherichia coli O157 in Scotland; factors in the emergence of this food-borne infection, which caused several deaths, include changes in husbandry and the movement of livestock, as well as the rapid growth of the fast-food industry and poor food hygiene in such environments. Listeria, another cause of food-borne disease, is a good example of the role of international trade and globalization in the spread of food-borne diseases.
In the developing world the issues of food safety relate to microbiological agents that contaminate food and water and spread disease rapidly in the warm, humid environments of these countries, aided by improper food hygiene practices, poor environmental sanitation, and inadequate regulation of food-related commerce. The safety of foods in the developing world is also compromised by the presence of toxins, such as aflatoxins resulting from mycotoxin contamination caused by poor food storage practices, or cyanogens in the diet resulting from inadequate preparation of staple foods such as cassava. In addition, the food chain in these poor countries is contaminated by pesticide and chemical residues, further compromising the safety of the food consumed.
Food, the bovine spongiform encephalopathy epidemic, and Creutzfeldt–Jakob disease
The most recent public health concern related to food in the United Kingdom and Europe has been the epidemic of bovine spongiform encephalopathy (BSE), with about 170 000 cases diagnosed among cattle by 1998 and the likelihood of over a million cattle having been infected with the causative agent, most of them slaughtered and eaten before they showed clinical signs of the disease. The concern among consumers was the probable link between BSE and the new variant of Creutzfeldt–Jakob disease (nvCJD), arising from the infection of humans with the agent responsible for the epidemic of BSE in cattle in the United Kingdom. The key risk factor identified in the case of BSE was the use of commercial concentrate feed and, in particular, the feeding of meat and bone meal as a protein-rich dietary supplement to calves. In the 1980s, largely as a result of commercial pressures, changes were made in the rendering process applied to the waste tissues of cattle and sheep. As a result, it is likely that a strain of scrapie (a transmissible spongiform encephalopathy endemic in British sheep) crossed from sheep to cattle in the meat and bone meal and adapted itself to infect cattle. An alternative explanation is that the infective agent of BSE was endemic at a low level and that the recycling of rendered waste from cattle allowed it to spread more widely and cause the epidemic. Whatever the cause, the changes in the rendering process and the recycling of waste for feed created the right environment for an epidemic of disease among cattle. The consumption of the meat of infected cattle is probably responsible for the increased incidence of CJD diagnosed in young adults and teenagers as nvCJD. Epidemiological evidence strongly links the BSE epidemic with the increasing incidence of nvCJD, although, given the long incubation period of the disease, it is not clear what the extent of the epidemic is likely to be in the future. BSE and nvCJD are good examples of changes in the food chain influencing an infective process, which is likely to be food-borne, resulting in considerable controversy and in consumer pressure to regulate the food industry more tightly.
Genetically modified foods
Another issue that has emerged over recent years, and has created a considerable degree of controversy, is the use of biotechnology to produce GM foods. Genetic modification of food crops can be used to reduce food losses by increasing resistance to drought, frost, diseases, and pests, and can help control weeds and reduce postharvest losses. Biotechnology can improve the nutritional value of foods, for example by increasing protein or micronutrient content or by reducing saturated fat content; it can also slow ripening so that foods retain their quality for longer. Biotechnology can increase both the yield and the quality of crops grown on existing farmland and thereby reduce the pressure on wildlife habitats. In the developed world, particularly in the United Kingdom and Europe, the opposition to GM foods is based largely on ecological arguments concerning the damage that may follow large-scale use of GM crops. In poor, developing countries the concerns relate more to the use of 'terminator gene' technology and the dependence on large multinationals for seeds and chemicals that small farmers will inherit. At the heart of this controversy and the raging debate is the gulf between the plant breeders and the seed and agrochemical industries, who promote biotechnology, and the campaigners, who argue that GM technology may have hazardous consequences for the environment. The debate is replete with paradoxes (Dixon 1999), and the climate of mistrust, some of it associated with the recent BSE and nvCJD scare, is obscuring the real issues and preventing objective decisions from being made about the production and consumption of GM foods.
Food labelling
An important source of information for the consumer about the food on the supermarket shelf is the label on the food product. Food labels provide information that may be of interest to the consumer, especially with regard to added chemicals (additives, pesticide residues, colouring and flavouring agents, and preservatives), fats, sugars, and energy content. Although about two-thirds of shoppers claim to read the information on the labels of new or unfamiliar food products to check their contents, this interest in labels does not mean that consumers always understand the information on them, and consumers are even more confused by the nutrition information panel that appears on many food labels.
Food label information is usually designed by experts. A prototype label has been produced by the Codex Alimentarius Commission, the joint FAO–WHO body charged with advising on international food standards, and this prototype is followed by food standards committees around the world. According to the prototype, the nutrients (energy, carbohydrate, protein, and fats) are listed according to their amounts per serving and per 100 g. Most consumers, however, have hardly any idea of what 100 g of a food represents, or for that matter what a normal or average serving is. A further problem is that these expert-designed labels are also beset with problems of terminology; an example is the term 'carbohydrate', which covers a wide range of compounds, including sugars and starches, with quite different health-related properties. Health benefits or nutritional claims are not meant to be part of food labels, and labels also do not provide information covering the ecological and ethical issues that may concern some consumers. More recently, the need to highlight the source or origin of foods, and in particular the labelling of GM sources of a food product, has been a serious concern of consumers. Food labelling is an important issue of public health concern and, despite the considerable progress made so far, there is much still to be achieved.
Functional foods
New food products are being marketed as health-enhancing or illness-preventing foods. These are called functional foods, 'pharmafoods', 'nutraceuticals', or novel foods. Functional foods are generally defined as food products that deliver a health benefit beyond providing nutrients. The health benefits may be conferred by a variety of production and processing techniques, including fortification of certain food products with specific nutrients, the use of phytochemicals and active micro-organisms, and the genetic modification of foods. The topic of functional foods is complex and controversial. An assumption implicit in functional foods and their health benefit claims is that the food supply needs to be fixed or doctored (or medicalized) on public health grounds: that the current food supply is in some way deficient, that habitual diets are inadequate, and that a technological fix will solve the problem. Viewed from the perspective of the proponents of functional foods, these novel foods may reduce health care expenditure by promoting good health, and functional foods are a legitimate nutrition education tool that will help inform consumers of the health benefits of certain food products. The opponents, on the other hand, rightly state that it is the total diet that is important for health. They believe that functional foods are a 'magic bullet' approach, which enables manufacturers to indulge in marketing hyperbole, exploit consumer anxiety, and essentially blur the distinction between food and drugs.
Emerging epidemic of diet-related chronic diseases and obesity in developing societies
One of the issues of considerable public health concern in developing countries is the emerging epidemic of diet-related chronic diseases and obesity in the midst of the persisting problem of malnutrition among children and adults. This burgeoning problem of obesity and adult-onset chronic diseases, such as CHD, hypertension, and NIDDM, together with an increase in the incidence of certain cancers, is determined by a range of factors that include changes in diet and lifestyles. Most developing countries, particularly those in rapid developmental transition, are in the midst of a demographic and epidemiological transition. Economic development and industrialization are accompanied by rapid urbanization. These developmental forces are bringing about changes in the social capital of these societies as well as increasing availability of food and changing lifestyles. The changes in food and nutrition are both quantitative and qualitative: there is not only access to more than adequate food among some sections of the population, but also a qualitative change indicative of an increase in fat intake. Lifestyle changes include a reduction in physical activity levels, which promotes the onset of obesity. Urbanization through migration, economic growth, and the globalization of trade are increasing income disparities, which further contribute to the problem. Poor consumer resistance and inadequate regulation compromise food safety and increase contaminants in the food chain. All these factors contribute in a complex manner to fuel the epidemic of diet-related non-communicable diseases, which are likely to emerge as a serious public health problem in the twenty-first century.
Food and nutrition in the prevention of diseases of public health importance
The public health approach to the prevention of nutrition and diet-related diseases requires the adoption of health-oriented nutrition and food policies for the whole population. In most developing countries, the first priority must be to ensure the production or procurement of an adequate food supply and its equitable distribution and availability to the whole population, along with the elimination of the various forms of nutritional deficiency, including protein–energy malnutrition and vitamin, mineral, and trace-element deficiencies. Efforts must also be made to improve the quality of food, which includes ensuring food safety while reducing spoilage and contamination, as well as diversifying the availability and use of foods. In agrarian societies, consideration must be given to the short- and long-term effects of agricultural policies on the income and buying power of small producers, and particular attention must be paid to the impact that the promotion of cash crops has on the availability and affordability of the principal dietary staples. The feasibility of fortifying foods as a mass intervention measure against localized or widespread deficiencies of iodine and iron also deserves special attention.
In developed countries, the burgeoning costs of tertiary health care related to the diagnosis and management of the increasingly common chronic diet-related diseases and their associated morbidity have had an impact. There is an increasing recognition of the need for prevention-oriented health and nutrition policies and for changes in behaviour and lifestyle to reduce the occurrence of these diseases. Some developed countries have been active in the field of public education, using national dietary guidelines as a major stimulus. It is important to remember that nutrition education of the public operates in an area where advice is given on a balance of probabilities rather than on irrefutable evidence or any degree of certainty. There is bound to be information that does not fit in with the consensus view, as the consensus is based on the balance of the available evidence; it is thus possible to appear to refute the expert viewpoint, which often seems to be a popular thing to do. It is important to recognize that the causes of these chronic diseases are complex and that dietary factors are only part of the explanation. Individuals differ in their susceptibility to the adverse health effects of specific dietary factors or of deficiencies of others. Within the context of public health, the focus is the health of the whole population, and interventions are aimed at lowering the average level of risk across it.
Changes in consumer preferences have emerged, initially among the upper socio-economic and more educated groups. Media attention, along with behavioural changes in food preferences and food choices, is in turn leading the industry to modify its systems for food production and processing. However, progress in changing consumer behaviour and preferences is by its nature rather slow and has, until recently, largely occurred without support from public policies in any sector but health. The process of changing unsatisfactory dietary practices, and thus promoting health, is not easy to achieve either socially or politically. Despite these limitations, the occurrence of, and mortality associated with, some diet-related chronic diseases such as CHD have declined, reflecting possible changes in the lifestyles of the population.
The dynamic relationship between changes in a population's diet and changes in its health is well illustrated by two situations. One is the change in the disease and mortality profiles of migrant populations moving from a low-risk to a high-risk environment, exemplified by the change in disease pattern of Japanese migrants to the United States. The other, more important and rapid, change is seen within a country as rural-to-urban migration occurs or, more frequently, as a developing country undergoes rapid industrialization and economic development, and in the process acquires the dietary change characteristic of a developed country along with its profile of disease, morbidity, and premature mortality. Several other developing countries have urban pockets of affluent diets and lifestyles, and similar disease burdens, in the midst of problems typical of a poor developing country. Countries in such transition, such as India and Brazil, bear the dual burden of the diseases of affluence and the widespread health problems of a poor country. Developing countries can hence benefit by learning from the experience of dietary change and its adverse health effects in the developed world; their aim should be to avoid the diseases and premature deaths related to the affluent diet and lifestyle. By recognizing this problem, governments of developing countries can gain for their people the health benefits of avoiding nutritional deficiencies without at the same time encouraging the development of the diet-related non-communicable diseases that invariably accompany economic and technological development.
It is thus possible for a country to achieve a reduction in infant and childhood mortality and an increase in life expectancy by pursuing health and nutrition policies that provide adequate and equitable access to hygienic and nutritious food while minimizing the occurrence of diet-related chronic diseases. This in turn will help avoid the social and economic costs of morbidity and premature death in middle age, the period of highest economic activity and productivity for the nation and for society at large. If such a socially and economically desirable goal is to be achieved, then national governments in both developing and developed countries must aim at population-based dietary change by providing suitable dietary guidelines (WHO Study Group 1990). In pursuit of this objective, the FAO and WHO jointly convened a consultation in 1995, the overall purpose of which was to establish the scientific basis for developing and using food-based dietary guidelines (FBDGs) (FAO–WHO 1996).
The development of food-based dietary guidelines
FBDGs are developed and used in order to improve the food consumption patterns and nutritional well-being of individuals and populations. Guidelines are needed by all countries, given the important part that food and dietary practices play in nutrition-related disorders, whether due to deficiencies or to excesses. FBDGs can address specific health issues without the need to understand fully the biological mechanisms that may link constituents of food and diet with disease; they do, however, take into account the considerable epidemiological data linking specific food consumption patterns with a low or high incidence of certain diet-related diseases.
Disseminating information and educating the public through FBDGs is a 'user-friendly' approach, as consumers think in terms of foods rather than nutrients. FBDGs provide a means of nutrition education for the public expressed mostly in terms of foods. They are intended for use by individual members of the general public, are written in ordinary language, and, as far as possible, avoid the use of the technical terms of nutritional science. FBDGs will vary with the population group and have to take into account local or regional dietary patterns, practices, and culture. It is important to recognize that more than one dietary pattern is consistent with good health; this allows the development of food-based strategies that are appropriate for the local region and that take local dietary practices into consideration.
FBDGs can serve as an instrument of nutrition policies and programmes. As they are based directly on diet and health relationships of particular relevance to the individual country or region, they can help address issues of public health concern, whether these relate to dietary insufficiency or to dietary excess. Food and diet are not the only components of a healthy lifestyle, and it is important that other relevant messages related to health promotion are integrated into dietary guidelines.
Chapter References
ACC–SCN (1997). Third report on the world nutrition situation. Sub-Committee on Nutrition, WHO, Geneva.
Arroyave, G., Mejia, L.A., and Aguilar, J.R. (1981). The effect of vitamin A fortification of sugar on serum vitamin A levels of pre-school Guatemalan children: a longitudinal evaluation. American Journal of Clinical Nutrition, 34, 41–9.
Barker, D.J.P. (1995). Fetal origins of coronary heart disease. British Medical Journal, 311, 171–4.
Boyle, P., Zaridze, D.G., and Smans, M. (1985). Descriptive epidemiology of colorectal cancer. International Journal of Cancer, 36, 9–18.
Briend, A. and Golden, M.H.N. (1993). Treatment of severe child malnutrition in refugee camps. European Journal of Clinical Nutrition, 47, 9–18.
Burgi, H., Supersaxo, Z., and Selz, B. (1990). Iodine deficiency diseases in Switzerland one hundred years after Theodor Kocher’s survey. A historical review with some new goitre prevalence data. Acta Endocrinologica, 123, 577–90.
Callender, J., Grantham-McGregor, S., Walker, S., and Cooper, E. (1993). Developmental levels and nutritional status of children with the Trichuris dysentery syndrome. Transactions of the Royal Society of Tropical Medicine and Hygiene, 87, 528–9.
Cappuccio, F.P. and MacGregor, G.A. (1991). Does potassium supplementation lower blood pressure? A meta-analysis of published trials. Journal of Hypertension, 9, 465–73.
de Onis, M., Monteiro, C., Akre, J., and Clugston, G. (1993). The worldwide magnitude of protein-energy malnutrition: an overview from the WHO Global Database on Child Growth. Bulletin of the World Health Organization, 71, 703–12.
de Onis, M., Blossner, M., and Villar, J. (1998). Levels and patterns of intrauterine growth retardation in developing countries. European Journal of Clinical Nutrition, 52, S5–15.
Dixon, B. (1999). The paradoxes of genetically modified foods. British Medical Journal, 318, 547–8.
Doll, R. and Peto, R. (1981). The causes of cancer. Oxford University Press.
FAO (1985). The fifth world food survey 1985. Food and Agriculture Organization, Rome.
FAO (1995). Dimensions of need: an atlas of food and agriculture. Food and Agriculture Organization, Rome.
FAO–WHO (1996). Preparation and use of food-based dietary guidelines. World Health Organization, Geneva.
Glieberman, L. (1973). Blood pressure and dietary salt in human populations. Ecology of Food and Nutrition, 2, 143–56.
Golden, M.H.N., Briend, A., and Grellety, Y. (1995). Report of a meeting on supplementary feeding programmes with particular reference to refugee populations. European Journal of Clinical Nutrition, 49, 137–45.
Grantham-McGregor, S. (1995). A review of studies of the effect of severe malnutrition on mental development. Journal of Nutrition, 125, 2232S–8S.
Hetzel, B.S. (1987). Progress in the prevention and control of iodine deficiency disorders. Lancet, ii, 266–7.
Intersalt Cooperative Research Group (1988). Intersalt: an international study of electrolyte excretion and blood pressure. British Medical Journal, 298, 920–4.
James, W.P.T., Ferro-Luzzi, A., and Waterlow, J.C. (1988). Definition of chronic energy deficiency in adults. Report of Working Party of IDECG. European Journal of Clinical Nutrition, 42, 969–81.
Keys, A. (1980). Seven countries: a multivariate analysis of death and coronary heart disease. Harvard University Press, Cambridge, MA.
Khanum, S., Ashworth, A.H., and Huttly, S.R.A. (1994). Controlled trial of three approaches to the treatment of severe malnutrition. Lancet, 344, 1728–32.
Kritchevsky, D. (1986). Diet, nutrition and cancer: the role of fibre. Cancer, 58 (Supplement 8), 1830–6.
Law, M.R., Frost, C.D., and Wald, N.J. (1991). By how much does dietary salt reduction lower blood pressure? British Medical Journal, 302, 811–24.
Law, M.R., Wald, N.J., and Thompson, S.G. (1994). By how much and how quickly does reduction in serum cholesterol concentration lower risk of ischaemic heart disease? British Medical Journal, 308, 367–72.
Lissner, L. and Heitmann, B.L. (1995). Dietary fat and obesity: evidence from epidemiology. European Journal of Clinical Nutrition, 49, 79–90.
McKeown, T. (1976). The modern rise of population. Edward Arnold, London.
MacMahon, S., Peto, R., Cutler, J., et al. (1990). Blood pressure, stroke and coronary heart disease. Lancet, 335, 765–74.
Martin, M.J., Hulley, S.B., Browner, W.S., et al. (1986). Serum cholesterol, blood pressure and mortality: implications from a cohort of 361 662 men. Lancet, ii, 933–6.
Muhilal, P.D., Idjradinata, Y.R., and Karyadi, D. (1988). Vitamin A fortified monosodium glutamate and health, growth and survival of children: a controlled field trial. American Journal of Clinical Nutrition, 48, 1271–6.
Pharoah, P.O.D. and Connolly, K.J. (1987). A controlled trial of iodinated oil for the prevention of endemic cretinism: a long-term follow-up. International Journal of Epidemiology, 16, 68–73.
Pollitt, E. (1991a). Effects of diet deficient in iron on the growth and development of preschool children. Food and Nutrition Bulletin, 13, 110–18.
Pollitt, E. (1991b). Iron deficiency and cognitive function. Annual Review of Nutrition, 13, 521–37.
Reutlinger, S. (1982). World Bank research on the hunger dimension of the food problem. Research News, World Bank, pp. 3–4.
Rugg-Gunn, A.J. (1993). Nutrition and dental health. Oxford University Press.
Scottish Office Home and Health Department (1993). The Scottish diet: report of a working party to the Chief Medical Officer in Scotland. The Scottish Office, Edinburgh.
Shetty, P.S. (1994). Assessing malnutrition in the community. The Biochemist, 16, 21–4.
Shetty, P.S. and James, W.P.T. (1994). Body mass index: an objective measure for the estimation of chronic energy deficiency in adults. FAO Food and Nutrition Paper. Food and Agriculture Organization, Rome.
Sreebny, L.M. (1982). Sugar availability, sugar consumption and dental caries. Community Dentistry and Oral Epidemiology, 10, 1–7.
Stephenson, L.S., Latham, M.C., Adams, E.J., Kinoti, S.N., and Pertet, A. (1993). Physical fitness, growth and appetite of school boys with hookworm, Trichuris trichiura, and Ascaris lumbricoides infections are improved four months after a single dose of albendazole. Journal of Nutrition, 123, 1036–46.
Stoltzfus, R.J., Chwaya, H.M., Tielsch, J.M., Schulze, K.J., Albonico, M., and Savioli, L. (1997). Epidemiology of iron deficiency anaemia in Zanzibari schoolchildren: the importance of hookworms. American Journal of Clinical Nutrition, 65, 153–9.
UNICEF (1997). State of the world’s children. UNICEF, New York.
Viteri, F.E. (1997). Iron supplementation for control of iron deficiency in populations at risk. Nutrition Reviews, 55, 195–209.
Waterlow, J.C. (1992). Protein-energy malnutrition. Edward Arnold, London.
West, K.P. and Sommer, A. (1985). Delivery of oral doses of vitamin A to prevent vitamin A deficiency and nutritional blindness. Food Reviews International, 1, 355–418.
Whittemore, A.S., Wu-Williams, A.H., Lee, M., et al. (1990). Diet, physical activity and colorectal cancer among Chinese in North America and China. Journal of the National Cancer Institute, 82, 915–26.
WHO (1991). Prevention of childhood blindness. World Health Organization, Geneva.
WHO (1995). Physical status: the use and interpretation of anthropometry. World Health Organization, Geneva.
WHO (1997). World health report. World Health Organization, Geneva.
WHO (1998). Obesity: preventing and managing the global epidemic. Report of a WHO consultation on obesity. World Health Organization, Geneva.
WHO Expert Committee (1982). Prevention of coronary heart disease. Technical Report Series. World Health Organization, Geneva.
WHO Study Group (1990). Diet, nutrition and the prevention of chronic disease. WHO Technical Report Series 797. World Health Organization, Geneva.
WHO–UNICEF–ICIDD (1993). Micronutrient Deficiency Information System (MDIS). Global prevalence of iodine deficiency disorders. MDIS Working paper. Joint WHO–UNICEF–ICIDD publication.
Willett, W.C. (1989). The search for causes of breast and colon cancer. Nature, 338, 389–94.
Willett, W.C., Stampfer, M.J., Colditz, G.A., Rosner, B.A., and Speizer, F.E. (1990). Relation of meat, fat and fibre intake to the risk of colon cancer in a prospective study among women. New England Journal of Medicine, 323, 1664–72.
Willett, W.C., Hunter, D.J., Stampfer, M.J., et al. (1992). Dietary fat and fibre in relation to risk of breast cancer. Journal of the American Medical Association, 268, 2037–44.
World Cancer Research Fund (1997). Food, nutrition and the prevention of cancer: a global perspective. World Cancer Research Fund and American Institute for Cancer Research, Washington, DC.
