Monthly Archives: April 2015

Blood fats may increase prostate cancer recurrence risk

Elevated cholesterol and triglycerides may increase the risk for prostate cancer recurrence


American Association for Cancer Research News, 10/13/2014

Higher levels of total cholesterol and triglycerides, two types of fat, in the blood of men who underwent surgery for prostate cancer were associated with an increased risk of disease recurrence, according to a study published in Cancer Epidemiology, Biomarkers & Prevention.

“While laboratory studies support an important role for cholesterol in prostate cancer, population-based evidence linking cholesterol and prostate cancer is mixed,” said Emma Allott, PhD, postdoctoral associate at Duke University School of Medicine in Durham, North Carolina. “Understanding associations between obesity, cholesterol, and prostate cancer is important given that cholesterol levels are readily modifiable with diet and/or statin use, and could therefore have important, practical implications for prostate cancer prevention and treatment.”

“Our findings suggest that normalization, or even partial normalization, of serum lipid levels among men with dyslipidemia may reduce the risk of prostate cancer recurrence,” said Allott.

Misrepresentation of the risk of ovarian cancer among women using menopausal hormones

There is a fair bit of evidence that HRT does not contribute to ovarian cancer. Fortunately, in all the women I have treated with BHRT over the last 15 years, not one has developed ovarian cancer. However, I am always alert and on the lookout for it (see an earlier post for the signs of ovarian cancer).
Maturitas. 2015 Mar 31. pii: S0378-5122(15)00608-8. doi: 10.1016/j.maturitas.2015.03.018. [Epub ahead of print]

Misrepresentation of the risk of ovarian cancer among women using menopausal hormones. Spurious findings in a meta-analysis.

Abstract

BACKGROUND:

Based on a meta-analysis of 52 studies, and principally on a meta-analysis of 17 follow-up studies, it has been claimed that current-or-recent use (last use <5 years previously) of menopausal hormones causes ovarian cancer, even if the duration of use was <5 years, and that women aged about 50 years who use hormones for >5 years have about one extra case per 1000 users, and one extra fatal case per 1700 users.

OBJECTIVE:

To evaluate the validity of the evidence.

METHODS:

Generally accepted epidemiological principles of causation were applied to the evidence.

FINDINGS:

The study base included hysterectomised women, an unknown proportion of whom were oophorectomised, and not at risk for ovarian cancer. The findings did not satisfy the criteria of time order, bias, confounding, strength of association, dose-response, duration-response, consistency, and biological plausibility.

CONCLUSIONS:

The meta-analysis did not establish that current-or-recent use of menopausal hormones causes ovarian cancer. The strong likelihood is that early symptoms of as yet undiagnosed ovarian cancer “caused” current-or-recent short-duration hormone use, not the reverse. The representation of the number of extra cases, and fatal cases, among hormone users was misleading and alarmist.


Nature, not nurture, holds key to gut bacteria

12 August 2014, 6.26am AEST


Bacterial communities in the gut assemble within weeks of birth in distinct, patterned progressions. Flickr: bradleypjohnson, CC BY

The types of bacteria that colonise an infant’s developing gut are influenced more by internal development than childbirth or early nutrition, according to a study published today in the journal PNAS.

Intestinal bacteria, collectively known as the microbiome, are vital for good health throughout life because they affect everything from daily digestion to the long-term functioning of our immune systems.

But little is understood about how the gut of a newborn infant transitions from being sterile to becoming densely inhabited with bacteria.

Researchers from the Washington University School of Medicine discovered these bacterial communities, which assemble within the gut just weeks after birth, colonise in distinct, patterned progressions.

Their finding suggested abrupt shifts in the different types of gut bacteria were closely associated with the infant’s developmental stage.

Neurogastroenterology expert and senior lecturer at RMIT Paul Bertrand said he thought early gut bacteria would change with the environment and was surprised to see the level of change purely associated with gut development.

“I think this study is significant because it highlights that the gut has an active role in choosing the sorts of bacteria that live there. It’s not just a passive progression as you fill up with bacteria,” Bertrand said.

“It seems more likely there is an overall plan that, as the immune system in the gut develops, the bacteria are encouraged to either establish themselves or fade away. I think the study is very useful because it gives people a better understanding of how the gut is managing colonies of bacteria.”

Cheryl Power from the University of Melbourne’s Department of Microbiology and Immunology described the microbiome as the collection of microorganisms that live both in and on the human body, made up of thousands of strains of bacteria.

While we do not benefit directly from the activities of some microbes in our gut, others are extremely helpful because they provide important vitamins, discourage colonisation by disease-producing or pathogenic bacteria and play an important role in the development of the immune system, she said.

“I think we’ve grossly underestimated the importance of the bugs that live in and on us, and regard them all as nasty germs when in fact we’re discovering they have hugely important functions,” she said.

“We already know there is a relationship between our microbiome and our immune system. It’s just a matter now of looking at how early this interaction between the two systems starts.”

The palaeolithic diet and the unprovable links to our past

28 November 2014, 6.25am AEDT


Harking back to the diet of the caveman. Flickr/George, CC BY-NC-SA

We still hear and read a lot about how a diet based on what our Stone Age ancestors ate may be a cure-all for modern ills. But can we really run the clock backwards and find the optimal way to eat? It’s a largely impossible dream based on a set of fallacies about our ancestors.

There are a lot of guides and books on the palaeolithic diet, the origins of which have already been questioned.

It’s all based on an idea that’s been around for decades in anthropology and nutritional science; namely that we might ascribe many of the problems faced by modern society to the shift by our hunter-gatherer ancestors to farming roughly 10,000 years ago.

Many advocates of the palaeolithic diet even claim it’s the only diet compatible with human genetics and contains all the nutrients our bodies apparently evolved to thrive on.

While it has a real appeal, when we dig a little deeper into the science behind it we find the prescription for a palaeolithic diet is little more than a fad and might be dangerous to our health.

Mismatched to the modern world

The basic argument goes something like this: over millions of years natural selection designed humans to live as hunter-gatherers, so we are genetically “mismatched” for the modern urbanised lifestyle, which is very different to how our pre-agricultural ancestors lived.

The idea that our genome isn’t suited to our modern way of life began with a highly influential article by Eaton and Konner published in the New England Journal of Medicine in 1985.

Advocates of the palaeolithic diet, traceable back to Eaton and Konner’s work, have uncritically assumed that a gene-culture mismatch has led to an epidemic of “diseases of civilisation”.

Humans are, it’s argued, genetically hunter-gatherers and evolution has been unable to keep pace with the rapid cultural change experienced over the last 10,000 years.

These assumptions are difficult to test and, in some cases, outright wrong.

What did our Stone Age ancestors eat?

Proponents of the palaeolithic diet mostly claim that science has a good understanding of what our hunter-gatherer ancestors ate.

Let me disabuse you of this myth straight away – we don’t – and the further back in time we go, the less we know.

What we think we know is based on a mixture of ethnographic studies of recent (historical) foraging groups, reconstructions based on the archaeological and fossil records and, more recently, genetic investigations.

We need to be careful because in many cases these historical foragers lived in “marginal” environments that were not of interest to farmers. Some represent people who were farmers but returned to a hunter-gatherer economy while others had a “mixed” economy based on wild-caught foods supplemented by bought (even manufactured) foods.

The archaeological and fossil records are strongly biased towards things that will preserve or fossilise and in places where they will remain buried and undisturbed for thousands of years.

What this all means is we know little about the plant foods and only a little bit more about some of the animals eaten by our Stone Age ancestors.

Many variations in Stone Age lifestyle

Life was tough in the Stone Age, with high infant and maternal mortality and short lifespans. Seasonal shortages in food would have meant that starvation was common and may have been an annual event.

People were very much at the mercy of the natural environment. During the Ice Age, massive climate changes would have resulted in regular dislocations of people and the extinction of whole tribes periodically.

Strict cultural rules would have made very clear the role played by individuals in society, and each group was different according to traditions and their natural environment.

This included gender-specific roles and even rules about what foods you could and couldn’t eat, regardless of their nutritional content or availability.

For advocates of the palaeolithic lifestyle, life at this time is portrayed as a kind of biological paradise, with people living as evolution had designed them to: as genetically predetermined hunter-gatherers fit for their environment.

But when ethnographic records and archaeological sites are studied we find a great deal of variation in the diet and behaviour, including activity levels, of recent foragers.

Our ancestors – and even more recent hunter-gatherers in Australia – exploited foods as they became available each week and every season. They ate a vast range of foods throughout the year.

They were seasonally mobile to take advantage of this: recent foraging groups moved camp on average 16 times a year, but within a wide range of two to 60 times a year.

There seems to have been one universal, though: all people ate animal foods. How much depended on where on the planet you lived: rainforests provided few mammal resources, while the Arctic provided very little else.

Studies show on average about 40% of their diet comprised hunted foods, excluding foods gathered or fished. If we add fishing, it rises to 60%.

Even among Arctic people such as the Inuit, whose diet at certain times consisted entirely of animal foods, geneticists have failed to find any mutations enhancing people’s capacity to survive on such an extreme diet.

Research from anthropology, nutritional science, genetics and even psychology now also shows that our food preferences are partly determined in utero and are mostly established during childhood from cultural preferences within our environment.

The picture is rapidly emerging that genetics play a pretty minor role in determining the specifics of our diet. Our physical and cultural environment mostly determines what we eat.

Evolution didn’t end at the Stone Age

One of the central themes of any palaeolithic diet is the argument that our bodies have not evolved much over the past 10,000 years and so are not adapted to agriculture-based food sources. This is nonsense.

There is now abundant evidence for widespread genetic change that occurred during the Neolithic or with the beginnings of agriculture.

Large-scale genomic studies have found that more than 70% of protein-coding gene variants and around 90% of disease-causing variants in living people whose ancestors were agriculturalists arose in the past 5,000 years or so.

Textbook examples include genes associated with lactose intolerance, starch digestion, alcohol metabolism, detoxification of plant food compounds and the metabolism of protein and carbohydrates: all mutations associated with a change in diet.

The regular handling of domesticated animals, and crowded living conditions that eventually exposed people to disease-bearing insects and rodents, led to an assault on our immune system.

It has even been suggested that the light hair, eye and skin colour seen in Europeans may have resulted from a diet poor in vitamin D among early farmers, and the need to produce more of it through increased UV light exposure and absorption.

So again, extensive evidence has emerged that humans have evolved significantly since the Stone Age and continue to do so, despite some uninformed commentators still questioning whether evolution in humans has stalled.

A difficult choice

In the end, the choices we make about what to eat should be based on good science, not some fantasy about a lost Stone Age paradise.

In other words, like other areas of preventative medicine, our diet and lifestyle choices should be based on scientific evidence not the latest, and perhaps even harmful, commercial fad.

If there is one clear message from ethnographic studies of recent hunter-gatherers it’s that variation – in lifestyle and diet – was the norm.

There is no single lifestyle or diet that fits all people today or in the past, let alone the genome of our whole species.

What happens when kids don’t eat breakfast?

4 November 2014, 6.27am AEDT


If you don’t eat breakfast in the morning, it’s likely your kids won’t either. Kris Kesiak/Flickr, CC BY-NC

How many times have we heard that breakfast is the most important meal of the day? There’s overwhelming evidence to suggest that it is, especially for children. Eating breakfast has been shown to improve children’s behaviour at school, and poor eating patterns can impair adolescent growth and development.

Put simply, a good quality breakfast helps provide young people with the energy they need for the day, and the nutrients they need to grow and develop.

Fuel for the school day

In the short term, eating a good quality breakfast can increase feelings of alertness and motivation to learn. Children’s high metabolic turnover and rapid growth rates mean they need optimal nutrition. They place higher demands on their glycogen (energy) stores overnight as they sleep and, because they generally sleep longer than adults, go for longer without food. A nutritious breakfast is therefore especially important to provide fuel for the oxidation of glucose.

When blood glucose levels are low, hormones such as adrenalin and cortisol are released which can cause feelings of agitation and irritability. This can then affect a child’s concentration and may even cause destructive outbursts. Children who don’t eat breakfast struggle to summon enough energy in the morning to cope with the demands of school.

Long-term effects

Eating a good breakfast can lead to better academic performance and higher enjoyment of school. Children who regularly skip breakfast are also more likely to be disruptive in class or absent from school. And eating breakfast regularly can teach children to associate feelings of well-being with being less hungry.

In the long-term, eating breakfast affects a child’s health, which in turn will have a positive effect on brain performance. Research has found that a good nutritional profile can lead to sustained improved performance. This would be much harder to achieve if kids skip breakfast.

Eating a breakfast with a range of food groups is linked to better mental health in young people. Ralph Daily/Flickr, CC BY

There is also an association with mental health and a good quality breakfast. Common breakfast foods such as milk, fortified breakfast cereals and bread are good sources of nutrients that affect brain function. Research has found that eating a breakfast with a variety of food groups that increase the intake of vitamins and minerals at the start of the day can lead to better mental health in adolescents.

Children who skip breakfast are also more likely to snack. Snacks eaten between meals can provide up to one-quarter of the daily energy intake in some adolescent populations. Since snacking is often associated with energy dense food linked to the development of childhood overweight and obesity, educating children into a good breakfast routine at the start of the day is essential.

Not enough kids are eating breakfast

Breakfast skipping is common among adolescents and adults in western countries. Teen girls are the least likely to eat in the morning. A study of 10,000 children and young people found that approximately 20% of children and more than 31% of adolescents skipped breakfast regularly.

The reasons given for not eating breakfast are usually poor time management or lack of appetite. But it’s also linked to parental influence: whether a parent does or doesn’t eat breakfast affects whether their children will.

Health-compromising behaviours and unhealthy lifestyles have also been linked with breakfast skipping in young people. Smoking, alcohol and caffeine consumption are more likely among individuals who rarely eat breakfast.

What can we do about it?

Because a good breakfast is so important for mental alertness among children, breakfast clubs are becoming increasingly common in primary schools. A review of the efficacy of school feeding programs found that many are implemented to address nutritional deficiencies that affect brain growth and performance in students.

School breakfast programs are not new in Australia and can be traced back to the late 1970s. The Australian Red Cross’ Good Start Breakfast Club has been developed in an attempt to combat food insecurity and disadvantage in low-socio-economic areas. Programs like this help local communities develop breakfast programs that suit schools’ needs by providing fact sheets on issues such as funding the programs and sourcing volunteers.

But there’s limited evidence as to how well school breakfast clubs do in increasing children’s breakfast consumption. And some researchers suggest there is even a lack of solid evidence on the benefits of eating breakfast on cognitive or academic performance. They say that school breakfast programs should not be used as an argument to bolster school performance.

Eating habits formed in adolescence continue into adulthood. Therefore, poor dietary patterns among young people have important implications for their life-long health and well-being. Continued education around the significance of eating a nourishing breakfast for children, adolescents and parents is essential.

Because parental influences can determine whether children and adolescents eat breakfast, encouraging parents to eat breakfast regularly can play an important role in getting kids to eat in the morning.

Wake up and smell the coffee … it’s why your cuppa tastes so good

21 October 2014, 6.12am AEDT


The smell of freshly brewed coffee is hard to beat. Michael Yan/Flickr, CC BY-NC-ND

Welcome to our three-part series Chemistry of Coffee, where we unravel the delicious secrets of one of the most widely consumed drinks in the world. So while you enjoy your morning latte, long black or frappe, read on to find out why it tastes so good – you might be surprised to discover what’s in your cup.


Most of what we taste we actually smell. The only sensations that we pick up in our mouth are sweet, sour, bitter, umami and salty. Without its smell, coffee would have only a sour or bitter taste due to the organic acids. Try it with your next cup of coffee – hold your nose as you take your first sip.

The rich satisfying sensation of coffee is almost entirely due to the volatile compounds produced when we roast coffee beans.

The compounds formed in the roasting process are very similar to those formed in any other cooking process. The smell of baking bread comes from compounds produced when a sugar reacts with a protein in what is called a Maillard reaction.

Not every scent is as welcoming as freshly baked bread, though. Our sense of smell has developed over millennia to detect dangerous compounds.

Cadaverine and putrescine, produced in rotting meat, can be detected by our nose at very low concentrations. The same can be said of sulphur-containing compounds such as hydrogen sulphide – rotten egg gas – which our nose detects at levels of parts per billion.

Coffee has some of the same ‘scent’ compounds as freshly baked bread. jm_photos/flickr, CC BY

The upshot of this is that we do not detect all compounds in our surroundings to the same extent. For example, to us water is completely odourless although it may be very concentrated in the atmosphere.

Odour chemists have developed a measure called the odour activity value, which indicates how strongly we respond to a particular compound. This influences how we experience a complex mixture of stimuli.
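
As a rough illustration of the concept, the odour activity value is commonly calculated as a compound’s concentration divided by its odour detection threshold. Here is a minimal sketch of that calculation; the compound names are volatiles mentioned in this article, but the concentrations and thresholds are hypothetical placeholders, not measured values:

```python
# Rough sketch of the odour activity value (OAV): a compound's
# concentration divided by its odour detection threshold. The numbers
# below are illustrative placeholders, not measured values.

compounds = {
    # name: (concentration, odour threshold), both in ug/kg -- hypothetical
    "2-furfurylthiol": (1.1, 0.01),
    "pyridine": (20.0, 2000.0),
    "5-methylfurfural": (50.0, 1100.0),
}

for name, (concentration, threshold) in compounds.items():
    oav = concentration / threshold
    # An OAV above 1 means the compound sits above its detection
    # threshold and is likely to contribute to the perceived aroma.
    print(f"{name}: OAV = {oav:.2f}")
```

On these made-up numbers, a trace compound with a very low threshold dominates the aroma while an abundant compound with a high threshold barely registers, which is why we do not detect all compounds in a mixture to the same extent.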

Flavourists and perfumists have developed a series of descriptors, or words that are used to describe a particular smell. Using gas chromatography equipped with a sniffer port, chemists are able to smell individual compounds as they come off the gas chromatography column and apply a description to what they experience.

Words such as fruity, earthy, flowery, caramel-like, spicy and meaty are used to describe the odour of individual compounds. It is this complex mixture of volatile organic compounds that we can identify with a particular food. The smell of baking bread can easily be distinguished from the smell of cooking cabbage; a lamb roast from a pork roast.

Yet it is not one compound that is responsible for the odour that we experience, but a complex mixture of hundreds of different compounds.

What we smell in coffee

Approximately 800 different compounds are produced in the coffee-roasting process. These thermal degradation reactions decompose sugars and proteins to form the volatile compounds that we smell.

Most of these reactions take place within the thick walls of coffee bean cells, which act as tiny pressure chambers. Not all of these 800 compounds cause the same response in the olfactory membrane in your nose, though.

Green (unroasted) coffee tastes very grassy when brewed. You still get the organic acids and caffeine in the brew but it lacks the full sensation because there are few volatile compounds due to the lack of roasting.

The profile of roast coffee includes only 20 major compounds, but it is the influence of some of the minor compounds that determines the overall taste we experience.

When chemists analyse the volatile compounds in coffee, they encounter a huge range of different odour qualities.

Some of the nitrogen-containing compounds such as pyridine can actually smell quite foul, while others can smell quite fruity.

Other compounds have descriptors such as putrid or rancid. One compound, 5-methylfurfural, is described only as coffee-like. But it is the rich mixture of hundreds of different volatile compounds that, when we smell it, can only be described as “coffee”.

Informed Aussies less likely to want a prostate cancer test

3 February 2015, 3.02pm AEDT


Well-informed men see prostate cancer testing as an individual decision rather than a public health priority. Miguel Pires da Rosa/Flickr, CC BY-SA

Screening for prostate cancer using the prostate specific antigen (PSA) test for men with no symptoms is controversial: experts are divided, and Australians are not routinely well-informed.

Prostate cancer is a slow-growing cancer. Men are more likely to die with it – and not ever know they had it – than die from it.

Although there is some evidence to suggest PSA testing might improve prostate cancer survival, there is also evidence of over-diagnosis. This is where men are diagnosed and treated for a cancer that would never have harmed them.

Our recently published research shows that after accessing high-quality, evidence-based information about the risks and benefits, men are less likely to want a PSA test.

Why the confusion?

Many countries do not recommend testing for prostate cancer in men without symptoms. While there is no national screening program, Australian health practitioners conduct opportunistic testing for men with no symptoms.

Because PSA is a simple blood test, it is often added to a man’s blood work-up conducted for other reasons such as cholesterol or fasting blood glucose. By adding a PSA request to the pathology form (sometimes without their knowledge), asymptomatic men are being screened for cancer: whether they want to be or not.

Late last year the National Health and Medical Research Council of Australia (NHMRC) released its Information for Health Practitioners report. Most guidelines (including the NHMRC’s) recommend men should be informed of both the potential benefits and harms associated with screening for prostate cancer, but this is far from routine.

Asking the men

To find out more about whether information about the harms and benefits of PSA screening made a difference to men’s health decisions, we used a novel methodology known as a citizen – or community – jury.

We randomly allocated 27 volunteer men aged 50 to 70 to either a two-day “community jury” group or to a control group. The control group was given PSA fact sheets available from the Cancer Council Australia and Andrology Australia websites.

After accessing good evidence, men are less likely to want to have a PSA test. Tony Alter/Flickr, CC BY

The community jury group also received these fact sheets plus two days of expert information, questions and deliberation. On Saturday, three experts gave “evidence”:

  • Professor Jim Dickinson, Professor of Family Medicine, University of Calgary, gave information about basic prostate biology and cancer
  • Professor Paul Glasziou, general practitioner and expert in evidence-based medicine (and co-author of this article), gave “evidence” about screening for prostate cancer, emphasising the potential harms
  • Professor Frank Gardiner, urologist and prostate cancer expert at the University of Queensland, gave “evidence” emphasising the potential benefits.


Men in the community jury group were then able to ask the experts questions about the information.

On Sunday, men in the community jury discussed the information with a facilitator, put forward any further questions to the experts, and then deliberated as a group without a facilitator on two “community questions”:

  1. Should government campaigns be provided on PSA screening and, if so, what information should be included in those campaigns?
  2. What do you as a group of men think about a government-organised invitation program for testing for prostate cancer?

We also asked the men whether they would personally get screened for prostate cancer using a PSA test.

Information matters

Most men were surprised by the information. The men in the community jury voted unanimously against a government campaign targeting the public about PSA screening and against an organised invitation program. The men agreed that:

We don’t want the government to invite us or our mates to come along and get tested. We don’t want that to happen because we don’t want our mates to worry. We don’t want people to make a fuss, we don’t want our government to waste our money.

Instead, and unprompted, the men recommended a campaign that targeted general practitioners to assist them in providing consistent information about the harms and benefits for PSA screening to their patients.

Giving a voice to people impacted by policy is essential to good governance. hapal/Flickr, CC BY-ND

We asked all men before and after the study whether they intended to be tested for prostate cancer using a PSA test in the future.

Before the study, men in both groups had similar intentions: most intended to be tested.

After the study, more men in the community jury had changed their minds about being screened for prostate cancer (fewer intended to have the test). This remained the case when they were asked again three months later. As one participant stated:

I was of the opinion when I came in that every man over 60 should be screened as a matter of fact, but now I think I’ve changed my ideas, that it’s a personal decision.

Not all men in the community jury group changed their individual decision to be tested, but none wanted information campaigns or invitation programs for screening. The men were clearly able to differentiate between a public health priority and an individual health decision.

Community juries and health policies

Health screening tests are on the rise. Depending on your age, you may be tested for a range of “conditions” for which you have no symptoms, such as osteoporosis or vitamin D deficiency. But while all carry the risk of over-diagnosis, few screening tests offer proven benefits.

To develop appropriate screening policies, and determine where to direct health funds in a cash-strapped fiscal climate, we need to hear from the people impacted by these policies. Community juries offer a platform for governments to hear the voice of people who have been well informed by experts and consider the information according to both individual values and social benefits.

Community juries have already been used in South Australia to inform government policy and are a promising tool to help shape health policy across local, state and national jurisdictions.

A Commentary on a recent update of the ovarian cancer risk attributable to menopausal hormone therapy.

One of the concerns women have about HRT is the risk of ovarian cancer. The study below indicates that this risk is very small. Remember, this study was done on the synthetic HRT that most women receive in menopause; the risk is likely to be even lower if the dose is individually adjusted and a more natural form of HRT is used.

Climacteric. 2015 Mar 27:1-3.

A Commentary on a recent update of the ovarian cancer risk attributable to menopausal hormone therapy.

Abstract

The incidence of ovarian cancer is tenfold lower than that of breast cancer. The goal of the recently published meta-analysis by Beral and colleagues, using ‘individual participant datasets from 52 epidemiological studies’, was to provide an updated assessment of the effect of menopausal hormone therapy (MHT) on ovarian cancer risk. The relative risk generated from the cited prospective studies was significantly increased but the relative risk from the retrospective studies was not. This is quite unusual since retrospective studies usually display higher levels of relative risk. No further increase was observed with increasing duration. Moreover, a number of the studies could not be adjusted for important ovarian cancer risk factors.

From the meta-analysis, it can be calculated that the absolute excess risk of 5 years of MHT for a 50-year-old UK woman is 1 in 10 000 per year, indicating a very low risk.

We conclude that this meta-analysis mostly reflects the previously published data from the Million Women Study, from which the majority of this new publication is derived.
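
To see what the figure quoted in this abstract amounts to in absolute terms, the per-year excess risk can simply be multiplied out over a course of use. A minimal arithmetic sketch follows; the 1-in-10,000-per-year figure and the 5-year duration come from the commentary above, and the rest is plain multiplication:

```python
# Minimal arithmetic sketch converting the commentary's per-year excess
# risk (1 in 10,000 per year) into a cumulative figure over a course of
# MHT use. The 5-year duration matches the discussion above.

excess_risk_per_year = 1 / 10_000  # absolute excess risk per woman per year
years_of_use = 5                   # duration of MHT use discussed above

cumulative_excess_risk = excess_risk_per_year * years_of_use
print(f"Excess risk over {years_of_use} years: {cumulative_excess_risk:.4f}")
print(f"About 1 extra case per {round(1 / cumulative_excess_risk)} users")
```

On this arithmetic, 5 years of use adds roughly 5 extra cases per 10,000 users, or about 1 per 2,000, which is the sense in which the commentary calls the risk very low.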

Working women and the menopause.

A major part of my practice is dealing with menopausal women, and I am well aware of the great impact menopause has on the quality of life of many of these women. Many men, employers and co-workers have no idea what women have to cope with in the menopause. I have often heard from women that other women can be the least understanding and even nasty about their suffering. This study in Climacteric, the journal of the International Menopause Society, puts this in perspective.
Climacteric. 2015 Apr 1:1-4. [Epub ahead of print]

Working women and the menopause.

Abstract

Women are living longer, working more and retiring later. About 45% of the over 50-year-old workforce in virtually all forms of employment are women, all of whom will experience the menopause and its symptoms, which in some women will be mild to moderate, whilst in others they may be severe and debilitating. About half of these women will find it somewhat, or fairly difficult, to cope with their work, about half will not be affected and only about 5% will be severely compromised. Poor concentration, tiredness, poor memory, depression, feeling low, lowered confidence, sleepiness and particularly hot flushes are all cited as contributing factors. As with any longstanding health-related condition, the need for support and understanding from line management is crucial and can make a major difference to how a woman will deal with the adverse impact the menopausal symptoms may have on her productivity, her job satisfaction and her efficiency. A number of plausible strategies have been proposed that can be realistically implemented in the workplace and which could certainly make a significant difference. Careful thought, planning, consideration and effort may be required but, if instituted, they will, in the final analysis, benefit both employer and employee.

Younger window of opportunity for HRT

7 January 2015

HRT should be recommended for perimenopausal women and those with premature menopause, a review of current research shows.

Published in The Obstetrician & Gynaecologist, the review highlights the benefits of HRT for younger women heading into menopause and for symptomatic women under 45.

Women with premature ovarian insufficiency should be strongly advised to consider taking HRT until the average menopausal age of 51.4 years, the UK authors say, noting that this age group is prone to an earlier onset of both CVD episodes and osteoporosis.

“The risk of breast cancer with HRT use in these women is deemed to be no greater than the population risk for their age, while the benefits are greater by prevention of long-term morbidity,” write the researchers.

“Hence it is strongly advised that these women should consider taking HRT, at least until the age of 50.”

Bisphosphonates are not considered first-line treatment for these patients, say the researchers led by Dr Shagaf Bakour, a consultant gynaecologist at City Hospital in Birmingham.

They say HRT is effective for symptomatic perimenopausal women. “To avoid unnecessary investigation of unscheduled bleeds, these women should be commenced on sequential (cyclical) HRT (that is, continuous oestrogen with progestogen for 12-14 days per month).

“If periods are reasonably frequent, then HRT should start with the next bleed, but if infrequent then HRT can be commenced without awaiting a period.”

Once a woman is established on HRT, they say, an annual review is all that is necessary.

“HRT does not increase blood pressure and there is no indication to monitor more frequently.”

For the younger woman who still hopes for sporadic ovulation, Dr Bakour and colleagues suggest sequential therapy can be continued until this hope becomes unrealistic.

“No doctor should be concerned about discussing the risks and benefits of HRT with women who have menopausal symptoms,” the researchers write.