Monthly Archives: April 2017

Hormone therapy for prostate cancer may increase risk of depression

Brigham and Women’s Hospital, 04/14/2016

A new study led by researchers at Brigham and Women’s Hospital (BWH) has found a significant association between depression and patients being treated for localized prostate cancer (PCa) with androgen deprivation therapy (ADT). The findings were published online in the Journal of Clinical Oncology on April 11, 2016.

“We know that patients on hormone therapy often experience decreased sexual function, weight gain and have less energy – many factors that could lead to depression. After taking a deeper look, we discovered a significant association between men being treated with ADT for PCa and depression,” says senior author Paul Nguyen, MD.

“This is a completely under-recognized phenomenon. Around 50,000 men are treated with this therapy each year. It’s important not only for patients to know the potential side effects of the drugs they’re taking, but also for physicians to be aware of this risk in order to recognize signs of depression in these patients and refer them for appropriate care,” says Nguyen, who is also the director of Prostate Brachytherapy at BWH. “Patients and physicians must weigh the risks and benefits of ADT, and this additional risk of depression may make some men even more hesitant to use this treatment, especially in clinical scenarios where the benefits are less clear, such as for intermediate-risk disease.”

Researchers reviewed data from the SEER Medicare-linked database from 1992 to 2006 on 78,552 men over the age of 65 with stage I to III PCa. When compared to patients who did not receive the therapy, researchers found that the patients who received ADT had higher incidences of depression and of inpatient and outpatient psychiatric treatment.
Adjusted analyses demonstrated that patients who received ADT had a 23 percent increased risk of depression, a 29 percent increased risk of inpatient psychiatric treatment, and a non-significant 7 percent increased risk of outpatient psychiatric treatment when compared with patients not being treated with ADT. The risk of depression increased with the duration of ADT: 12 percent with less than six months of treatment, 26 percent with 7 to 11 months, and 37 percent with 12 months or longer. A similar duration effect was seen for inpatient and outpatient psychiatric treatment.

The Cost of Not Taking Your Medicine

The Cost of Not Taking Your Medicine – The New York Times

More important, it explains why so many patients don’t get better, suffer surprising relapses or even die when they are given drug prescriptions that should keep their disorders under control.

Studies have shown that a third of kidney transplant patients don’t take their anti-rejection medications, 41 percent of heart attack patients don’t take their blood pressure medications, and half of children with asthma either don’t use their inhalers at all or use them inconsistently.

“When people don’t take the medications prescribed for them, emergency department visits and hospitalizations increase and more people die,” said Bruce Bender, co-director of the Center for Health Promotion at National Jewish Health in Denver. “Nonadherence is a huge problem, and there’s no one solution because there are many different reasons why it happens.”

For example, he said parents often stop their children’s asthma treatment “because they just don’t like the idea of keeping kids on medication indefinitely.” Although a child with asthma may have no apparent symptoms, there is underlying inflammation in the lungs and without treatment, “if the child gets a cold, it can result in six weeks of illness,” Dr. Bender explained.

When Dr. Lisa Rosenbaum, a cardiologist at Brigham and Women’s Hospital in Boston, asked patients who had suffered a heart attack why they were not taking their medications, she got responses like “I’m old-fashioned — I don’t take medicine for nothing” from a man with failing kidneys, peripheral vascular disease, diabetes and a large clot in the pumping chamber of his heart. Another common response: “I’m not a pill person.”

When Dr. Rosenbaum told her hairdresser that she was studying why some people with heart disease don’t take their medications, he replied, “Medications remind people that they’re sick. Who wants to be sick?” He said his grandmother refuses to take drugs prescribed for her heart condition, but “she’ll take vitamins because she knows that’s what keeps her healthy,” so he tells her that the pills he gives her each night are vitamins.

Other patients resist medications because they view them as “chemicals” or “unnatural.” One man told Dr. Rosenbaum that before his heart attack, he’d switched from the statin his doctor prescribed to fish oil, which unlike statins has not been proved to lower cholesterol and stabilize arterial plaque.

“There’s a societal push to do things naturally,” she said in an interview. “The emphasis on diet and exercise convinces some people that they don’t have to take medications.”

Dr. Bender said, “People often do a test, stopping their medications for a few weeks, and if they don’t feel any different, they stay off them. This is especially common for medications that treat ‘silent’ conditions like heart disease and high blood pressure. Although the consequences of ignoring medication may not show up right away, it can result in serious long-term harm.”

Some patients do a cost-benefit analysis, he said. “Statins are cheap and there’s big data showing a huge payoff, but if people don’t see their arteries as a serious problem, they don’t think it’s worth taking a drug and they won’t stay on it. Or if they hear others talking about side effects, it drives down the decision to take it.”

Cost is another major deterrent. “When the co-pay for a drug hits $50 or more, adherence really drops,” Dr. Bender said. And when a drug is very expensive, like the biologics used to treat rheumatoid arthritis that can cost $4,000 a month, patients are less likely to take it, or they take less than the prescribed dosage, which renders the treatment less effective.

Dr. William Shrank, chief medical officer at the University of Pittsburgh Health Plan, said that when Aetna offered free medications to patients who survived a heart attack, adherence improved by 6 percent and there were 11 percent fewer heart attacks and strokes, compared with patients who paid for their medications and had an adherence rate of slightly better than 50 percent.

“There are so many reasons patients don’t adhere — the prescription may be too complicated, they get confused, they don’t have symptoms, they don’t like the side effects, they can’t pay for the drug, or they believe it’s a sign of weakness to need medication,” Dr. Shrank said. “This is why it’s so hard to fix the problem — any measure we try only addresses one factor.”

Still, there is hope for improvement, he said. Multiple drugs for a condition could be combined into one pill or packaged together, or dosing can be simplified. Doctors and pharmacists can use digital technology to interact with patients and periodically reinforce the importance of staying on their medication. With fear of side effects a common deterrent to adherence, doctors should inform patients about likely side effects when issuing a prescription. Failing that, patients should ask: “What, if any, side effects am I most likely to encounter?”

Forgetting to take a prescribed drug is a common problem, especially for those ambivalent about taking medication. Patients can use various devices, including smartphones, to remind them to take the next dose, or use a buddy system to make adherence a team sport. Dr. Shrank suggested making pill-taking a habit, perhaps by putting the medication right next to their toothbrush.

Health Check: what is the common cold and how do we get it?

June 20, 2016 2.22pm AEST

Disclosure statement

Peter Collignon does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

 

The “common cold” is common. Most of us will have at least one or two per year. Children get sick more often and very young children often get more than five colds per year.

Even though it’s so common, there’s a lack of good research looking into this infection and into ways to prevent and treat it.

What is a ‘cold’?

The common cold is caused by viruses. It’s generally an acute self-limiting infection, which means it comes on quickly and resolves by itself. It involves our upper respiratory tract and airways (nose, throat, pharynx and larynx).

The incubation period after picking up the virus is usually about two days before our symptoms start. The illness then often lasts for five to 10 days.

You are likely to be contagious while you have symptoms but you’re most contagious in the early part of illness (the first few days). Once your body effectively fights the infection, the numbers of the virus in your body will drop off and you will recover. We recover from these viral infections when we develop immunity to the strain infecting us by sending white cells to kill the virus and making antibodies active against it.

Sore throat, malaise, cough, sneezing and runny noses are the more common symptoms. Headaches, fevers, body aches and severe tiredness are relatively less common.

Some of its signs and symptoms can overlap with other conditions such as hayfever, although the latter is usually without a sore throat.

How do we catch colds?

It was once thought, and many still believe, that exposure to cold temperatures, especially in winter, causes the common cold. By itself this does not appear to be true. The common cold is caused by viruses; you need to catch one of these viruses to get a cold – exposure to low temperatures alone won’t do it.

However, when it’s cold and wet outside, we are more likely to be indoors and in more crowded places, including with other people who may have a cold. Hence cold weather does make it more likely that we will catch a cold, especially if we are near people who have one.

We usually catch colds via the hands.

Its common association with winter and cold weather likely contributed to its name. Whether cold weather and lowered humidity also contribute to the transmission of these viruses is still unclear.

It’s often believed that we get these types of respiratory infections by breathing in the viruses that cause them. The viruses are present in aerosols and droplets, so when people cough and sneeze, we can inhale them. However, it now looks more likely that most of us get these infections via our hands.

We often touch contaminated surfaces. Then we infect ourselves when our hands touch our mouth, nose and/or eyes. So it’s by our hands that we most often “catch” a cold. This is also why good hygiene, along with regular hand-washing with soap and water or with alcohol-based solutions not only decreases our chances of catching a cold but helps protect those around us as well.

What causes colds?

The common cold is caused by quite a few different viruses – not just one. Rhinoviruses are the most common cause. These are small RNA viruses, named after the Greek word for nose (“rhino”), which grow best at the temperatures found in the nose (33-35°C).

Other common causes are coronaviruses (RNA viruses that under a microscope look to have a crown or halo) and influenza viruses. A host of other viruses including RSV, parainfluenza virus, metapneumovirus and adenovirus are some of the other causes.

All the viruses cause the same type of symptoms. Therefore you can’t tell by your symptoms which virus is making you unwell. You can usually only tell which virus it is by sophisticated molecular pathology testing on a sample from your respiratory tract.

It’s commonly stated that rhinovirus and common cold symptoms are different from those of influenza (the flu). However, all the viruses mentioned can cause an influenza-like illness. Most people infected with influenza virus have only mild symptoms or are asymptomatic. Many infected with influenza have exactly the same symptoms as those infected with rhinovirus.

This is why it’s usually not possible for you, or a doctor, to tell whether you are suffering from a cold or flu. The term “ILI” or “influenza-like illness” is also used for colds, especially where there is also fever.

While influenza might sometimes be associated with more severe respiratory infections, rhinovirus gives it a good run for its money. During most winters, rhinoviruses cause more cases of people needing admission to hospital with pneumonia than do influenza viruses.

What can we do about them?

The common cold is caused by viruses, so antibiotics do not work and should be avoided. They won’t kill the causative viruses; they will only give the individual side effects and needlessly help drive antibiotic resistance.

There are few good studies that have looked at what works for the common cold. Many things don’t work. As well as antibiotics, vitamin C, vitamin D and echinacea also don’t work.

Echinacea doesn’t work to fight or prevent colds.

Antihistamines don’t work by themselves but may be of benefit when combined with a decongestant and analgesic. Decongestants are of only small benefit when used by themselves. An anti-asthma drug (ipratropium) used intranasally probably helps relieve some symptoms.

Over-the-counter cough treatments and vapour rub don’t seem to be of benefit. Anti-inflammatories such as aspirin and ibuprofen help relieve pain and aches but don’t seem to benefit control of other symptoms.

Paracetamol helps fever as well as pain, but it doesn’t work as well as ibuprofen for fever control. Humidified air, nasal irrigation and ginseng are all of unclear benefit. Oral zinc as lozenges does appear to be of some benefit. Honey might help but so does chicken soup – especially if made by your mother!

 

HRT reduces deaths from dementia and Alzheimer’s disease

J Clin Endocrinol Metab. 2017 Mar 1;102(3):870-877. doi: 10.1210/jc.2016-3590.

Lower Death Risk for Vascular Dementia Than for Alzheimer’s Disease With Postmenopausal Hormone Therapy Users.

Abstract

Context:

There are conflicting data on postmenopausal hormone therapy (HT) and the risk of vascular dementia (VD) and Alzheimer’s disease (AD).

Objective:

We analyzed the mortality risk attributable to VD or AD in women with a history of HT use.

Design, Patients, Interventions, and Main Outcome Measures:

Finnish women (n = 489,105) using systemic HT in 1994 to 2009 were identified from the nationwide drug reimbursement register. Of these women, 581 died of VD and 1057 of AD from 1998 to 2009. Observed deaths in HT users with <5 or ≥5 years of exposure were compared with deaths that occurred in the age-standardized female population. Furthermore, we compared the VD or AD death risk of women who had started HT at <60 vs ≥60 years of age.

Results:

Risk of death from VD was reduced by 37% to 39% (<5 or ≥5 years of exposure) with the use of any systemic HT, and this reduction was not associated with the duration or type (estradiol only or estradiol-progestin combination) of HT. Risk of death from AD was not reduced with systemic HT use <5 years, but was slightly reduced (15%) if HT exposure exceeded 5 years. Age at systemic HT initiation (<60 vs ≥60 years) did not affect the death risk reductions.

Conclusion:

Estradiol-based HT use is associated with a reduced risk of death from both VD and AD, but the risk reduction is larger and appears sooner in VD than AD.

Flu vaccine won’t definitely stop you from getting the flu, but it’s more important than you think

April 12, 2017 6.15am AEST

As we head towards a southern hemisphere winter, many people are wondering if it’s worth getting the flu vaccine.

Generally speaking, if you are vaccinated, you’re less likely to get the flu. But that’s not the whole story.

For most healthy people, it’s about considering the cost and a few seconds of pain against the possibility that you’ll need to take time off work and endure a few days of misery due to infection.

For people who come into contact with vulnerable people – like the elderly, young or sick – getting vaccinated reduces the risk that you can pass it on.

For vulnerable people, the flu can be the difference between being at home with a chronic disease, and being in hospital with complications such as bacterial pneumonia.

When you should get vaccinated is a bit like playing the lottery. If you are vaccinated too early, there’s the risk it doesn’t work when you most need it; too late and you may get the flu while unprotected, or forget to have it before flu season hits.

Here’s what you need to know when deciding whether to get vaccinated, and when.

Preventing influenza

People who get vaccinated are at lower risk of getting influenza than those who are not. They are less likely to be laid up in bed with sweats, shivers and muscle aches, to take time off work or their usual activities, or to be hospitalised with complications.

The Australian government recommends everyone from six months old be vaccinated, with those in the following higher-risk categories eligible for a free shot in 2017:

  • people aged 65 years and over
  • Aboriginal and Torres Strait Islander people aged six months to less than five years
  • Aboriginal and Torres Strait Islander people who are aged 15 years and over
  • pregnant women
  • people aged six months and over with medical conditions, like severe asthma, lung or heart disease, low immunity or diabetes that can lead to complications from influenza.

The mild symptoms that some people get after vaccination are usually related to the vaccine generating an immune response. This is how vaccines work – by “training” the immune system to recognise parts of the influenza virus, it can respond more effectively when it encounters the real thing. There is no “live virus” in the flu shot. Your body responds to parts of the flu virus in the vaccine; you cannot “catch the flu” from it.

All brands of flu vaccine available in Australia are safe; researchers are continuing to monitor for any side-effects week-by-week using SMS feedback from people who have been recently vaccinated.

Like all medications, the flu vaccine carries with it a small risk of side effects, like temporary soreness at the injection site.

The flu vaccine doesn’t provide complete protection

Most clinical trials that have looked at how effective the flu vaccine is were performed in healthy adults and children. However, the people for whom we strongly recommend flu vaccine are those who are older and with chronic illnesses. Unfortunately the vaccine doesn’t elicit as strong an immune response in these groups. They are targeted for vaccination because of the high risk of complications.

In Australian studies, we generally estimate the risk of influenza is reduced by about 40-50% in people who receive the vaccine.

While this might seem low, reducing the risk of infection by half is worth the effort.

There are a number of different strains of influenza, which are categorised into types, subtypes and strains. For example, one of the four strains in the 2017 vaccine is called A/Hong Kong/4801/2014 (H3N2), which refers to an influenza A type, a H3N2 subtype (flu viruses are defined and named by proteins on their surface, haemagglutinin – H, and neuraminidase – N), and a strain first isolated in Hong Kong in 2014.

In a typical year, there are usually three subtypes of influenza circulating (in varying proportions) that cause disease. Except in pandemic years, the circulating strains are usually variants of the previous season’s strains, and this allows the World Health Organisation to make recommendations on which strains should go into the next season’s vaccine.

Occasionally, the vaccine strains aren’t well matched to circulating strains. This risk of mismatch has been reduced by the quadrivalent vaccine that contains four strains.

Protection from the flu doesn’t last that long

In most of Australia, the peak flu season usually runs from August to September.

But the flu vaccine produces a relatively short-lived immune response, lasting about 6-12 months after vaccination. This is because the flu vaccine produces a weaker immune response than being infected.

How long it provides protection probably depends on the patient (some studies show elderly patients have a shorter immune response) and the virus (some influenza subtypes elicit a stronger immune response than others).

So there is some concern that if people are vaccinated too early in the year, their immune response might be starting to decline just when it is needed.

Studies that have looked at how important this is have shown conflicting results. While one study found the effectiveness of the vaccine against the A/H1N1 and A/H3N2 strains declined after three months, another found a decline only against the A/H3N2 and B strains.

In the meantime, we generally recommend April to June as probably the optimal time for vaccination – early enough for your immune system to “learn” how to deal with the influenza virus, but not so late that flu season hits before you’re protected.

For doctors, there are other factors involved in deciding when to vaccinate a patient. If they don’t vaccinate a patient now, will they come back again before the influenza season hits? Are they at risk of getting influenza “out of season”?

Although most flu cases occur in winter, we are increasingly aware of cases that occur throughout the year. This is particularly important in tropical regions where influenza tends to circulate all year round.

‘Hunger hormone’ turns eating less into eating more

University of Southern California Health News, 12/17/2015

If scientists can suppress ghrelin’s activity in the brain, they may be able to cut down on the desire to overeat.

Looking to avoid overeating during those big holiday meals? You might want to avoid fasting in the days beforehand. Cycles of food restriction unleash a “hunger hormone” that increases the capacity to eat more before getting full, according to laboratory research by USC researchers. The insights, published in the journal eLife, could be valuable for helping the researchers develop new weight-loss therapies.

Kanoski, doctoral student Ted Hsu and their colleagues conducted their studies in rats, but the work could have implications for humans. The researchers found that when they limited the time rats had access to food every day, the rats gradually were able to double their food intake to compensate. Over several days, the scientists allowed rats to eat only during a four-hour window, followed by 20 hours without food. The repeated short fasts sparked the hormone ghrelin to go into action before the anticipated feeding time. That hormone reduced the rats’ feeling of fullness when they were eating, so they could eat more.

The hormone’s action makes sense as an adaptive response: to get through times of scarcity, the brain enables the body to take in more calories during times of plenty. But that response isn’t so relevant in the well-fed Western world anymore, Kanoski said. “Instead, we need to find new ways to help us fight some of the feeding responses we have to external cues and circadian patterns.”

The USC team’s study provides a rare look at the way ghrelin communicates with the central nervous system to control how much food is consumed.

Preventing urinary tract infections after menopause without antibiotics

Marta Caretto, Andrea Giannini

  • Corresponding author at: Division of Obstetrics and Gynecology, Department of Clinical and Experimental Medicine, University of Pisa, Italy.

Highlights

  • Recurrent urinary tract infections are common in older women. The majority of recurrences are reinfection from extraurinary sources, such as the rectum or vagina.
  • Vaginal estrogens reduce the incidence of urinary tract infections.
  • Probiotics, cranberry extracts and D-mannose reduce the risk of urinary tract infections.
  • Further wide-scale randomized studies are needed to establish the role of estrogen therapy, probiotics and lactobacilli, as well as to identify other methods to reduce the use of antibiotics.

Abstract

Urinary tract infections (UTIs) are the most common bacterial infections in women, and increase in incidence after the menopause. It is important to uncover underlying abnormalities or modifiable risk factors. Several risk factors for recurrent UTIs have been identified, including the frequency of sexual intercourse, spermicide use and abnormal pelvic anatomy. In postmenopausal women UTIs often accompany the symptoms and signs of the genitourinary syndrome of menopause (GSM). Antimicrobial prophylaxis has been demonstrated to be effective in reducing the risk of recurrent UTIs in women, but this may lead to drug resistance of both the causative microorganisms and the indigenous flora. The increasing prevalence of Escherichia coli (the most prevalent uropathogen) that is resistant to antimicrobial agents has stimulated interest in novel non-antibiotic methods for the prevention of UTIs. Evidence shows that topical estrogens normalize vaginal flora and greatly reduce the risk of UTIs. The use of intravaginal estrogens may be reasonable in postmenopausal women not taking oral estrogens. A number of other strategies have been used to prevent recurrent UTIs: probiotics, cranberry juice and D-mannose have been studied. Oral immunostimulants, vaginal vaccines and bladder instillations with hyaluronic acid and chondroitin sulfate are newer strategies proposed to improve urinary symptoms and quality of life. This review provides an overview of UTIs’ prophylaxis without antibiotics, focusing on a practical clinical approach to women with UTIs.

Epigenetics: phenomenon or quackery?

July 24, 2015 11.40am AEST

Family resemblance isn’t only down to genes, but also to the influence of the environment on those genes. Mitchell Joyce/Flickr, CC BY-NC

Disclosure statement

Jeffrey Craig receives funding from the Australian National Health and Medical Research Council and the Financial Markets Foundation For Children. He is affiliated with the Australian Epigenetic Alliance and the International Society for the Developmental Origins of Health and Disease.

Are you really what your mother ate, drank or got stressed about? The simple answer is “no”, but not in the way you think.

We are products of nature via nurture. Our genes and environments interact. And “environment” can be what we are experiencing now or at any time during our life.

An overwhelming body of evidence, from both humans and other animals, has shown that the environment we experience in the first 1,000 days of life influences our risk of chronic diseases: conditions such as heart disease, diabetes, psychiatric disorders and some cancers.

Changes to epigenetics – molecules that lie literally “on top of genes” – have been implicated as a possible mechanism by which early environment (nurture) can leave a long-term change in the risk for chronic disease.

Nature, meet nurture

In a recent article in The Guardian, Adam Rutherford argued that the term “epigenetics” is now being abused by pseudoscientists in a similar way to “quantum” and “nano”. I’d like to argue that the term has not been misused any more than most scientific terms, bar the odd cosmetic product or health store.

Although researchers sometimes disagree over the meaning of the word “epigenetics”, it can be best understood through its conceptual development over time.

Aristotle didn’t like the prevalent idea in his day that we all grow from a microscopic version of ourselves. He coined the term “epigenesis” to describe the developmental process whereby a complex organism develops, through successive stages, from a simple start. This is essentially what we know today as developmental biology.

More than 70 years ago, Conrad Waddington modified the word to “epigenetics” and described it as “the interactions of genes with their environment, which bring the phenotype [i.e. the set of observable characteristics of an individual] into being”; my favourite definition.

Fast forward to 1996. A handful of notable scientists had already begun to theorise about the molecular nature of epigenetics. These ideas were summed up by Arthur Riggs and colleagues who defined epigenetics as “the study of mitotically and/or meiotically heritable changes in gene function that cannot be explained by changes in DNA sequence”.

Epigenetic changes would involve small molecules jumping onto our genes. They would stay there, hanging on even through cell division (mitosis), providing a long term epigenetic legacy. “Meiosis”, the cell division that results in eggs and sperm, implied that such changes could persist from one generation to the next.

Fast forward again to today. Most experts’ definition of epigenetics centres around these small molecules. And what have we done with these small molecules? We have now replicated at least two epigenetic biomarkers of environment from the cradle to the grave: stress and smoking.

We have clinical biomarkers for cancer prediction, diagnosis and prognosis currently in clinical trials. And we have strong evidence that sometimes, events such as stress and diet in one generation can affect the health and epigenetics of the next. Such effects have also been shown in humans.

Epigenetic legacies

Is this all non-Darwinian? Certainly not; Darwin will not be turning in his grave because he assumed that cells “throw off minute granules which are dispersed throughout the whole system”. These “gemmules” would be “collected from all parts of the system to constitute the sexual elements, and their development in the next generation forms the new being”.

Not entirely correct, but we do already have a plausible (although not proven) set of epigenetic molecules that are candidates for such particles.

Rutherford, in his Guardian article, rightly pointed out that epigenetic legacies may not last for more than a couple of generations. This may be because the epigenetic state has evolved to respond to environments that may change every few generations. In the longer term, it has even been proposed that an epigenetic change in response to environment may be “fixed” by a genetic mutation in the same gene that has a similar effect on its function.

I agree with Rutherford that much more work is needed. For example, the human studies of transgenerational effects on health that he showcases have not yet been linked to any specific epigenetic changes. Personally, I would draw the line at calling these effects “epigenetic” but wouldn’t go to war with someone who did.

Spot the snake oil

Epigenetics fascinates us all. Yes, we’d love to know whether diet, exercise and meditation really changed our genes. But turning a handful of promising studies into a mountain of evidence, or failing to replicate such findings, will take time.

And we must never lose sight of genetics. After all, in around one fifth of genes studied, genetic sequence is a much stronger influence on epigenetic state than environment, and epigenetics and genetics combined are better able to explain disorders such as obesity.

And finally, can we begrudge the odd snake-oil salesman borrowing a technical term like “epigenetics”? Maybe, but a quick search of Google shows that reports of true scientific articles on epigenetics far outnumber those with a pseudoscience flavour. I credit the public with sufficient intelligence to sort most of the wheat from the chaff.

Let the (informed) debate begin.

‘It’s your fault you got cancer’: the blame game that doesn’t help anyone


November 29, 2016 6.16am AEDT

These include various stereotypes – for example, the image of a typical breast cancer survivor portrayed sympathetically in books, on television and in public health campaigns – which have effectively narrowed our cultural understanding of cancer.

This means those with the disease are exposed to a range of cultural logics that misrepresent the experience of cancer, and can even induce distress and shame.

People judge

Cancer is a disease that not only brings worry and fear, but also clear messages about what we can do to prevent it. This is especially the case for the “lifestyle cancers”, caused by our own behaviours or lifestyle factors. Around 80% of lung cancer cases, for instance, can be attributed to smoking.

Social assumptions have important implications for people living with cancer. And while “taking ownership” in response to having cancer can be useful for some, it can lead others to question what they may have done wrong to get it.

In our recent study of 81 women with diverse forms of cancer, we found simplistic social understandings of the disease are often damaging for those living with it.

Women across our study articulated feelings of blame and judgement based on the reactions of others. Comments included:

You’re suddenly this person who deserves to die because you smoked.

She [doctor] turned around and she goes, “You did have symptoms and you didn’t come for a pap smear for a long time, what do you expect?”

Some women described being held up as examples of what not to do:

…even my daughter made a comment to the children… she said, “Nan’s got lung cancer, because of the smoking”… You feel guilty. You think you’ve done something really wrong.

Our work shows people have clear ideas about how much cancer patients contribute to their own illness.

This is especially so when it comes to lung cancer, most cases of which are caused by smoking; and in the case of cervical cancer and head and neck cancers, which can be associated with the sexually-transmitted human papillomavirus (HPV). The now recognised role of HPV adds another layer of potential blame to the development of cancer in women.

Breast cancer victims are typically portrayed sympathetically on television, in books and in health campaigns.

You ‘deserve’ it

Lung cancer in particular is surrounded by stigma: Australians show less sympathy for people with lung cancer than for those with other forms of the disease. Indeed, a recent report found that, of 15 nations surveyed, Australians had the least sympathy for people diagnosed with lung cancer.

Such a climate can make people feel further stigmatised and ultimately responsible for their illness, as if they deserved to get it. This can make it challenging for them to then reach out or receive help, regardless of whether they smoked or not.

The judgement and stigma attached to “bad behaviours” and lifestyle diseases also discount the social gradient of health: wealth protects against developing cancer. People in higher economic positions have better access to care and private health coverage, and better health outcomes.

Australians in lower socioeconomic positions are left at greater risk of developing life-threatening illnesses.




All this has seen a hierarchy of public and financial support for different forms of cancer emerge, in line with perceptions of deservedness. This has resulted in highly unequal support across cancer types.

Despite killing three times as many people as breast cancer, lung cancer receives only a fifth of the research funding. Reflecting on the attention divide, one woman with cervical cancer told us:

I know that there’s obviously not anywhere near as much [support] as there is [for] breast cancer and at the beginning of my treatment I was a bit offended by all of the pink stuff. I was like, “Enough of the breast cancer”.

We’re all human

Public health campaigns encourage us to regularly have cancer screening, eat healthily, exercise, quit or avoid smoking or drinking too much alcohol, and so on.

While it’s important to take care of ourselves, we need to remember that not all cancers are preventable. The fact is, one in two men and one in three women will develop cancer in their lifetime, often regardless of their life choices.

Think of the stories we hear of completely healthy people who develop brain cancer out of the blue, or relatives who smoked all their lives yet died at the age of 95 from old age. These stories remind us that cancer does not always develop in predictable ways.

For various reasons, a lot of public attention, support and generosity is given to people living with prominent, visible or less blameworthy illnesses. Yet others can experience feelings of shame, blame and illegitimacy because of some people’s reactions.

These unequal degrees of compassion need to be adjusted to support people regardless of how well understood or visible their illness is. The idea of certain cancers being a person’s responsibility undermines the need to provide support and care to all those living with the disease.