
Wednesday 3 June 2015

The Australian Government shows how not to do research about how to treat diabetes

In an attempt to improve the treatment of diabetes, the Australian Government has completed a $34 million research project investigating the effectiveness of new methods of care coordination, enhanced through the use of technology. The research was not carried out by university- or medical-research-institute-based researchers but was instead coordinated by a consulting firm, McKinsey Pacific Rim. They worked with a range of health authorities, private companies, GP Medicare Locals and some academics in Victoria, New South Wales, South Australia and Queensland.

The nearly three-year-long project established that eHealth, care coordination and targeted funding produced only marginal outcomes in the control of diabetes. The group that received the most intervention achieved a 0.2% absolute fall in its levels of HbA1c, a long-term measure of blood sugar, which, according to the study’s analysis, was statistically significant.
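To see why a difference this small can still register as “statistically significant”, here is a minimal, purely illustrative sketch in Python. The group size, mean HbA1c values and spread below are assumptions for illustration, not figures from the study: the point is simply that with enough patients, even a 0.2 percentage-point gap clears the usual p < 0.05 bar while remaining clinically marginal.

```python
# Illustrative only: hypothetical group size, means and spread,
# not figures from the study. Shows why a 0.2 percentage-point
# difference in HbA1c can be "statistically significant" with
# large groups while remaining clinically marginal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 1500                                  # assumed patients per arm
control = rng.normal(8.0, 1.5, n)         # HbA1c (%) under usual care
treated = rng.normal(7.8, 1.5, n)         # HbA1c (%) with a 0.2% absolute fall

fall = control.mean() - treated.mean()
t, p = stats.ttest_ind(treated, control)
print(f"absolute fall in HbA1c: {fall:.2f} percentage points")
print(f"p-value: {p:.4g}")                # typically well below 0.05 at this n
```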

This was hardly a stunning result for such a large-scale project. The theory was that patients with diabetes were not receiving systematic care through GP-coordinated care plans actioned through a team of health professionals. According to the report:

“In 2009–10, only 18 percent of Australians with diabetes had a claim made by their GP for an annual cycle of care. It has also been estimated that the relevant clinical guidelines are not followed in 37 percent of diabetes-related clinical encounters”

The study was supposed to be a randomised clinical trial (RCT) that divided patients into one of three groups. The “control” group continued to receive “normal” care. That is, the treatments they were receiving from their GP were not subjected to the “improvements” implemented by the study. The first treatment group (Group 1) had their care coordinated by an electronic care coordination system, along with improvements in the care based on information derived from that system. The second treatment group (Group 2) received the same treatment as Group 1 but in addition had a dedicated care coordinator and specific funding for treatment and incentive payments to health professionals in the care team.

Theoretically, the results of the 18-month study would show that any differences between the treatment groups and the control group were because of the added services and payments.

Right from the outset, however, the study was flawed because the randomisation process was not, in fact, random.

It was the GP practices, rather than the patients, that were allocated to the groups, and this was done with the additional constraint that care coordinators needed to be able to coordinate patients from multiple practices in the same area. Dropouts from the study were uneven, and because they occurred after the randomisation process, they potentially skewed the baseline randomisation further. The outcome was that there were statistical differences between the groups at baseline. Problematically, the differences were in the types of diabetes (type 1 versus type 2), co-morbidities (other conditions separate from the diabetes), diabetes risk level, private health insurance, English language proficiency and ethnicity. In other words, the groups were not made up of the same types of patients.
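A small simulation helps show how allocating whole practices, rather than individual patients, can leave the arms looking different at baseline. This is an illustrative sketch with made-up numbers (practice counts, patient numbers and prevalences are assumptions, not data from the study) and it does not reproduce the study’s actual allocation; it only demonstrates the mechanism: with just a handful of practices per arm, chance differences between practices do not average out.

```python
# Illustrative sketch with made-up numbers (practice counts, patient
# numbers and prevalences are assumptions, not data from the study).
# Randomising whole GP practices rather than individual patients can
# leave the arms with different patient mixes at baseline, because
# chance differences between a handful of practices don't average out.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_practices = 30
patients_per_practice = 40

# Each practice has its own (hypothetical) proportion of patients with
# type 1 rather than type 2 diabetes.
practice_prevalence = rng.beta(2, 10, n_practices)

# Cluster randomisation: whole practices go to control / Group 1 / Group 2.
arms = rng.permutation(np.repeat([0, 1, 2], n_practices // 3))

counts = np.zeros((3, 2), dtype=int)       # rows: arms, cols: [type 1, type 2]
for prevalence, arm in zip(practice_prevalence, arms):
    type1 = rng.binomial(patients_per_practice, prevalence)
    counts[arm] += [type1, patients_per_practice - type1]

chi2, p, _, _ = stats.chi2_contingency(counts)
print(counts)                              # the patient mix often differs by arm
print(f"baseline chi-square p-value: {p:.3f}")
```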

When there are statistical differences between groups at the start of a study, it is next to impossible to say anything meaningful about statistical differences at the end of it. In this case, at the end of the study, the results showed no statistical difference in any of the measures between Group 1 and the control group. Group 2 showed a statistically significant improvement in blood sugar compared with the control group, but this difference was only 0.19%. Group 2 also showed some statistically significant but very small improvements in measures of depression, attitudes to self-management and continuity of care.

What was perhaps most remarkable about the results was that after 18 months, there were no (or only very small) improvements in any of the groups in weight, blood pressure and other biochemical measures. This is worrying considering that the patients at the start were largely obese and had high rates of high blood pressure, blood sugar and cholesterol.

What this indicates is that normal treatment practice, let alone any attempt to coordinate care, is largely ineffective in the management of diabetes in Australia.

If this study had been proposed through the normal grant funding review process of the NHMRC, it would never have been funded. For a start, $34 million is the scale of funding normally requested only for entire research centres rather than single research projects. More to the point, the study’s fundamental flaws would have been picked up on review and it would never have passed that scrutiny. There is a reason peer-reviewed grant funding is held in such high regard in the scientific community: it demonstrates that a project is designed, according to accepted scientific best practice, to achieve the outcomes it sets out to achieve. This study fell far short of that bar and demonstrated emphatically that uncompetitive researchers in government departments or consulting companies shouldn’t be conducting scientific research funded by the taxpayer.

http://theconversation.com/

Graham

4 comments:

Gingi said...

LOL, I like the title of this article. ;-) - www.domesticgeekgirl.com

Gingi said...

I like the title of this post, haha - www.domesticgeekgirl.com

Passthecream said...

Yes indeedy.

Having been the 'recipient' of one of these Au care plans at two different practices, at different times, I can vouch for the fact that they don't add up to much except for appointments to a couple of specialists such as an ophthalmologist and a foot care bloke in a spectacularly ad-hoc fashion. I rejected the dietary advice as completely inappropriate and just saw the whole situation as one which was a nice little extra money earner for the local practice and not much use to me.


The money spent on this consulting project is simply outrageous, staggering, and so blatant.

I hear that Norman Swan was trumpeting yet another piece of worthless mouse/diet/carb-endorsing research from Sydney Uni the other day. If that was also tax-payer funded, the situation is scandalous --- we should ask for our money back. Genuine research on e.g. hepatitis, malaria and other significant diseases is being strangulated here by lack of funds atm while these shonky nutritionists spend the research dollars on meaningless back-slapping self-congratulatory projects.

// end rant

... back to running around on all fours now and preening my whiskers.
C.

Lowcarb team member said...

Hi Gingi, thanks for your comment. Glad you liked the article's title - hope you enjoyed reading it too!

Hi Passthecream, and welcome, many thanks for leaving your thoughts and sharing your experience here ... it's appreciated.

All the best Jan