
Embedding research in health care

It is not unreasonable for people visiting their GP, or going to a hospital for a procedure, to expect that the clinicians looking after them are suitably trained and qualified. They may also expect that this training and, in particular, the advice about the effectiveness of any particular intervention, are based on sound evidence.

It is perhaps surprising, then, to discover that a great deal of what we take for granted as ‘best practice’ may actually have little supporting evidence.

We should not be unnecessarily alarmed by this, and certainly should not run to seek advice from alternative practitioners, for whom such evidence is flimsier still. Indeed, we must be clear about what we mean by ‘evidence’.

Levels of evidence

In the clinical world we have developed a grading scheme for the kinds of evidence we believe to be most reliable when deciding whether a particular procedure or drug is effective. This is illustrated in Figure 1, with personal opinion at the lowest level (level 5) and the synthesis of many clinical trials that all point to an intervention being effective at the highest (level 1).

In an ideal world we would be able to point to level 1 evidence for every procedure undertaken in a hospital. Sadly, this is not currently the case.

In a recent review of procedures in intensive care, Prof Steve Webb, an intensive care specialist and prominent clinical researcher, was able to identify only 56 procedures that had level 1 evidence; a further 34 had level 2 evidence, 26 had level 3 evidence, and 2,036 had evidence only at level 4 or 5.


Figure 1: Study types and levels of clinical evidence (source: NHMRC)

This is not to say that any of the 2036 procedures with level 4 and 5 evidence are unsafe, and indeed many have been used for decades with no evidence of ill effect. However, these procedures may be less effective than others, and often there is more than one way to treat a person’s illness – there may be competing evidence that one method is better than the others.

Currently, the only way to resolve what appears to be an ‘equipoise’ (parity, equality) between the possible alternatives is to subject them to a randomised trial, ideally in a double-blind fashion whereby neither the patient nor the practitioner knows which of the trial treatments any given participant is receiving.

Clinical trials for everyone?

Since clinical trials are deemed to be the best method we have available to resolve the question of what is truly the best treatment for a given condition, why don’t we do more trials?

The issue is that our framework for undertaking clinical trials is one suited to testing drugs on behalf of pharmaceutical companies. Since this is inherently risky, the checks and balances must be tightly monitored, and this is expensive.

In contrast, ‘head to head’ testing of two established treatments used in everyday practice could be done in a more streamlined and efficient way. However, hospitals typically do not have staff dedicated to doing this, and the ethical and governance systems are not sufficiently flexible to adapt to lower-risk pathways.

The paradoxical outcome, then, is that the studies that ought to be done are not being done.

An opportunity for change

Recently, a group of clinicians and scientists who have been involved in not-for-profit public good clinical trial networks have established the Australian Clinical Trials Alliance to champion the need to invest in hospital and primary care based research.

Picking up the challenge identified by the Strategic Review of Health and Medical Research in Australia (known as the McKeon Review), published in 2012, ACTA has sought ways to embed research into the very fabric of clinical service delivery.

Indeed, ACTA’s mission is to “…promote effective and cost‐effective healthcare in Australia through investigator‐initiated clinical trials and clinical quality registries that generate evidence to support decisions made by health practitioners, policy‐makers, and consumers.”

A disruptive way to think of trials

Following on from the commitment to engage in clinical research, St John of God Health Care (SJGHC) has recently explored an emerging model of research known as ‘Randomised Embedded Multifactorial Adaptive Platform’ (REMAP) trials.1

REMAP is a means of merging continuous quality improvement – the process of analysing and responding to unwarranted variations in clinical care – with clinical trial methodologies.

Quality improvement is a vital part of health care, but its analyses are subject to bias. The clinical trials framework is the ideal way to remove that influence.

Prof Webb and his colleagues at the Australian and New Zealand Intensive Care Society (ANZICS) have recently received funding from the National Health and Medical Research Council (NHMRC) to perform such a trial in Australia on community-acquired pneumonia (CAP) in intensive care.

This trial, run through the Monash University clinical trials centre at the Alfred Hospital, will randomly assign several treatment options to each person at the same time. The innovation in the design is that data will be constantly evaluated throughout the study, rather than being assessed only at the end of the study.

This will enable individual patients to be preferentially assigned to treatments that appear to be working better for them than others.

If this turns out to be effective, then fewer people will be put on inferior treatments. It also means that, over time, more and more people are likely to be assigned to the better-performing arms of the study, and this, then, will become the new standard of care – solving the problem of how to implement the findings of a study after it is completed.
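The allocation idea described above can be illustrated with a small simulation. This is only a sketch of one common response-adaptive approach (Thompson sampling over binary outcomes), not the actual statistical design used in the REMAP-CAP trial; the treatment arms, success rates and patient numbers below are invented for illustration.

```python
import random

def choose_arm(successes, failures, rng=random):
    """Thompson sampling: draw one value from each arm's Beta posterior
    and allocate the next patient to the arm with the highest draw."""
    draws = [rng.betavariate(s + 1, f + 1) for s, f in zip(successes, failures)]
    return max(range(len(draws)), key=draws.__getitem__)

def simulate(true_rates, n_patients, seed=0):
    """Simulate an adaptive trial in which each patient's outcome is
    evaluated immediately, so allocation drifts toward the arm that is
    performing better as evidence accumulates."""
    rng = random.Random(seed)
    k = len(true_rates)
    successes, failures = [0] * k, [0] * k
    for _ in range(n_patients):
        arm = choose_arm(successes, failures, rng)
        if rng.random() < true_rates[arm]:   # hypothetical binary outcome
            successes[arm] += 1
        else:
            failures[arm] += 1
    return successes, failures

# Two hypothetical treatments with true success rates of 45% and 60%.
s, f = simulate([0.45, 0.60], n_patients=1000)
allocated = [si + fi for si, fi in zip(s, f)]
```

In a conventional fixed-allocation trial each arm would receive roughly 500 of the 1,000 simulated patients; here the continuous evaluation steers the large majority toward the genuinely better arm, which is the property the paragraph above describes.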

Benefits for the Western Alliance

Several of Western Alliance’s partner organisations are already a part of the ACTA network and have formally embraced the mission.

At St John of God Health Care’s Victorian hospitals there is a commitment to facilitate clinical trials, following the lead of their Western Australian counterparts.

St John of God Health Care is dedicated to the use of health analytics, as stated in its Strategic Priorities 2015–19, as well as to growing research, teaching and training. A great deal of this activity in Victoria will be through partnerships with other health care providers, primary care and with universities and other providers of health workforce training.

Through visible support from the organisation, it is hoped that grassroots research aimed at creating evidence that can ultimately contribute to the highest levels of evidence will underpin the trust granted to us by our patients and communities to look after them.


  1. Angus DC (2015). Fusing randomized trials with big data: the key to self-learning health care systems. JAMA, 314(8): 767–768.