Wednesday, November 12, 2008

Evidence-based Policy and Public Sector Innovation

Occasional Papers

Written for the Victorian Government's
2008 Innovation Strategy "Innovation:  Victoria's Future."

A key driver of Victoria's ability to compete against other Australian states and international competitors will be the degree to which the Victorian government is itself innovative.  The creation of the right environment for innovation by the private sector should be the purpose of government innovation policy.  The next wave of reform must concentrate on improving government's own impact on innovation.  Whilst innovation in government can occur at multiple levels, the core argument of this paper is that the Victorian Government should transform how it creates, implements and evaluates policy towards a more systematic and robust model -- one that is firmly grounded in evidence.  The comprehensive implementation of evidence-based policy will ensure that the public sector has the tools and approach to deliver a positive regulatory environment.  Embracing a science-based and entrepreneur-friendly approach to risk, particularly with regard to regulation, will maximise innovation within both government and the private sector.


Modernising government is about the ongoing need to better respond to citizens' needs

-- Tony Blair

Government remains a large and important part of the Victorian economy and society.  Irrespective of how one envisages the role of government, it is uncontroversial to assert that the actions of government have a major impact on the prosperity and welfare of Victorians.  A key driver of Victoria's ability to compete against other Australian states and international competitors will be the degree to which the Victorian government is itself innovative.

Innovation in government can occur at multiple levels.  There is the direct policy area of innovation -- where government seeks to influence non-government innovation.  However, there is also the whole-of-government approach to policymaking that itself can help or hinder innovation.  This paper is primarily about that second kind of innovation in government -- how government itself behaves, across all portfolio areas.

The core argument of this paper is that the Victorian government should transform how it creates, implements and evaluates policy towards a more systematic and robust model and one that manages risk rather than seeks to abolish it.  The creation of the right regulatory environment for innovation by the private sector should be the purpose of government innovation policy.

Previous waves of economic reform have delivered large benefits for all Australians.  Tariff reductions, the licensing of foreign banks, the privatisation of some nationalised assets and the whole National Competition Policy program profoundly changed the way Australian businesses operate.  The major change to government from this process was the transfer from public to private hands of many functions better operated in a competitive market, which also brought significant benefits.  However, the first waves of reform were primarily non-government reform.  The next wave of reform must concentrate on improving government's own impact on innovation.

Whilst the private sector drives most innovation in an economy, public institutional arrangements can determine the bounds of the private sector's outcomes.  There remain key areas where regulation hinders the development of competitive outcomes and therefore innovation;  often the private institutions that receive monopoly rights support these regulations.  For example, the Pharmacy Guild supports the lack of competition in the pharmacy sector but relies on government regulation for that monopoly protection.

In addition to existing regulation, mechanisms used by the public sector to develop and implement policy can also stymie private sector innovation.  Successive waves of public sector reform have sought to move government policymaking processes towards putting a greater emphasis on more robust mechanisms for policy analysis.  The introduction of benefit-cost analysis is one technique that has gained in popularity in recent decades.  However, important as such techniques are, they are of little use if the inputs into their models are wrong.  Before a policymaker can even begin to answer a policy challenge with a specific response, the first step must be to understand what works and what does not.  To do this requires evidence.

While evidence-based policy appears uncontroversial, its implementation in other jurisdictions, notably the UK, has proved highly contentious.  An existing program may be shown to have little or no effect;  existing stakeholders are often highly resistant to believing "outside" research and often have pecuniary or other interests that cause them to defend the status quo.  When designing new policy processes, it is naive to assume that increasing the rigour of policy research to generate more robust evidence will naturally lead to greater use of evidence-based policy.  Instead, government must mandate the adoption of evidence-based policy in all policy areas.  Similar to the role of the Victorian Competition and Efficiency Commission (VCEC) in evaluating benefit-cost statements, a central agency is required to standardise the elements of evidence-based policy.

How the public service develops policy has an impact not only on the types of policy options brought forward, but also on the way in which non-government actors interact with government.  Evidence-based policy acts as a bulwark against special-interest pleadings made at the expense of the broader community.  More generally, the policy processes and the approach to regulation adopted by government affect the chances of a positive regulatory outcome occurring.

Central to the implementation of evidence-based policy formation is the prioritisation of a research-friendly policy environment.  This requires the widest possible collection and dissemination of relevant data so that researchers within the public sector, academia and the private sector have the tools to test policy options.  Just as competition drives private sector innovation, multiple sources of policy ideas enhance innovation in policy formation.  Actors outside government can only participate in this process if the data to do so is available to them.  To ensure this occurs across all policy areas a high degree of coordination by a central agency will be required.  A Victorian Bureau of Statistics offering a one-stop shop for all Victorian government data and general statistics not available from the ABS will enable researchers to develop improved policies.

The impact of regulatory settings can go beyond the first order (direct consequences).  For example, access to overly generous compensation payments, designed to help the injured, can encourage rather than discourage overly risky behaviour.  Increasingly, what is driving regulation -- of business, health, liability, etc. -- is the avoidance of risk.  As the pace of change in the world seems to accelerate, insecurity grows, and with that insecurity comes a push to regulate risk out of existence -- a response that is both impossible and undesirable.

Managing risk is well understood as a key component of the capacity to innovate in the private sector.  The ability to manage risk is based on a capacity to properly identify the upside and downside of relevant risks and to accurately estimate the costs of favourable and unfavourable outcomes.  Government has three vital roles to play in enabling risk-taking by entrepreneurs and scientists.  First, it is the role of government to provide the regulatory environment that ensures innovators can benefit financially from risk-taking;  this requires strong property rights enshrined in law and adhered to in practice.  Second, there is a large role to play in both educating the public about managing risk and providing leadership in striking an appropriate balance between risk and progress.  An example of this is the repeal of the current moratorium on commercially growing biotech crops.  Third, within government, rigorous science-based risk assessment of environmental, health and other appropriate programs, both existing and proposed, is required.  The aim of such reviews should not be to "gold plate" regulation in an attempt to eliminate risk;  instead, the goal should be the lightest touch regulation possible:  regulation that only affects those doing the wrong thing.

The conclusion of this paper is that the comprehensive implementation of evidence-based policy, together with the embrace of a science-based and entrepreneur-friendly approach to risk, will maximise innovation within both government and the private sector.  Through innovation, Victoria will maximise economic growth and therefore provide the highest standard of living for all Victorians.


Put simply, evidence-based policy is policy based on evidence of its efficacy. (1)  This use of the word evidence is the scientific one, in which evidence (data) is distinguished from theory. (2)

What constitutes evidence can be summarised as "the best current knowledge". (3)  The aim of evidence-based policy is to provide a systematic way of finding out "what works" and then implementing it.  The growth of evidence-based policy came from the increasing adoption of evidence-based medicine and its promotion by organisations such as the Cochrane Collaboration. (4)  The formal adoption of evidence-based policy is most advanced in the UK where then prime minister Blair's statement -- "what counts is what works" (5) -- became a cornerstone of the Blair Government's attempts to modernise government policy processes.

As a result of the explicit program in the UK to introduce evidence-based policy, what constitutes evidence is often drawn from UK definitions:

Expert knowledge, published research, existing statistics;  stakeholder consultations;  previous policy evaluations;  the Internet;  outcomes from consultations;  costings of policy options;  output from economic and statistical modelling. (6)

As can be seen from this extensive list, evidence is not merely research-based knowledge.  The best current knowledge on an issue may be practitioner or expert knowledge.  It may be the application of a model to existing data or results from stakeholder consultations.  The key to the adoption of evidence-based policy is the critical appraisal of all available evidence in a thorough and reflective way.  This differs from traditional conceptions of policy evaluation, which typically focus on the intervention rather than the effect.  For example, a traditional evaluation might ask, "Did the program come within budget?" or "Were 20,000 home visits made?", whereas a systematic review might ask, "Are more of the troubled teens who were visited still in school than those who weren't visited?"

Many authoritative practitioners grade different types of evidence into a formal hierarchy. (7)  At the pinnacle of the hierarchy are systematic reviews of several other studies;  at the bottom are case studies.

Figure 1:  Hierarchy of evidence types

  • Systematic review:  Systematic reviews evaluate several studies via preselected criteria.  Systematic reviews deliberately seek to identify and minimise sources of error and bias.
  • Randomised trials:  Participants are divided into two groups based on a coin toss.  One group receives the intervention while the other does not or receives a separate intervention.  Compared to all the research types below, this type of study has far fewer biases;  however, it may still suffer from attrition problems.
  • Cohort or panel studies:  This type of research either compares groups that have received a particular intervention with those that have not or compares a result with the incidence of a theorised cause.  These studies suffer from bias due to attrition.  Also the theorised causes might not be causal.
  • Matched comparison studies:  In these studies group A is compared to a group B matched based on various criteria, e.g. social background, academic achievement.  It is often difficult to create adequately matched groups, and the matching criteria may not be relevant to the research question under consideration.
  • Benchmarking:  This type of research compares results from two or more separate groups or studies.  The comparison may be made using different methods.  These studies suffer from selection bias.
  • Expert statements:  This type of research involves the collection of statements, interviews, etc.  Experts are often not experts on the precise question being asked, a practice that leads to answers no better than those given by non-experts.  Also, the choice of expert is highly charged and often introduces bias.
  • Case studies:  Despite the popularity of case studies, particularly within education research, generalisations cannot be made based on this methodology.  As a policy formulation tool, the case study can easily be manipulated to fit a pet theory.

All of the approaches in the pyramid above are considered evidence.  Policy processes based on any of the evidence above are an improvement on institutional inertia ("we've always done it this way") (8)  or ideological fervour ("public housing is better than owner-occupied suburbia").  Experimental research, involving randomised trials, is regarded as the most robust of single studies. (9)  The implications for policymaking from this are discussed below.  It is sufficient at this point to note that most empirical researchers are very wary of results from the quasi-experimental evidence listed above, such as statistical analysis, benchmarking or cohort studies, because of the amount of bias that can be inadvertently introduced into such studies and the failure of such approaches to prove causation rather than mere correlation. (10)

The purpose of a research quality hierarchy is to privilege some sorts of evidence over others, and it comes from a medical model of evidence assessment.  The purpose of the hierarchy should be to order and grade evidence rather than to exclude it.  However, it remains axiomatic of an evidence-based approach to policy that some kinds of evidence are better than others.  It is therefore important to guard against current policy processes simply being relabelled as evidence-based policy. (11)

One of the key foundations of evidence-based policy is the framing of the policy questions.  Attempting to understand what works is a different issue from understanding why an intervention works or what is wrong.  The "why" question may itself answer important and interesting questions and may lead to new "what works" questions, but often causation is extremely difficult to attribute, particularly when the subject is a complex set of social policy interventions. (12)  Similarly, "what's wrong?" questions that describe current policy failings are important research questions that can then lead to new "what works to fix this problem?" questions.  However, simply describing current problems is not, of itself, an answer to those problems.

Restricting the assessment of policy proposals to what works is crucial in achieving the overriding objective of policy interventions -- to help rather than harm.  The best current knowledge may turn out to be wrong, such as when Dr Spock advised a generation of mothers to put their babies to sleep on their stomachs, (13) but consistently basing policy interventions on evidence is a better shield against harmful policy than any alternative.

The answer to "what works" also depends on what question is asked.  For example, Caitlin Hughes finds that the Illicit Drug Diversion Initiative was based on evidence mustered to answer the question, "How can drug use and drug related crime be reduced?" rather than the question "How can harm to drug users from drug taking be reduced?" (14)  The clarity required to formulate precise policy questions is one of the great strengths of evidence-based policy.


To put it simply, randomised trials are experiments where participants are allocated to groups based on the toss of a coin.  Some trials involve one group acting as a control -- no intervention -- while the other receives an intervention.  In other trials all groups receive some sort of intervention;  the purpose is to test which is the better alternative.

The reason many researchers regard this type of study as better than others is that it has the least amount of known bias.  The lack of bias means results from randomised experiments can be replicated and can therefore be applied to other groups with a higher likelihood of the same results occurring. (15)
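The coin-toss design described above can be made concrete with a small simulation.  The Python sketch below is illustrative only: the sample size, the "true" effect of 5 units, and the noise levels are all made-up numbers, not drawn from any real study.

```python
import random
import statistics

def randomised_trial(n=2000, effect=5.0, seed=42):
    """Toy randomised trial: each participant is assigned to the
    treatment or control group by a coin toss; outcomes are a noisy
    baseline, plus the intervention effect for the treated group."""
    rng = random.Random(seed)
    treatment, control = [], []
    for _ in range(n):
        baseline = rng.gauss(50, 10)          # unobserved individual differences
        outcome = baseline + rng.gauss(0, 2)  # measurement noise
        if rng.random() < 0.5:                # the coin toss
            treatment.append(outcome + effect)
        else:
            control.append(outcome)
    # Randomisation balances the groups on average, so the difference
    # in means estimates the intervention's effect.
    return statistics.mean(treatment) - statistics.mean(control)

print(randomised_trial())  # typically close to the true effect of 5.0
```

Because assignment is random, the biases that plague matched or cohort comparisons -- self-selection and unmeasured confounders -- cancel out in expectation, which is precisely the advantage the hierarchy in Figure 1 reflects.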


Adapted from Cook's Why have educational evaluators chosen not to do randomised experiments? (16)

Because randomised experiments are resisted by some parts of academia and the bureaucracy, it is worthwhile spending some time detailing the various objections raised against them.  Sometimes the underlying reason for the objection is actually the lack of skills to design and undertake such studies.  When appropriate and comprehensive training is included in the introduction of randomised experiments in policymaking, a large proportion of objections disappear.

However, there are also a range of objections motivated by an ideological attachment to theory-based policymaking. (17)  Any attempt to introduce evidence-based policy with randomised experiments will face these objections. (18)  In the UK, fierce debates have occurred in relation to the introduction of evidence-based policy, most obviously in education (19) and social work. (20)  Similarly, the most vocal opponents in Australia are from the education field, with one opponent claiming new managerialism and evidence-based practice "can be compared to 're-education' in communist China". (21)


Arguments of this type seek to demonstrate either that experiments cannot provide unbiased tests of causal hypotheses or that the theory of causation behind the experiments is too simple.  A typical argument against the trials labels experimentation as positivism and then shoots down that already discredited straw man.  Although this may appear a very esoteric argument, the regularity with which it appears in some form requires it to be addressed.

Positivism is the belief that entities do not exist independently of their measurement.  Positivism privileges prediction over explanation and looks for theories expressed in mathematical form.  This practice has long been discredited. (22)  However, precisely because positivism has been discredited, but looks suspiciously similar to the methods and theories used in randomised experiments, i.e. quantification and causation, critics have labelled the advocacy of randomised experiments positivism. (23)  Behind the philosophical language used by these critics is a deep scepticism of science and the scientific method. (24)  At their core is an attempt to categorise science as value-laden. (25)

However, even if nothing is theory-neutral, and even if there are no facts outside our perception, it remains true that some results reoccur no matter what the researcher's own bias desires.  These results, because they are replicable, must therefore count as "facts".  At the very least, for practical policymakers, such facts must be more attractive than any amount of untested theory.

The other criticism in this strand says that randomised experiments simplify the world too much. (26)  Elegant randomised experiments do test only a very limited (perhaps only one) set of potential causes.  Randomised experiments are designed to answer the question:  "if this changes, then what is the effect?"  But many other theories of cause work the other way;  they ask:  "given this effect, what are the causes?"  The causes may be multiple and never identified by experimentation unless the right question is asked.  Advocates of this second approach believe that the interaction of multiple causes is such that experimental results cannot be generalised, because the multiple causal factors mean no single intervention fully explains an outcome.

Experiments bundle the causes to look at the impact of only altering one (or a few) factors.  Advocates of experiments look to expand knowledge incrementally, confirming and rejecting relevant causes through an iterative process.  Experimentation is not a theory of causation, but that does not make it useless.  Some causes are irrelevant to public policy.  By focussing research resources on testable propositions that matter, researchers achieve the greatest positive public policy impact.

Moreover, more complex theories of causation -- those that seek to incorporate multiple variables into a single explanation -- tend to rely on "given truths", which are not evidentially based, in constructing the overarching explanation.  Furthermore, the biases in those given truths are rarely examined, even though those who advocate experimentation frequently charge others with this fault.


This group of arguments against randomised experiments is particularly prevalent in education research;  however, variations on this argument appear in all of the social sciences.  According to its opponents, it is impossible to mount randomised experiments in schools because of teacher and parent opposition.  Similarly, it is said randomised experiments cannot be conducted in criminology because of community opposition.  In general, this set of arguments refers to the feasibility of conducting randomised, controlled trials. (27)

In these days of informed consent, it is argued that every parent would need to give informed consent in order to allow their children to participate in a trial.  Teachers would also have to give their consent.  The fact that this consent would not be readily forthcoming would either make holding the experiment impossible or would bias the results in unacceptable ways due to the non-random nature of those who declined.  However, when Oakley et al. (28) reviewed the process undertaken for three randomised trials in the UK, they found that the objection to randomisation did not come from potential participants.

Cook reviewed the existence of experimentation in schools.  He found research on pedagogic topics to be very rare, but experimentation in schools widespread when it dealt with preventing negative behaviours or improving health. (29)  He concluded that randomised experimentation is common in the health sciences, with the support of funding agencies, following the clinical trials tradition and existing government practices.  By contrast, none of those norms exist in education.  Petrosino similarly reviewed six types of childhood interventions (education, juvenile justice, child protection, mental health, health care, and general social programs) and found randomised studies were used for nearly 70% of interventions in health care but only 6% in education and juvenile justice. (30)

Experimentation outside health care is rare simply because it is not the accepted way:  funding bodies do not support it, and policymakers do not privilege it.  All of these practicality problems are surmountable.  For example, an experiment on class size could be mandated via the education department, and schools could be randomly assigned centrally.  Or, a promise could be made to principals that if they ended up in the control group, i.e. no change, and the intervention worked, then they would be the first to be offered it at the end of the study.  Lastly, schools could be paid for their participation in the study (separate from any additional resource issues involved in the study).

A subset of this group of objections can be summarised by the sentiment:  "why bother with experimentation when better or simpler methods can replace them?"  The preferred alternative tends to depend on the policy area:  education favours case studies, (31) criminology matched samples, etc. (32)  As may be seen from the discussion of what randomised trials are, the argument that alternative types of research are better is false.  This is not to say non-experimental research is without value;  any properly conducted, systematic enquiry is an improvement on faith-based policymaking.

The threat randomised trials pose to existing researchers unskilled in quantitative methods cannot be overstated. (33)  Additionally, many non-science researchers' deep suspicion of the scientific method is a major factor in the continuation of less robust, and therefore more biased, methods of research. (34)  While policymakers continue to accept consensus rather than evidence, the norm of groupthink will ensure continued barriers to randomised trials.

Furthermore, experimentation occurs already.  Policy interventions based on theory rather than experimentation are a form of experimentation without consent. (35)

To illustrate, it is worth quoting at length from Chalmers' summary of the failure of three widely used programs to withstand critical evaluation:

In her address at the opening of the Nordic Campbell Center, Merete Konnerup, the director, gave three examples showing how the road to hell can be paved with the best of intentions.  An analysis of more than fifty studies suggests that effective reading instruction requires phonics and that promotion of the whole-language approach by educational theorists during the 1970s and 1980s seems likely to have compromised children's learning.  A review of controlled assessments of driver education programs in schools suggests that these programs may increase road deaths involving teenagers:  they prompt young people to start driving at an earlier age but provide no evidence that they affect crash rates.  A review of controlled studies of "scared straight" programs for teenage delinquents shows that, far from reducing offending, they actually increase it. (36)

The phonics versus whole language debate still rages in Victoria, as do regular calls for teenage driver education programs.  These policy "experiments" continue despite the evidence that they lack efficacy.


The crux of the moral arguments is that it is wrong to deny a potentially useful treatment to worthy individuals.  The corollary is that it is wrong to subject people to a potentially damaging intervention.

Because governments deal with limited resources, it is inevitably true that "governments never provide assistance to all those who would benefit from it". (37)  In the case of a trial, a researcher cannot know the result in advance, or at least not with sufficient certainty;  otherwise the experiment would be redundant.  There can be no logical claim that an intervention is positive before it has been tested. (38)  One of the great attributes of trials is that they are limited and therefore much cheaper to implement than a system-wide intervention that may not work.

In relation to the second objection, that some people might be subject to poor policy, two responses can address this concern.  Firstly, these are not clinical trials of untested drugs, capable of great medical harm.  Secondly, it is always possible to provide financial compensation to make sure nobody suffers as a result of participation. (39)



All policymaking occurs within the political sphere.  While evidence-based policy can provide compelling reasons to support a particular policy course of action over another, it is not sufficient to override deep-seated community or partisan hostility to some ideas. (40)  Some potentially beneficial research questions cannot even be trialled due to political opposition.  An example often cited is the proposed ACT heroin trial. (41)  Briefly, in 1997 a randomised trial was proposed to provide 40 heroin users with medically prescribed heroin.  Despite initial approval, the Federal Government subsequently overturned the plan in the face of enormous hostility from sections of the community.

Even when robust evidence exists to support policy action, it is not always sufficient to incite political action, particularly when the evidence runs counter to long-held community beliefs or is confronting for powerful vested interests.  Political imperatives will always create tensions, particularly when research demonstrates that an existing policy is not working.  The expedient response will often be to criticise the research.  When the research comes from an external body such as a university or think tank, the temptation will always be to dismiss unpleasant truths.

The degree to which policymakers dismiss research will depend largely on the culture within government.  In the UK the adoption of evidence-based policy was driven by then Prime Minister Tony Blair. (42)  His personal involvement, and the allocation of resources to its implementation, removed major groups of objectors although not all.

Additionally, political cycles and research cycles rarely align.  The political process often requires immediate answers to problems, especially if adverse media commentary is creating a political as well as policy problem.  Evidence-based policy is not well-suited to crisis management.

Recognition of political constraints is not a reason to discard evidence-based policy.  Policymakers can choose the extent to which they follow evidence.  Chalmers perhaps best expresses the caveat of which policymakers should always be explicitly aware:  "the lives of other people will often be affected by the validity of their judgements". (43)  Having robust evidence is a strong bulwark against both poor decisions and future political problems.


Conducting research can be expensive.  Properly run, large experimental trials that run for several years cost millions of dollars and may conclude that the studied intervention does not work.  Rigorous statistical analysis requires good data on which to conduct the research;  such data is too often unavailable to researchers outside government.

In some policy areas there is a lack of suitably qualified researchers, especially if randomised trials are not part of the discipline's heritage.  Training people is expensive and time consuming.  This adds to the length of time before there are any results.

A clear operational framework covering issues of privacy, consent, authorisation, and funding is required for social science research.  A common problem identified in the UK is the lack of consistency across agencies for data release. (44)  Similarly, the additional costs imposed on local governments and agencies in collecting or preparing data (e.g. making records anonymous) are often a factor in the denial of access to data.


Evidence-based policy is not a widespread feature of policymaking processes in Australia. (45)  Despite this, Australian academics and policy researchers have been active in debating evidence-based policy, particularly in opposing its introduction. (46)

Outside medical clinical practice and drug trials, which are not included in the scope of this paper, there have been only sporadic attempts to implement evidence-based policy.  This lack of adoption has been noted by researchers in areas as diverse as housing, (47) rural doctor shortages, (48) welfare (49) and Aboriginal disadvantage. (50)

Evidence-based programs have largely been driven by committed bureaucrats and academic government advisors in specific areas.  For example, the recently retired NSW Board of Studies President, Professor Gordon Stanley, is credited with insisting on evidence-based practice in the development of the NSW Higher School Certificate.

No Australian government has embraced evidence-based policy in a whole-of-government way or even for an entire department.  Moreover, where evidence-based policy has been adopted in name, the actual practice has often borne little resemblance to UK or US usage. (51)

This is not to suggest that Australian Federal and State governments do not use research and evidence, as it is clear this is not the case.  However, using research on an ad hoc or selective basis is not the same as evidence-based policy.  Evidence-based policy is the process of applying evidence to the formulation and assessment of policy interventions.  This is not widely practised in Australia.


Government can move towards a far more robust approach to policymaking in stages.  Victorian policymaking has benefited from the introduction of the Standard Cost Model (SCM) within a Regulatory Impact Statement (RIS) framework.  However, the SCM measures only the administrative burden that regulation imposes through reporting to government, not the total compliance burden.  As a result, the major regulatory impact of many regulations is not captured by the SCM.  Similarly, the Business Impact Assessment (BIA) process is required when the relevant Minister determines a proposal has potentially significant effects for business.  However, the Premier can exempt any proposal from a BIA.
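The SCM's narrowness follows directly from its formula:  it prices only the paperwork.  The calculation below is a minimal sketch of the model's standard price-times-quantity structure;  all figures (tariff, hours, firm counts) are illustrative assumptions, not official Victorian parameters.

```python
# Standard Cost Model (SCM): administrative burden = price x quantity.
# Price = hourly tariff x time per obligation; quantity = businesses x frequency.
# All figures below are illustrative assumptions, not official tariffs.

def scm_burden(hourly_tariff, hours_per_report, businesses, reports_per_year):
    """Annual administrative burden of a single information obligation."""
    price = hourly_tariff * hours_per_report        # cost per report lodged
    quantity = businesses * reports_per_year        # reports lodged per year
    return price * quantity

# e.g. a quarterly report taking 2 hours at $55/hour across 10,000 firms
burden = scm_burden(hourly_tariff=55.0, hours_per_report=2.0,
                    businesses=10_000, reports_per_year=4)
print(f"${burden:,.0f} per year")
# Note: only the reporting cost is counted.  Substantive compliance costs
# (new equipment, changed processes) fall outside the model entirely.
```

This is why a regulation whose main cost is, say, a mandatory equipment upgrade can register as nearly costless under the SCM.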

The staff of the VCEC are highly skilled in effective policy processes and the organisation is well-equipped to enforce a more prescriptive approach to policymaking.  In increasing order of complexity, the following measures should be considered to move Victorian policymaking towards a rigorous evidence-based system.

  1. Require the BIA or RIS process to assess explicitly why non-regulatory (or quasi-regulatory) measures are not preferred.
  2. Require rigorous benefit-cost analysis at the industry and firm-size level, based on quantitative analysis with all potential impacts (economic, social and environmental) assigned dollar values.
  3. Codify inputs to benefit-cost analysis, e.g. discount rate, long-term growth rates, value of a statistical life, etc.
  4. Benchmark Victorian policy outcomes against national and international comparisons.
  5. Expand the scope, availability and frequency of data collected across all portfolio programs, and encourage university and private research based on high-quality data.
  6. Trial competing policy options in randomised controlled trials before implementation.
  7. Repeal non-evidence-based policy programs.
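Steps 2 and 3 above fit together naturally:  once inputs such as the discount rate are codified centrally, benefit-cost analysis becomes a mechanical, repeatable calculation that is comparable across departments.  The sketch below illustrates this with assumed figures;  the 7 per cent rate and the dollar amounts are hypothetical, not prescribed values.

```python
# Benefit-cost analysis with a centrally codified discount rate, so that
# every department's net present value (NPV) figures share a common basis.
# All inputs here are illustrative assumptions.

CODIFIED_DISCOUNT_RATE = 0.07  # assumed real discount rate, set centrally

def npv(net_benefits, rate=CODIFIED_DISCOUNT_RATE):
    """Discount a stream of annual net benefits (years 1, 2, ...) to today."""
    return sum(b / (1 + rate) ** t for t, b in enumerate(net_benefits, start=1))

# A proposal costing $10m up front, returning $3m/year in net benefits for 5 years
upfront_cost = 10_000_000
annual_benefits = [3_000_000] * 5
net_present_value = npv(annual_benefits) - upfront_cost
print(f"NPV: ${net_present_value:,.0f}")  # positive => benefits exceed costs
```

Codifying the rate matters because NPV is highly sensitive to it:  the same proposal can pass at 4 per cent and fail at 10 per cent, so leaving the choice to individual analysts makes results incomparable across portfolios.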


High-quality data is central to good research, as is ease of access to that data. (52)  Victoria does not exhibit Australian best practice in data collection and dissemination.  As an example, the Information Victoria website no longer contains any data and offers no pointers as to where to find data electronically.

One of the consequences of poor data at the State level is a lack of academic interest in State Government program outcomes.  For example, we and some building industry organisations have produced most of the quantitative work on Victorian house prices and the impact of government policy on them.  This research has relied on private data due to the lack of useful public data on current house prices in Melbourne, average development costs and other necessary inputs.

At a bare minimum, Victoria should create a central agency for the dissemination of data required under statutory rules, for example length of hospital waiting lists or public transport reliability.  Just as the ABS releases the CPI or building approvals, Victorian State data needs to be depoliticised.  The Minister responsible for the data should not be the person releasing it.

Another data category is sanctions and prosecutions brought under state and local government legislation.  There is public interest in sanctions for non-compliance with food safety laws, consumer protection generally and labelling.  One method of determining whether current laws are appropriate is to examine prosecution data.  It may be that some laws are no longer necessary.  Public concern may be misplaced and largely driven by media sensationalism of isolated problems.  In general, the Magistrates' Courts do not collect or publish detailed data on the nature of matters brought before them.  There is no way, for example, to undertake a study on prosecutions of community organisations under the various food safety laws.

Apart from data required under legislation, government collects a vast array of other data.  Some of this data relates to the operations of government (for example, the number of permits or traffic infringements issued) while other data is about people or business conditions.  Often this data is collected on an ad hoc basis and is never released, or is released only partially via a media release.

Local councils also collect data on their operations, yet this is not made available for comparative research.  Many researchers have noted the limitations of local government data.  Recently the Productivity Commission commented that "available data and measurement limitations make it impossible to assess the distributional impacts of revenue raising within councils" (53) and this was only one of the more serious data limitations the Commission listed in its report.  Local government would require additional funding to arrange its data in a consistent format and coordinate its release.

Government could make much of this data available.  This paper recognises there are political sensitivities over some data, but much non-release occurs simply because no resources have been allocated to assemble the data into a form ready for release.  For example, Rural Finance holds the data on exceptional circumstances interest rate subsidy applications;  in the context of the Federal review of drought payments, the release of even aggregate figures could make a substantive contribution to the development of new policy.  Yet this data is not available outside government.

Similarly, the Sentencing Advisory Council has begun to release high-quality data on sentencing for various crimes and prison sentences.  This is a welcome development.  Yet there is no release of the underlying data and there is no scheduled release of an annual update.  Within the proposed model, it would be preferable if a central statistical agency collated and released the underlying data and the Sentencing Advisory Council conducted research on sentencing policy.  This way other researchers would also have access to the data and the Sentencing Advisory Council could concentrate on high-level analysis and reporting.

The task of identifying potentially useful data, funding its collection and dissemination and creating the IT infrastructure to coordinate and centralise this across government is not trivial.  Perhaps the most important part of the task is to identify the data.  One suggestion would be to involve researchers from universities and think tanks in projects to identify world's best practice in data collection and dissemination for each policy area.  For example, town planners and housing policy researchers could scour the world's websites for planning and building data and create a hierarchy of most to least useful data released by other jurisdictions.  Similarly, criminologists could undertake the same process for crime statistics and economists for economic activity data.

Some areas will be more controversial than others.  For example, so-called league tables for school achievement are published by some governments, as is public housing length of tenure.  Both would be regarded as controversial in Victoria.  However, rather than the bureaucracy and political offices determining which data is useful, outsiders should be consulted as to what they would find most useful.  This process of useful data identification should be institutionalised.

At the least, publishing respected researchers' conclusions on optimal data would be a positive step towards a more open conversation over government processes.


At Monash Medical Centre, Victoria already hosts the Australasian Cochrane Centre for evidence in medical practice. (54)  The headquarters of ANZSOG are in Carlton.  An internationally recognised precinct for policy analysis and evaluation could be formed if the headquarters of the ABS moved to Melbourne and it collaborated with the VCEC and Productivity Commission.  A further step could be the establishment of an Australasian Campbell Collaboration Centre similar to the SFI Nordic Campbell Center in Denmark. (55)  The SFI Nordic Campbell Center was established under the purview of the Ministry of Social Affairs and exists to conduct social science research and to disseminate the results.

The benefits to Victoria from becoming a world leader in policy development are significant.  In addition to any job creation and economic spin-offs from training people, Victoria will gain from an increased skill level within its own government.  Perhaps most importantly, if the various institutions and Government can forge strong links, the Victorian Government will itself continue to improve towards better practice.


The public service is central to the development of good policy across all policy areas.  However, non-treasury department policy officers tend to have lower levels of financial skills and often come from disciplines that favour non-quantitative research methodologies.  In an environment of cost-benefit analysis, randomised experiments and rigorous evidence-based policy there is a pressing need to increase the analytical and research skills of policy officers to a similar level across all departments.

Currently, the government is commissioning specialist consultancy work on various topics.  This is particularly true of technical economic modelling, highlighting an area where internal skills need to be upgraded.  International experience has also highlighted the importance of increasing the analytical and research skills of policy officers.

The current ANZSOG/SSA partnership has the potential to deliver high-level strategic approaches to driving innovation within the public sector and facilitate knowledge transfer between exceptional academic policy researchers and government.  This partnership could be expanded to deliver detailed training in the specific skills needed to implement the most robust evidence-based policy model.


Just as competition can drive innovation in business, competition by researchers can create new ideas for policymaking and specific policies.  The public service has unequalled access to data and institutional memory and expertise, so it will always be central to policymaking.  However, the use of university, private and non-government sources of research can bring new perspectives to policymaking.  To some degree, this is already occurring with the commissioning of specialist modelling on defined topics, but there is a need to expand the research questions to cover a broader spectrum of both technical modelling and experimental study.

The use of external research in the policy process and the challenges which arise in best utilising such research have been widely studied in the UK (56) and US.  Nutley (57) identifies seven features of effective practice used to increase research impact:

  • Translation.  Findings must be adapted to specific policy and practice contexts
  • Enthusiasm of key individuals.  Personal contact is most effective
  • Contextual analysis.  Understanding and targeting specific barriers to and enablers of change
  • Credibility.  Strong evidence from a trusted source, including endorsement from opinion leaders
  • Leadership within research impact settings
  • Support.  Ongoing financial, technical and emotional support
  • Integration.  New activities integrated with existing systems and activities


Rigorous, transparent and comprehensive adoption of evidence-based policy across the Victorian public sector will bring substantial benefits to the State.  Most obviously, a focus on "what works" to the exclusion of unsubstantiated and discredited policy directs resources in the most beneficial way.  The ability to redirect resources towards useful policies will benefit those in greatest need.  For this reason alone, evidence-based policy should be embraced.  However, the benefits to Victoria are greater than the specific policy interventions that follow from evidence.  How open Victoria is to facing the challenges that will confront it and how committed the State Government is to itself being innovative will have a major impact on the future prosperity of Victoria.  Victoria already has some institutions integral to the overall adoption of evidence-based policy and there are opportunities to enhance this and position Victoria as a global hub for public policy creation and execution.


Managing risk is well understood as a key component of the capacity to innovate in the private sector.  The ability to manage risk is based on a capacity to properly identify the upside and downside of relevant risks and to accurately estimate the costs of favourable and unfavourable outcomes.  The observation that nearly all businesses fail -- they are victims of innovation by their competitors -- is central to the insight of one of the greatest economists, Joseph Schumpeter.  The market economy is not static and neither is its growth.  The creative destruction whereby up and coming firms overtake and obliterate existing ones is a necessary condition for economic progress. (58)

Government has three vital roles to play in enabling risk-taking by entrepreneurs and scientists.  First, it is the role of government to provide the regulatory environment that ensures innovators can benefit financially from risk-taking.  This requires strong property rights enshrined in law and adhered to in practice.  Second, it has a large role to play in both educating the public about managing risk and providing leadership in striking an appropriate balance between risk and progress.  Third, within government, rigorous science-based risk assessment of environmental, health and other relevant programs, both existing and proposed, is required.  The aim of such reviews should not be to "gold plate" regulation in an attempt to eliminate risk;  instead, the goal should be the lightest-touch regulation possible:  regulation that only affects those doing the wrong thing.

It is also worth noting that government efforts to promote innovation through direct program expenditure have been shown to be unsuccessful, and this lack of success has persisted for many years. (59)  More recently, research has shown that government funding of biotechnology is unable to differentiate between firms likely to succeed and those likely to fail, suggesting the capacity of government to identify future successes remains limited. (60)


There are many ways in which government's actions can assist or harm the private sector's capacity to innovate.  The entire regulatory burden -- including taxation, compliance costs and direct regulatory barriers -- plays a major part in determining the competitiveness of all sectors of the economy.  This paper takes as a given that taxation levels and compliance costs must be reduced, and that matching the Australian average taxation of business is too soft a target in an increasingly interconnected world.  Instead, government must continually look to achieve globally light levels of regulation and taxation if Victoria is to prosper.

There has been a growing tendency in government to define the regulatory burden on enterprises as administrative compliance costs.  Filling out forms for government is certainly an area of cost for business and continuing efforts are needed to alleviate this burden.  However, the far more important and heavy burden is from the regulations themselves.  Examples include occupational health and safety, environmental standards, labelling standards and licensing.  The Victorian Government has correctly identified the broader costs from poor regulation, but as it admits itself, "no Australian government has successfully implemented a systematic policy to measure the costs of new and existing regulation, and comprehensively reduce those costs.  This is the first step in any process to identify areas for reform". (61)

However, beyond taxation and compliance costs, there are two key forms of regulation that can assist or hinder entrepreneurs.  These are property rights and regulatory favouritism.

Maintaining strong property rights requires constant vigilance.  It is all too easy for governments to erode the rights of both real and intellectual property holders through incremental regulation.  Innovators must be confident that they will be able to profit from their risk-taking.  For Victoria, with its strong biotechnology and research industries, this is particularly important.

The regulatory environment should be firm and industry-neutral;  that is, it should not afford privileges or erect barriers for specific firms and industries.  Economic growth in Victoria will be maximised when resources flow to their best use, and this can only occur in an environment that allows new enterprises to compete against existing operations.  Historically, government has not supported this principle and has repeatedly intervened in favour of existing industries at the expense of emerging ones.  Examples include previously high tariff barriers, "orderly marketing" for agriculture, shop trading hours restrictions, liquor licensing restrictions, and registration for hairdressers and other service providers.  Over the past 25 years of reform, many of these restrictions have been removed.

The anti-competitive restrictions that remain are strongly defended by the vested interests they serve.  For example, in pharmacy, regulation continues to act against consumers' interests by effectively banning supermarkets and discount stores from employing pharmacists;  this is strongly supported by the Pharmacy Guild.  In this context, it is unsurprising that the Pharmacy Guild of Australia donated the substantial sum of $266,353 to political parties in the last financial year.

Further reform is necessary.  The National Reform Agenda (NRA) process is instrumental in driving competitive reform and Victoria's leadership is commendable.  However, Victoria must go further than national benchmarks as Victoria does not enjoy the natural advantages of West Australia or Queensland or the global city status of Sydney.  Successive waves of reform have made Melbourne and Victoria an attractive place to live;  however, only 15 years ago Victoria was losing population, had high unemployment and sluggish business activity.  Similarly, eight years ago Sydney was a Mecca yet is now increasingly unattractive.  The rapidity with which situations change requires that Victoria be aggressive in competitive reform across all industries and institutionalise processes that guard against any return to "picking winners" or protection of existing industry.

Another way in which regulation can stymie innovation is by mandating detailed specification standards rather than desired outcomes.  For example, it may be that a new way of reducing air pollution results in lower emissions of harmful pollutants, but that the new process does not meet specification standards designed for the existing process. (62)  While this insight has been widely recognised in public policy design (and was a central plank of the first round of national competition reforms) it is almost as if its lessons have been forgotten in the increasingly prescriptive regulatory frameworks legislated in areas such as environmental star ratings.  A new program, driven by an agency that itself is not creating regulation, to refashion this type of regulation to become industry-neutral is another mechanism the Victorian Government can adopt to enable private sector innovation.


Governments have important roles to play in the capacity of societies to innovate.  The core responsibility of government is to ensure its own approaches to policy formation are congruent with creating an environment that enables private sector innovation and entrepreneurship.  Government can do this in a number of ways and this paper argues the most important action it can take is to reform its own policy processes towards a more robust, research-driven, approach.  This includes the adoption of evidence-based policy and highly detailed and prescriptive benefit-cost modelling.

Moreover, government must continue to resist temptations to attempt to engineer private sector innovation through program expenditure and should instead direct its efforts towards the regulatory environment.


1.  I Chalmers, "Trying to do more good than harm in policy and practice:  The role of rigorous, transparent, up-to-date evaluations", Annals of the American Academy of Political and Social Science, vol. 589, 2003, pp. 22-40.

2.  L Sherman, "Preface:  Misleading evidence and evidence-led policy:  Making social science more experimental", Annals of the American Academy of Political and Social Science, vol. 589, 2003, pp. 6-19.

3.  SFI - Nordic Campbell Center, "Systematic research reviews", 2008, viewed 10 January 2008.

4.  Australasian Cochrane Centre, 2008, viewed 10 January 2008.

5.  British Labour Party, New Labour Because Britain Deserves Better (Election Manifesto), London, 1997.

6.  S Nutley, H Davies & I Walter, "Evidence-based policy and practice:  Cross-sector lessons from the United Kingdom", Social Policy Journal of New Zealand, vol. 20, 2003, pp. 29-48.

7.  T Milewa and C Barry, "Health policy and the politics of evidence", Social Policy and Administration, vol. 39(5), 2005, pp. 498-512;  S Nutley, "Facing the challenge of delivering evidence-based policy programmes", Paper presented at the Delivering crime prevention:  Making the evidence work Conference, 2005.

8.  G Leicester, "The seven enemies of evidence-based policy", Public Money & Management, 1999.

9.  Sherman, loc. cit.

10.  D Green & A Gerber, "The underprovision of experiments in political science", Annals of the American Academy of Political and Social Science, vol. 589, 2003, pp. 94-112.

11.  S Glazerman, D Levy & D Myers, "Non-experimental versus experimental estimates of earnings impacts", Annals of the American Academy of Political and Social Science, vol. 589, 2003, pp. 63-93.

12.  T Cook, "Why have educational evaluators chosen not to do randomised experiments?"  Annals of the American Academy of Political and Social Science, vol. 589, 2003, pp. 114-149.

13.  I Chalmers, loc. cit.

14.  C Hughes, "Evidence-based policy or policy-based evidence?  The role of evidence in the development and implementation of the Illicit Drug Diversion Initiative", Drug and Alcohol Review, vol. 26(4), 2007, pp. 363-368.

15.  D Green & A Gerber, loc. cit.

16.  T Cook, loc. cit.

17.  I Sanderson, "Making sense of 'What Works':  Evidence-based policymaking as instrumental rationality?", Public Policy and Administration, vol. 17(3), 2002, pp. 61-75.

18.  T Cook, op. cit.

19.  W Humes & T Bryce, "Scholarship, research and the evidential basis of policy development in education", British Journal of Educational Studies, vol. 49(3), 2001, pp. 329-352.

20.  D Farrington, "British randomized experiments on crime and justice", Annals of the American Academy of Political and Social Science, vol. 589, 2003, pp. 150-167.

21.  B Davies, "Death to critique and dissent?  The policies and practices of new managerialism and of 'evidence-based practice' ", Gender and Education, vol. 15(1), 2003, p. 95.

22.  T Cook, op. cit.

23.  E St. Pierre, "Scientifically based research in education:  Epistemology and ethics", Adult Education Quarterly, vol. 56(4), 2006, pp. 239-266.

24.  M Feuer, "Response to Bettie St. Pierre's 'Scientifically based research in education:  epistemology and ethics' ", Adult Education Quarterly, vol. 56(4), 2006, pp. 267-272.

25.  M Hammersley, "On 'systematic' reviews of research literatures:  A 'narrative' response to Evans & Benefield", British Educational Research Journal, vol. 27(5), 2001, pp. 543-554.

26.  A Oakley, V Strange, T Toroyan, M Wiggins, I Roberts & J Stephenson, "Using random allocation to evaluate social interventions:  Three recent UK examples", Annals of the American Academy of Political and Social Science, vol. 589, 2003, pp. 170-189.

27.  Oakley et al., loc. cit.

28.  ibid.

29.  T Cook, loc. cit.

30.  A Petrosino, "Estimates of randomized controlled trials across six areas of childhood intervention:  A bibliometric analysis", Annals of the American Academy of Political and Social Science, vol. 589, 2003, pp. 190-202.

31.  T Cook, loc. cit.

32.  D Farrington, loc. cit.

33.  L Rosenstock & LJ Lee, "Attacks on science:  The risks to evidence-based policy", Ethics and Public Health, vol. 92(1), 2002, pp. 14-18.

34.  M Feuer, loc. cit.

35.  I Chalmers, loc. cit.

36.  I Chalmers, op. cit., p. 23.

37.  A Leigh, "Randomised policy trials", Agenda, vol. 10(4), 2003, p. 343.

38.  D Farrington, loc. cit.

39.  A Leigh, "First, Find out What Works", Australian Financial Review, 4 October 2007.

40.  G Leicester, loc. cit.

41.  A Wodak, "Public health and politics:  The demise of the ACT heroin trial", Medical Journal of Australia, vol. 167, 1997, pp. 348-349.

42.  UK Cabinet Office, Adding it up:  Improving analysis and modelling in central government (Performance and Innovation Unit Report), London, 2000.

43.  I Chalmers, op. cit., p. 22.

44.  E Munro, L Holmes, & H Ward, "Researching vulnerable groups:  Ethical issues and the effective conduct of research in local authorities", British Journal of Social Work, vol. 35, 2005, pp. 1023-1038.

45.  A Leigh, "Randomised policy trials", Agenda, vol. 10(4), 2003, pp. 341-354;  D Weatherburn, Ten arguments against evidence-based crime prevention policy:  An assessment of their validity.  Paper presented at the Delivering Crime Prevention:  Making the Evidence Work, 2005.

46.  B Davies, loc. cit.;  G Marston & R Watts, "Tampering with the evidence:  A critical appraisal of evidence-based policy-making", The Drawing Board, vol. 3(3), 2003;  C McDonald, "Forward via the past?  Evidence-based practice as strategy in social work", The Drawing Board, vol. 3(3), 2003, pp. 123-142.

47.  L O'Dwyer, "A critical review of evidence-based policy making (Final Report No. 58):  AHURI", 2004.

48.  D Wilkinson, "Bonded training places:  Evidence-based policy or a stab in the dark?", Australian Journal of Rural Health, vol. 11, 2003, pp. 213-214.

49.  J Banks, R Disney, A Duncan, & J van Reenen, "The internationalisation of public welfare policy", Economic Journal, vol. 115, 2005, pp. C62-C81.

50.  A Leigh, 2007, loc. cit.

51.  P Burton, "Modernising the policy process", Policy Studies, vol. 27(3), 2006, pp. 179-195;  WG Carson, Evidence-based policy and practice, n.p., 2002.

52.  R Warburton & W Warburton, "Canada needs better data for evidence-based policy:  Inconsistencies between administrative and survey data on welfare dependence and education", Canadian Public Policy, vol. 30(3), 2004, pp. 241-255.

53.  Productivity Commission, Assessing Local Government Revenue Raising Capacity (Draft Research Report), Canberra, 2007, p. xxxix.

54.  See

55.  See

56.  J Evans & P Benefield "Systematic reviews of educational research:  Does the medical model fit?", British Educational Research Journal, vol. 27(5), 2001, pp. 527-541;  M Hammersley, "On 'systematic' reviews of research literatures:  A 'narrative' response to Evans & Benefield", British Educational Research Journal, vol. 27(5), 2001, pp. 543-554.

57.  S Nutley, 2005, loc. cit.

58.  M Bianchi & M Henrekson, "Is neoclassical economics still entrepreneurless?" KYKLOS, vol. 58(3), 2005, pp. 353-377.

59.  The Economist, 26 June 1982.

60.  A Fier & O Heneric, "Public R&D policy:  The right turn of the wrong screw?  The case of the German biotechnology industry", Unpublished Working Paper.  Centre for European Economic Research, 2005.

61.  S Bracks, A third wave of national reform:  A new national reform initiative for COAG, Melbourne:  Victorian Premier's Department, 2005, p. 26.

62.  N Ashford, C Ayers, & R Stone, "Using regulation to change the market for innovation", Harvard Environmental Law Review, vol. 9(2), 1985, pp. 419-466.
