While I am not a statistician by profession, statistics have never been far from my work throughout my career. I took up my first job in government more than thirty years ago at our national planning agency – Bappenas – to lead the Bureau for Economics and Statistics. As the bureau’s name suggests, my main responsibility was to supply the institution’s needs for data – especially economic data – for the planning process. The bureau did not collect data itself but relied on other, more formidable data-collecting agencies. Conveniently, as it turned out, I was also given the responsibility of overseeing the programs and the annual budget of our national statistical office – BPS.
In the subsequent years, as I increasingly took on decision-making responsibilities, my role gradually shifted from facilitating the production of statistics to that of a principal user of statistics. So I thought it might be useful to share in this forum how we, the users of statistics and of information in general, see their role in policy decision making in government.
Let me start with the ideal information situation in which any policy maker would love to be whenever he or she has to make a decision: all the relevant data, with unquestionable accuracy, available in real time at his or her fingertips. Alas, that ideal situation is never to be. Even in the best of circumstances the hard reality is that, information-wise, policy makers are always ‘behind the curve’.
Why? The main reason is that a policy maker is always bound by a timetable. At a particular juncture he or she has to come up with a decision on what actions to take on the basis of the ‘best’ information available at that critical time, which most probably is neither complete nor very accurate. Very often, to get that ‘best’ information, his or her team has to scramble to assemble data from different sources, inside and outside the bureaucracy. The assembled information consists of data of differing completeness and quality, a kind of ‘information salad’ or ‘information soup’. The policy maker has to make the best use of it and make a decision.
To be fair to the statisticians and other data producers, I must add that in reality the problems of policy making do not come only from the ‘supply side’, that is, the availability and quality of information. Very serious problems can in fact occur on the ‘demand side’, the way the available data are used. The ‘cook’, if I may metaphorically so call the supporting team tasked to process and analyze the assembled information and present actionable options, may, through weak technical expertise or a lack of sound judgment, not do a good job. The options are then flawed or misleading. Once such options find their way to the decision maker, it is hard to expect a quality policy to be the outcome, unless we can assume that the decision maker happens to be a supremely wise and extremely knowledgeable person. A rarity indeed.
Important as they are, I will not dwell further on the ‘demand side’ problems. My comments that follow will be largely on the ‘supply side’ ones. Inevitably, my Indonesian experience will influence my story. And I will remain focused on public policy making.
Let me underscore that policy decision making is essentially a multistage input-output process. The quality of the resulting policy is the sum total of the qualities of all the inputs and outputs along the information chain. To improve the quality of the end product – the final policy outcome – one must therefore look into improving the quality of the output of each institution along that chain.
To begin, we should recognize that in formulating policies, national governments rely mostly on information generated within and by their own institutions. The national statistical office usually stands out as the principal source of basic economic and social information. In this country, three other institutions deserve special mention: the central bank is the sole source of monetary statistics, the finance ministry of fiscal statistics, and the financial services authority of data on banking and other financial institutions. These four institutions are the first-tier information providers for policy making.
Certain other institutions also collect data related to their respective functions, but with more limited coverage and generally of lesser quality. They are the second-tier information providers. To name a few: the ministry of home affairs for regional finance and some social indicators, the ministry of agriculture for agriculture-related statistics, the ministry of industry for industrial production statistics, the ministry of transport for air, sea and land transport capacity and traffic, and the ministry of public works for the state of the road and irrigation systems.
The quality of the information varies greatly across institutions, notably among the second-tier ones. It shows their differing capacities in information gathering and processing. But it actually reflects a deeper and more general problem – a lack of appreciation of the critical role of good information in making good decisions. In today’s world it is generally accepted that accumulated institutional knowledge and an effective information system are the foundation of a “smart” institution (and hence smart policies). It seems, though, that such a view has not caught on in many government institutions. This is one of the fundamental challenges for a country’s bureaucratic reformers.
The potential for improving information capability in the institutions I mentioned earlier is substantial. There is still enough room for raising the operational standards of even the first-tier institutions to international best practices. And clearly there is plenty of room to level up the information capability of the second-tier agencies by redefining the information gathering function in each of them, providing a sufficient number of qualified personnel and securing an adequate budget for them. To be sure, partial efforts have been made along this line. But to make them stick, the initiatives must be substantively incorporated into their respective long-term reform agendas. Better still if they are made an integral part of a broader plan for national bureaucratic reform. Systematic efforts along this line will, in my view, give the greatest payoffs for policy making.
Recently I have been trying to follow the lively discussions among statisticians and data scientists on the potential benefits of using privately collected “big data” to improve operations in both government and business. If we believe that the key to national progress is better public policies and better business conduct, then we must take the issue seriously. For a non-specialist like myself, though, it is too complex an issue to jump into. So let me make only some general comments on it.
The first point I wish to make is that not only the private sector but also the government can be a producer of big data. There are many routine government processes at the national and subnational levels that could generate continuous streams of large-scale and up-to-date information. If digitized, they could become invaluable big data systems. Raising the standards of digital technology usage and practice in government agencies would directly improve their ‘traditional’ activities in information gathering and processing, while indirectly also raising the probability of success of any planned government cooperation schemes with the private sector in utilizing other big data systems. Digitizing the government’s administrative processes will give even larger payoffs, as it helps raise the efficiency and integrity of the day-to-day operations of the bureaucracy.
This is a big, long-term job with many challenges. Some of them may spring up at the very beginning. A common problem is that the existing IT systems of government agencies are not compatible with one another. Let me relate a story. I was once tasked to improve coordination of the agencies’ IT development plans, and I can tell you how energy-sapping a job it was. It turned out that each agency had a legacy system not easily reshaped and reoriented. The reason, though, is not so much technological as institutional, namely bureaucratic inertia or resistance to change. The important lesson from the case was that getting a firm hold of their IT budgets was only the minimum requirement. You need more than that. You must have some reserve energy for breaking the many forms of institutional inertia and resistance. One form that we found particularly difficult to deal with has its roots in so-called ‘vendor-driven’ planning practices. By the end of its term the task force registered at best only a partial success. Nevertheless, I would reiterate that digitizing government processes and developing government-owned ‘big data’ is a truly worthwhile effort and should be redoubled in the future.
There is great promise in the possibility of utilizing non-government big data, which has recently grown exponentially as a result of the ever-expanding digitization of ordinary social and economic processes. We are told that we are still at the beginning of a long process. If the government could tap these enormous sources of information, the quality of its administrative and policy decisions could be vastly improved at far lower cost, and society stands to gain.
These new sources of information are useful for strengthening and sharpening ‘traditional’ policies. For instance, they will potentially make obsolete surveys such as those on consumer confidence, investor confidence and the employment situation. Such surveys are essential for calibrating the macroeconomic policy stance. Eventually they will be replaced by direct and real-time readings of the relevant big data. There are other instances, such as in health, education, poverty alleviation and transportation, where the use of big data offers entirely new policy perspectives and possibilities.
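To make the idea of a ‘direct, real-time reading’ a little more concrete, here is a minimal illustrative sketch in Python. It shows how a proxy for a survey-based consumer confidence index might be computed directly from a continuous stream of transaction data. All names and figures here are hypothetical, invented purely for illustration; a real system would of course be far more sophisticated.

```python
from statistics import mean

# Hypothetical daily aggregate card-spending totals (e.g., from a
# payments processor). In practice this would be a large, continuously
# updated stream rather than a short fixed list.
daily_spending = [102.0, 98.5, 101.2, 105.3, 107.1, 106.4, 110.0]

def spending_momentum(series, window=3):
    """A naive real-time 'consumer confidence' proxy: the ratio of the
    average spending in the most recent window to that in the window
    just before it. Above 1.0 suggests strengthening spending."""
    recent = mean(series[-window:])
    previous = mean(series[-2 * window:-window])
    return recent / previous

# Read the indicator 'in real time' instead of waiting for a
# periodic survey to be fielded, collected and tabulated.
index = spending_momentum(daily_spending)
print(round(index, 3))  # here: 1.061, i.e. spending is strengthening
```

The point of the sketch is only the change in timing: the indicator can be recomputed every day as new transactions arrive, where a survey delivers a reading once a month or once a quarter.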
The use of privately collected information by government involves a combination of compulsion and voluntarism. Government can issue regulations compelling private parties to share their information with it. But in a democracy and a market economy, there are political and economic limits to the application of the coercive power of the state. When those limits are reached, we will have to rely on voluntary cooperation agreements between the government and the private parties. Such ‘public-private partnership’ in information sharing is essential but may not be easy to come by, especially in newly digitized social and economic processes.
For traditionally highly regulated sectors such as the financial sector, voluntary cooperation means information-sharing arrangements beyond what is mandated by prevailing prudential regulations, which are themselves continually evolving. From the regulators’ and policy makers’ points of view, obviously, more, better and more timely data would be very helpful for the routine prudential surveillance job and, even more crucially, for managing fluid situations in times of crisis. But we know that beyond a certain point compulsion becomes harmful to the efficient operation of financial institutions and markets, and most probably also to individual customers.
A judicious combination of regulations and cooperation is therefore key to the success of the endeavor. And since the use of big data to support policies most likely entails new institutional arrangements, new territories and a new modus operandi, experts advise us to start with small-scale experiments and to scale them up only after lessons have been learnt from them.
Let me summarize my main points.
· The quality of policy making is determined by the quality of the available information and by the way that information is used.
· In policy making governments still rely mainly on information generated by their own agencies. A key step to improve the quality of policy making is therefore by systematically raising the information producing capability of the relevant institutions.
· Digitizing routine government processes will improve the quality of policy making while also yielding large indirect benefits through improvements in the efficiency and integrity of the government bureaucracy.
· The growth of privately collected big data opens up a new possibility of vastly improving public policies at far lower cost. The key is how to evolve a judicious combination of regulations and voluntary cooperation schemes. The best way forward is to start with small experiments and, as lessons are gained, move on to scale them up.
* This article was originally a keynote address delivered by the author at the Regional Statistics Conference, 22 March 2017, Bali, Indonesia. It is reposted here with the author’s permission.
** Prof. Boediono is a Professor of Economics at Gadjah Mada University and Former Vice President of Indonesia.