Sub-Theme 2 | The Self-Assessment of Budgetary Programs
The SABP was designed to evaluate, each year, the performance of one-third of all expenditure programs run by the central government, so that every government expenditure program would be assessed over a three-year cycle. MOSF provided an evaluation checklist to all central ministries, and based on that checklist each ministry conducted a standardized assessment of the expenditure programs for which it was responsible. MOSF then reviewed the self-assessment results submitted by the agencies, determined the final annual performance evaluation results, and added recommendations on how to improve the performance of the ministries’ expenditure programs. MOSF used the final performance results in budgeting, reexamining and adjusting the size and priority of expenditure programs accordingly; it intended to restructure existing expenditure programs, and even terminate some, based on their performance results. In sum, the primary objective of SABP was to make central government agencies accountable for the performance of their programs ##3D_LAYER##(Park, 2012).##3D_TEXT:Performance management system of budgetary programs in Korea##3D_LINK:https://www.kdevelopedia.org/Resources/economy/performance-management-system-budgetary-programs-korea--04201306130126667.do##3D_LAYER_END##
Major Steps of SABP
The procedures of SABP are summarized as follows. First, all central government agencies choose one-third of their expenditure programs to be evaluated by SABP, consulting MOSF on their selections. MOSF provides a guideline for selecting target programs, reflecting “duplication or overlapping with other programs, improvement of delivery system or resource allocation process, modification of performance objectives or indicators, establishment of monitoring system on ongoing programs, program evaluation and performance analysis, and enhancement of program achievement” (Park, 2012).
Second, after selecting their target expenditure programs, the evaluation divisions of the central government agencies assess the performance of their target programs according to the guidelines provided by MOSF.
Third, the evaluation division of each government agency submits its preliminary self-assessment reports to MOSF for review. MOSF forms a committee of external experts to determine if the agencies’ self-assessment reports are consistent with the guideline that it distributed to all government agencies. The Center for Performance Evaluation and Management of KIPF and NISA participate in the committee with MOSF (Park, 2012).
Fourth, after reviewing the preliminary self-assessment reports, the committee transmits them to the Advisory Board of Evaluation of Budgetary Programs for another review. The board comprises several experts in performance evaluation from universities, research institutes, and private entities (Park, 2012).
Fifth, drawing on the comments and opinions of the Advisory Board of Evaluation of Budgetary Programs, MOSF revises the preliminary self-assessment reports of all government agencies. After soliciting appeals from the agencies evaluated by SABP, MOSF finalizes the SABP reports and makes them public (Park, 2012).
Structure of SABP
MOSF frequently updated the evaluation questions in the guidelines for the self-assessment of the central government agencies: the guidelines included 15 questions in 2005, 13 in 2007, 11 in 2008, and 13 in 2011. MOSF divided all government expenditure programs into two sectors, general administration and information technology. The general administration sector comprised “SOC investment, equipment and facility, direct service provision, equity investment, loan, grants to private sector, and grants to local governments.” The information technology sector had two subcategories, “information system” and “supporting system for information.” As of 2011, for example, the guideline suggested 11 evaluation questions applied commonly to all programs and two questions relevant only to programs in the information technology sector (Park, 2012).
The questions evaluated government expenditure programs in three areas: planning, management, and performance and feedback. In scoring, the sections carried different weights: 20, 30, and 50 points, respectively. The planning section comprised two subsections, “adequacy of program plan” (three questions) and “adequacy of performance plan” (two questions). The management section evaluated “adequacy of program management” (three common questions and two questions specific to information technology). The performance and feedback section checked “accomplishment of performance objectives and feedback of evaluation results” with three questions. The guideline provided not only the 13 evaluation questions but also detailed explanations of how to answer them and assign scores to each. ##3D_LAYER##[Table 2]##3D_TEXT:Evaluation questions of SABP in 2011##3D_LINK:https://www.kdevelopedia.org/Resources/government-law/evaluation-questions-sabp-2011--01201805140149992.do##3D_LAYER_END## summarizes the performance evaluation questions of SABP in 2011.
The evaluation divisions of the government agencies summed the scores of all evaluation questions to obtain a total score for each expenditure program, with a maximum of 100. Programs with a total score of 90 or higher were categorized as “very good,” those scoring from 80 to below 90 as “good,” from 70 to below 80 as “fair,” from 60 to below 70 as “unsatisfactory,” and below 60 as “very unsatisfactory” (Park, 2012).
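The weighting and grading rules above can be sketched in a few lines of code. This is a minimal illustration, not an official implementation: the function names and the per-section score fractions are hypothetical, and the grade boundaries assume the half-open bands described above.

```python
# Illustrative sketch of SABP scoring: section weights of 20/30/50 points
# and the five performance categories. Function names are hypothetical.

SECTION_WEIGHTS = {"planning": 20, "management": 30, "performance_feedback": 50}

def total_score(section_fractions):
    """Weighted total out of 100, given the achieved fraction (0.0-1.0)
    of the available points in each section."""
    return sum(SECTION_WEIGHTS[s] * f for s, f in section_fractions.items())

def categorize(total):
    """Map a total score to its SABP performance category."""
    if total >= 90:
        return "very good"
    if total >= 80:
        return "good"
    if total >= 70:
        return "fair"
    if total >= 60:
        return "unsatisfactory"
    return "very unsatisfactory"

# Example: a program earning 75% of the planning points, 50% of the
# management points, and 75% of the performance and feedback points.
score = total_score({"planning": 0.75, "management": 0.5,
                     "performance_feedback": 0.75})
print(score, categorize(score))  # 67.5 unsatisfactory
```

The heavy 50-point weight on performance and feedback means a program can plan and manage well yet still fall into a penalized category if measured results lag, which matches the performance-oriented intent described above.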
[Table 3] shows the performance evaluation criteria for the “planning” section in SABP 2011.
##3D_LAYER##[Table 4]##3D_TEXT:Performance evaluation criteria of SABP in 2011 : Management##3D_LINK:https://www.kdevelopedia.org/Resources/government-law/performance-evaluation-criterisabp-2011--01201805140149994.do##3D_LAYER_END## illustrates the performance evaluation criteria for the “management” section in SABP 2011.
##3D_LAYER##[Table 5]##3D_TEXT:Performance evaluation criteria of SABP in 2011 : Performance and feedback##3D_LINK:https://www.kdevelopedia.org/Resources/government-law/performance-evaluation-criterisabp-2011--01201805140149996.do##3D_LAYER_END## summarizes the performance evaluation criteria for the “performance and feedback” section in SABP 2011.
MOSF also announced how the performance evaluation information would be used in budgeting and resource allocation across expenditure programs. Programs rated “unsatisfactory” or “very unsatisfactory” were to be penalized with a cut of up to 10% of their previous year’s budget, although the cut was not automatic and was applied only after considering other conditions. In contrast, programs rated “good” or “very good” became candidates for a budget increase in the following year (Park, 2012).
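The budget linkage rule can be sketched as follows. This is a hedged illustration: the function name and return format are hypothetical, and the 10% figure is the maximum possible cut, not an automatic one.

```python
# Illustrative sketch of SABP's budget linkage rule: low grades expose a
# program to a cut of up to 10% of the previous year's budget (not
# automatic), while high grades make it a candidate for an increase.
# Function name and return format are hypothetical; budgets are integers
# (e.g., in million KRW) to keep the arithmetic exact.

def budget_guidance(grade, prev_budget):
    """Return (guidance, budget floor implied by the maximum 10% cut)."""
    if grade in ("unsatisfactory", "very unsatisfactory"):
        # Up to a 10% cut may be applied after considering other conditions.
        return ("candidate for cut of up to 10%", prev_budget * 9 // 10)
    if grade in ("good", "very good"):
        return ("candidate for increase", prev_budget)
    return ("no guidance specified", prev_budget)

print(budget_guidance("very unsatisfactory", 1000))
# ('candidate for cut of up to 10%', 900)
```

Note that “fair” programs fall outside both rules, which is consistent with the observed tendency (discussed below) of agencies to cluster their self-assessments around the middle category.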
Practices and Problems of SABP
Several studies found that SABP produced multiple unexpected problems and outcomes that differed from the original intents. First, the number of government expenditure programs assessed by SABP was not even across years: 555 programs in 2005, 577 in 2006, 585 in 2007, 384 in 2008, 346 in 2009, 473 in 2010, and 389 in 2011, a slightly decreasing trend overall. Additionally, the sectoral distribution of expenditure programs evaluated by SABP was not balanced. Recall that MOSF categorized government expenditure programs as “SOC investment, equipment and facility, direct service provision, equity investment, loans, grants to private sector, and grants to local governments.” Over 2005-2011, programs in the category of “direct service provision” (985 in total) received the most attention from the SABP assessment, followed by “grants to private sector” (957), “grants to local governments” (603), “equity investment” (284), “loans” (269), and “SOC” (173), while programs in the category of “equipment and facility” (38) received the least ##3D_LAYER##(Park and Won, 2012).##3D_TEXT:Performance management system of budgetary programs in Korea##3D_LINK:https://www.kdevelopedia.org/Resources/economy/재정사업-성과분석과-정책적-시사점-policy-agendfor-budgetary-programs-koregovernment--05201408200133739.do##3D_LAYER_END##
Second, SABP did not appear to significantly enhance the overall performance of government expenditure programs over 2005-2011, contrary to MOSF’s expectation. The average SABP assessment scores were 60.1 in 2005, 59.9 in 2006, 66.0 in 2007, 66.6 in 2008, 65.9 in 2009, 62.2 in 2010, and 61.9 in 2011. The average scores across the three sections (planning, management, and performance and feedback) also fluctuated without any clear improvement (Park and Won, 2012).
Third, SABP scores appeared to be inflated. The central government agencies showed a tendency to evaluate the majority of their programs as “fair,” neither too positive nor too negative. The proportions of expenditure programs evaluated as “fair” and above were 84.3% in 2005, 88.7% in 2006, 94.7% in 2007, 73.2% in 2008, 79.5% in 2009, 75.5% in 2010, and 69.6% in 2011. For no clearly documented reason, the agencies became more conservative in self-assessing their expenditure programs after 2008. Park and Won (2012) conjectured that MOSF judged the central agencies to have been too generous in assessing their expenditure programs in 2005-2007 and ordered them to be stricter in their self-assessments after 2008.
Some studies revealed actual and potential problems of the SABP system. First, most central government agencies were incapable of producing essential performance information because of their limited technical capability. A typical human resource management practice placed most public employees in diverse posts and rotated them across distinct tasks, which hindered most government officials from deepening their skills and building expertise in performance evaluation. Second, most central agencies struggled to identify a handful of concrete measures useful for evaluating their outcomes. Third, since most government officials in Korea were not accustomed to performance evaluation systems such as SABP, they were reluctant to conform to the newly introduced systems that intended to link the performance scores of expenditure programs to the budgets assigned to them ##3D_LAYER##(Park, 2012).##3D_TEXT:Performance management system of budgetary programs in Korea##3D_LINK:https://www.kdevelopedia.org/Resources/economy/performance-management-system-budgetary-programs-korea--04201306130126667.do##3D_LAYER_END##
Evaluation of SABP
Some studies have tried to evaluate the performance of SABP itself, although it is too early to reach a clear verdict. First, the central government agencies favored planning and management over performance and feedback in evaluating their own expenditure programs. This tendency is consistent with their typical practices, which focused on input-oriented rather than outcome- and performance-oriented management (Park, 2012).
Second, the relationship between the performance scores of expenditure programs and the budgets allocated to them was not clearly positive. The main objective of SABP was to use performance information to allocate budgets across government programs; SABP was expected to give public employees and government agencies substantial incentives to be more accountable to the public and to perform better. However, scholars evaluated SABP as unsatisfactory in this respect: it is not easy to find a clear association between SABP scores and changes in budget size across years. This implies that a stronger linkage between SABP and resource allocation is required to enhance central government agencies’ conformity to SABP and encourage government officials to seek higher performance ##3D_LAYER##(Park, 2009).##3D_TEXT:Current Issues and Policy Task in Self-Assessment of Budgetary Programs##3D_LINK:https://www.kdevelopedia.org/Resources/economy/재정사업자율평가-현황과-정책과제--05201805080149946.do##3D_LAYER_END##
Third, the preliminary self-assessment scores produced by the government agencies were not consistent with the final scores adjusted by MOSF. Park (2012) found that government agencies tended to overestimate the performance of their expenditure programs, while MOSF was more conservative in evaluating agency performance. The gaps between the preliminary SABP scores and the final scores confirmed by MOSF were 25.6 in 2005, 26.8 in 2006, and 24.6 in 2007. More interestingly, the majority of the evaluation gaps resulted from discrepancies in the performance and feedback section rather than in the planning and management sections ##3D_LAYER##(Lee, 2010).##3D_TEXT:Analysis on the reliability of self-assessment of budgetary programs. Hankyung University; Korea##3D_LINK:https://www.kdevelopedia.org/Resources/government-law/재정사업-성과평가의-신뢰성-분석--05201805170149998.do##3D_LAYER_END##
Fourth, the National Assembly of Korea did not pay serious attention to SABP performance scores when appropriating the budget. For FY 2008, the National Assembly assigned larger budgets than requested to two expenditure programs whose SABP scores were “unsatisfactory” in 2007, while it allocated smaller budgets than requested to 11 expenditure programs whose scores were “very good” in 2007 (Park and Park, 2008). The National Assembly’s failure to link SABP scores to budget allocation substantially undermined the incentives that might have encouraged government agencies to conform to the SABP system (Park, 2012).