How the Millennium Challenge Corporation Grades Its Own Policies


SEATTLE — When U.S. foreign aid agencies subject their programs to thorough, consistent and judicious evaluation, they stand a better chance of benefiting present stakeholders and optimizing future aid packages.

The Millennium Challenge Corporation predicates its evaluation policy on this line of logic. A direct product of the twenty-first century’s data-intensive culture, the Millennium Challenge Corporation manages a tightened aid portfolio geared specifically toward development investment.

Eligibility for Millennium Challenge Corporation aid grants, or “compacts,” is determined by tests of the soundness of a country’s social and economic institutions. If a compact is awarded, the funds are apportioned and monitored in ways intended to empower the partner country.

What unfolds is a symbiotic exchange of capital between American taxpayers and aid beneficiaries — a direct counter-narrative to the myths that miscast foreign aid as a gateway to dependency.

Be that as it may, foreign assistance so patently built for the long term must sacrifice the tangibility of short-term achievements, which, for better or worse, are often the yardstick taxpayers and aid beneficiaries use to measure a program’s success.

That’s where the Millennium Challenge Corporation’s foreign aid evaluation policy comes in.

Targeted data collection, deployment of trained evaluators and tying disbursements to compliance are just some of the ways the Millennium Challenge Corporation holds its compacts accountable for results. The agency also keeps itself honest by running an open data evaluation catalog.

According to the Congressional Research Service (CRS), 13 of the 48 evaluations the Millennium Challenge Corporation had completed by April 2016 were impact evaluations, a far higher proportion than those found in the annals of USAID and the State Department.

An impact evaluation, as opposed to a performance evaluation, generally incorporates control groups or baseline data and is therefore considered to paint a more complete picture of a program’s progress.

Performance evaluations focus on a project’s implementation, tracking and analyzing the inputs and outputs of the people carrying it out. While necessary for general housekeeping and resource tracking, relying on performance evaluations alone increases the likelihood of oversimplifying or misdiagnosing an initiative’s impact.

Because impact evaluations can strain resources and become counterproductive if implemented needlessly, the Millennium Challenge Corporation has established hard-line criteria intended to keep evaluators prudent.

Evaluative rigor, however, does not play out without caveats and setbacks.

Even though the evaluation processes of USAID and the State Department have, thanks to the prompting of the Obama Administration’s Quadrennial Diplomacy and Development Reviews (QDDRs), seen significant improvement in recent years, all three federal aid agencies face similar challenges.

Placing a premium on impact evaluation demands extra time, money and personnel. Contingencies particular to a single place or conflict are bound to arise, interfering with pre-planned evaluation strategies and compromising the uniformity of evaluation standards.

Another challenge that comes with creating actionable foreign aid evaluation policy is its inevitable embroilment in the geopolitics of a given legislative session. A CRS report tracing the U.S. history of foreign aid evaluation from the Foreign Assistance Act of 1961 onward identified key pivots at which the surrounding conversation would tilt in favor of action, only to become entangled in competing political motives.

Such was the fate of USAID’s Center for Development Information and Evaluation (CDIE). Founded in 1983, CDIE did, for many years, raise the bar for USAID evaluation policy-making. But in retrospect, a gap between theory and practice persisted.

Strapped for cash and buffeted by the ever-shifting political landscape of 1990s Washington, CDIE lost its Office of Evaluation in 1994 and saw its evaluation output fall by more than half before the turn of the century. CDIE was supplanted by institutions that emerged during the Bush Administration and was officially dismantled in 2006.

The discussion on evaluation policy won’t be tabled anytime soon, though.

The recent signing of the Foreign Aid Transparency and Accountability Act into law has added urgency and legislative fuel to the demand for stronger evaluative mechanisms.

The Act calls for the establishment of coordinating mechanisms and comparable performance metrics that will facilitate cooperation between federal aid agencies and other stakeholding institutions.

A data-driven, collaborative approach to evaluation will, ideally, yield lessons with broader applications, not just for USAID, the Millennium Challenge Corporation and the State Department, but for all who have a vested interest in U.S. foreign assistance.

The jury is still out on whether the Millennium Challenge Corporation, a relative latecomer to the scene, can fully realize the potential of its rigorous foreign aid evaluation policy. The Foreign Aid Transparency and Accountability Act may well be the catalyst it needs.

Josephine Gurch

Photo: Flickr


Jo writes for The Borgen Project from Lagos, Nigeria. She grew up in Houston, but has never been to the rodeo.
