Wednesday, March 19, 2008

Risk-Based Funding and the Preparedness System

I took a look at this recent GAO report, which examines the efficacy of DHS's risk-based funding system:

From fiscal years 2002 through 2007, DHS obligated about $19.6 billion in grants, the purpose of which was to strengthen the capabilities of state, local, and tribal governments and others to prepare for and respond to major disasters of any type or cause.
Almost $20 billion...we should be able to get a lot for that, if we spend it wisely.

In its early days, DHS essentially apportioned risk based on population. Lots of people = lots of risk. But that's a crude measuring stick, because it doesn't account well enough for risks in low-population-density areas that might have widespread consequences (e.g., an incident at a nuclear power plant). So...

Since fiscal year 2006, DHS has adopted a more sophisticated risk-based grant allocation approach... For the HSGP allocation process in fiscal year 2007 [and 2008], DHS defined Risk as the product of Threat times Vulnerability and Consequences (R = T * (V & C)).

The Threat Index accounted for 20 percent of the total risk score; Vulnerability and Consequences accounted for 80 percent. For the purposes of the model, DHS considered all areas of the nation equally vulnerable to attack and assigned every state and urban area a vulnerability score of 1.0. Thus, as a practical matter, the final risk score for fiscal years 2007 and 2008 is determined by the threat and consequences scores.

Vulnerability and Consequences...were represented by the following four indices:

• Population Index (40 percent)
• Economic Index (20 percent)
• National Infrastructure Index (15 percent)
• National Security Index (5 percent)
The "Threat x Vulnerability x Consequences" equation is a commonly accepted yardstick, so that's fine. But the idea that every area of the U.S. is equally vulnerable? Umm, that needs some work.

But how to do it? How to measure the vulnerability of a given area or a given target?

Vulnerability, roughly defined, means exposure. But exposure to what? Is it possible to quantify a community's vulnerability in one easily understood dimension? Not really. Different communities are vulnerable in different ways, to different incidents. Another factor is the presence or absence of mitigation practices. For example, Los Angeles and St. Louis are both vulnerable to earthquakes, but Los Angeles is much better prepared to mitigate the effects because earthquake resistance has been part of its building codes for decades. By contrast, St. Louis is full of old buildings that could tumble down in a big one.

Regardless of DHS input, any community should be able to estimate its vulnerabilities. Some are obvious - others may be less so (e.g., the far-reaching effects of a critical infrastructure failure, the cascading effects of an incident elsewhere - such as Houston's experience with New Orleans residents who escaped Hurricane Katrina).

Vulnerability must be built into the system, because it's simply not true that everywhere is equally vulnerable.

And it's also true that we still do not have a reliable nationwide system for measuring our preparedness:
DHS has taken steps to establish goals, gather information, and measure progress, yet its monitoring of grant expenditures does not provide a means to measure the achievement of desired program outcomes to strengthen the nation’s homeland security capabilities. We still know little about how states have used federal funds to build their capabilities or reduce risks.

In short, even though our metrics are better, we're still not thinking of preparedness as a system. We don't have a real picture of where the risks are, or of whether the steps we're taking are reducing them.

To its credit, DHS is trying:
According to FEMA officials, DHS leadership has identified this issue as a high priority, and is trying to come up with a more quantitative approach to accomplish the goal of using this information for the more strategic purpose of monitoring the achievement of program goals.

According to DHS officials, one way DHS is attempting to monitor the development of emergency preparedness capabilities is through the Effectiveness Assessment that began as part of DHS’s fiscal year 2006 HSGP grant guidance. According to program requirements, eligible recipients must provide an “investment justification” with their grant application that links their investments to the initiatives outlined in their state’s Program and Capability Enhancement Plan. DHS officials have said that they cannot yet assess how effective the actual investments from grant funds are in enhancing preparedness and mitigating risk because they do not yet have the metrics to do so and there is insufficient historical information from the grant monitoring process to assess the extent to which states and urban areas are building capabilities.

However, all levels of government are still struggling to define and act on the answers to basic—but hardly simple—questions about emergency preparedness and response:
  • What is important (that is, what are our priorities)?
  • How do we know what is important (e.g., risk assessments, performance standards)?
  • How do we measure, attain, and sustain success?
  • On what basis do we make necessary trade-offs, given finite resources?
DHS has limited information on which to base the answers to these questions.
That last line is vital.

There truly is only so much DHS can - or should - know. In our federal system, we do not want all decision-making to come down from on high. Local and state authorities have to go through the process of making these risk assessments, communicating them to DHS (and to one another) and acting on them.

The picture has to emerge from the local level up, not from the top down. DHS is simply not in a position to manage risks at the local level. Even the information it can supply is insufficient for a thorough risk analysis:
According to DHS officials and HSGP grant assistance documents we reviewed, DHS communicates with its state and local stakeholders by:
  1. providing to each state and urban area the individual threat assessments that DHS is using to calculate the risk analysis model’s Threat Index;
  2. validating the nonpublic, critical infrastructure assets that comprise the risk analysis model’s National Infrastructure Index;
  3. providing midpoint reviews of states’ and urban areas’ draft investment justification proposals that are later reviewed during DHS’s effectiveness assessment process;
  4. providing technical assistance as states and urban areas prepare the documentation for their grant applications; and
  5. convening conferences to solicit stakeholder feedback.
Preparedness is not a series of investments, or targets, or potentially disastrous events. Preparedness is a system.

Local agencies should not view the money that comes in from DHS as money to purchase equipment, prepare for a particular incident, or add individual capabilities. It should be seen as bolstering the preparedness system as a whole.

