By Tim Shea
Special Contributor
It’s a story that makes headlines with depressing frequency. A politician rolls out a shiny new plan to revitalize the community—sometimes it’s in partnership with a private entity, sometimes it’s the government alone—waving an economic impact study or similar analysis to prove how awesome it’s going to be. The project gets approved, the bonds get issued, work begins…and then reality comes crashing down. The project fails to live up to expectations and taxpayers are saddled with millions or even billions in debt and new taxes. Sometimes, taxpayers are left footing the bill long after the project stops delivering.
With such a poor track record, it’s easy to understand why people are cynical about economic impact studies. But as we explained last week, the problem isn’t that economic impact analysis is a fundamentally worthless endeavor; it’s that many of the so-called “expert” practitioners are using seriously flawed methodologies. So how do people who make a living measuring economic impacts habitually miss their mark by eight, nine, or even ten digits?
To answer this question, one needs to understand the different types of economic impact studies and how they work. Some studies are retrospective; they examine the economic effects of a project or event after it’s finished. To do this, researchers use a combination of scientific surveys and other methods to collect information on spending, employment, participation, and so on. This data is then run through industry-accepted multipliers to extrapolate the total economic activity that took place. Such studies are reliable to a reasonable degree of accuracy because, short of outright cooking the books, it’s hard to push the numbers too far from the reality that was observed.
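To make that multiplier step concrete, here is a minimal sketch in Python. The spending figures and multiplier values are hypothetical, chosen only to show the arithmetic; a real study would draw both from survey data and region-specific multiplier tables.

```python
# Illustrative sketch only: how a retrospective study scales observed
# spending and jobs by multipliers. All figures below are hypothetical.

direct_spending = 12_000_000   # surveyed direct spending tied to the event ($)
direct_jobs = 150              # jobs directly supported by the event

# Hypothetical multipliers for illustration
output_multiplier = 1.6        # each direct dollar supports $0.60 of indirect/induced activity
employment_multiplier = 1.4    # each direct job supports 0.4 additional jobs

total_output = direct_spending * output_multiplier
total_jobs = direct_jobs * employment_multiplier

print(f"Total economic output: ${total_output:,.0f}")        # $19,200,000
print(f"Total employment supported: {total_jobs:.0f} jobs")   # 210 jobs
```

Because the direct figures come from observation rather than forecasting, the final estimate can only drift as far from reality as the multipliers themselves allow.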
Other studies, however, are forward-looking. They measure the impact of a proposed project over a lifetime of five or ten years, and sometimes even longer. Obviously, it is impossible to know exactly what the future will bring. And the further into the future you go, the more opportunity your model has to break down. Researchers must therefore rely heavily on a base set of assumptions to create these estimates. And that’s where they get into trouble.
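A short sketch shows how quickly a single optimistic assumption compounds over a forward-looking horizon. The base-year impact and growth rates below are hypothetical; the point is only how much the ten-year total moves when one input changes.

```python
# Illustrative sketch only: how a small change in an assumed growth rate
# compounds over a 10-year projection. All figures are hypothetical.

base_year_impact = 50_000_000   # assumed first-year economic impact ($)
rosy_growth = 0.06              # optimistic assumption: 6% annual growth
conservative_growth = 0.02      # conservative assumption: 2% annual growth
horizon_years = 10

def cumulative_impact(first_year, growth, years):
    """Sum of projected annual impacts over the horizon."""
    return sum(first_year * (1 + growth) ** t for t in range(years))

rosy = cumulative_impact(base_year_impact, rosy_growth, horizon_years)
conservative = cumulative_impact(base_year_impact, conservative_growth, horizon_years)

print(f"10-year impact at 6% growth: ${rosy:,.0f}")
print(f"10-year impact at 2% growth: ${conservative:,.0f}")
print(f"Gap from one assumption: ${rosy - conservative:,.0f}")
```

Under these made-up numbers, a four-point difference in one growth assumption moves the ten-year total by more than a hundred million dollars, which is exactly how headline figures end up so far from eventual reality.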
Economists will be the first ones to tell you that your study will only be as good as the assumptions that you make. To add to that, even a study with the most accurate assumptions possible will still only provide an estimate. We live in a complex world, and no study can provide an economic impact figure that is 100% accurate.
This is why a conservative approach to economic impact projections is almost always the best route. Knowing that your model is based on assumptions and that you cannot hope to be 100% accurate, taking the conservative route puts you in a more defensible position. The claims of any company or agency that says otherwise should be taken with a grain of salt. In other words, economic impact studies shouldn’t be commissioned with a particular result in mind. They should be designed to present a sober, realistic look at plausible future outcomes. Anything less is not only a bad long-term strategy; it’s also bad ethics.
It is our job as economists to present the findings of economic impact studies in a way that is accessible to the average person, so that everyone can discern for themselves whether the methodology represents a conservative estimate or a rose-tinted fantasy. In assessing the veracity of an economic impact study, though, there’s one simple rule that can usually be followed: if it sounds too good to be true, it probably is.
—
If you’re interested in conducting an economic impact analysis for your organization, take a look at what AE did for the University of Texas Athletic Department:
https://www.angeloueconomics.com/our-work/case-studies/university-of-texas-athletics-department