Five myths about carbon pricing

There is a new working paper out this month titled "Five Myths About Carbon Pricing" by Gilbert Metcalf, professor of economics at Tufts. The goal of the paper is to explain some of the common misunderstandings non-economists might have about carbon pricing.

Usually it is our job as analysts to translate the work of academics into a more accessible format for policymakers, so it is refreshing to see work like this. The paper is excellent and well worth a read for someone interested in better understanding carbon pricing. 

Still, the paper makes its case with theory and formulas, so it is not necessarily an easy read. If you want to understand carbon pricing better but don't have the background to wade through the math, let's go over the five myths Metcalf discusses, focusing on their real-world implications.

Myth 1: Carbon pricing will hurt economic growth

Policymaking is all about making tradeoffs, and of course there must be some economic tradeoff to carbon pricing. This tradeoff was the justification the Trump administration used in 2017 when it backed out of the Paris Agreement.

Although some tradeoff does exist, we need to ask how big the potential economic loss is. Fortunately, some parts of the world have begun to implement carbon taxes, allowing researchers to compare how these areas perform against their peers. 

Using methods such as difference-in-differences and panel regressions, researchers have found that the economic downsides of carbon pricing are likely very small, if they exist at all. As with all new research, there should be healthy skepticism about how well these results will apply going forward. Still, the fact that carbon pricing has not dramatically harmed the economies of the places that have implemented it makes it unlikely that a carbon tax would cause much harm elsewhere.
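To give a flavor of what a difference-in-differences comparison looks like in practice, here is a minimal sketch in Python. The regions, growth rates, and variable names are invented for illustration; they are not drawn from any of the studies Metcalf cites.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: GDP growth for regions with and without a carbon tax,
# observed before and after the tax takes effect (all numbers invented).
data = pd.DataFrame({
    "region":     ["A", "A", "B", "B", "C", "C", "D", "D"],
    "treated":    [1, 1, 1, 1, 0, 0, 0, 0],   # 1 = region adopted a carbon tax
    "post":       [0, 1, 0, 1, 0, 1, 0, 1],   # 1 = observation after adoption
    "gdp_growth": [2.1, 2.0, 1.8, 1.9, 2.2, 2.3, 1.9, 2.0],
})

# The coefficient on treated:post is the difference-in-differences estimate:
# how growth changed in carbon-taxing regions relative to the comparison regions.
model = smf.ols("gdp_growth ~ treated + post + treated:post", data=data).fit()
print(model.params["treated:post"])
```

The interaction term nets out both the level differences between regions and the trends shared by everyone, which is what lets researchers isolate the effect of the tax itself.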

Myth 2: Carbon pricing is a job killer

If you accept the notion that carbon pricing does not harm economic growth, it should not be too surprising that it does not have a major impact on total employment either. In fact, some studies have found that there are slight increases in employment after a carbon tax. 

The more important employment effect is a significant shift between sectors: carbon-intensive jobs are replaced by less carbon-intensive ones.

While it is encouraging that carbon prices can be the catalyst for this shift without a net loss in employment, Metcalf acknowledges that there hasn't been research into the transitional costs of the shift. Even accounting for transitional costs, it is good news for carbon taxes that new green jobs have the potential to replace old carbon-intensive jobs.

Myth 3: Carbon taxes and cap and trade programs are equivalent

For those who are new to this topic, a carbon tax reduces pollution by making it more expensive, while a cap and trade program sets a maximum amount of pollution and lets the market set a price by allowing producers to trade their pollution allowances. From an economic theory perspective, these two instruments are two sides of the same coin.

However, the operation of these two systems in practice leads to many important differences. Metcalf talks about how carbon taxes might be preferred because the infrastructure to collect taxes already exists and taxes are easier to plan for than market fluctuations.

The most important advantage of carbon taxes is how they interact with other pollution reduction policies. Under a cap and trade program, new pollution reduction policies are unlikely to reduce total pollution. This is because the cap allows industries where the new policy takes effect to simply sell their excess pollution allowances to other sectors, leaving total emissions at the cap.

Myth 4: Carbon taxes are incompatible with emission reduction targets

One major concern with carbon taxes is that they never actually require polluters to reduce their emissions. A tax just makes polluting more expensive, and polluters with enough resources can simply keep paying.

While this point is true, taxes can be adjusted to meet emission reduction standards. Metcalf proposes a tax schedule that is tied to emission reduction targets. This would make it clear to everyone when taxes would change and by how much, so firms could easily plan in advance. 

Another point Metcalf makes is that emission reduction targets themselves are often flawed. This is because greenhouse gases are a stock pollutant rather than a flow pollutant: they accumulate in the atmosphere and stick around for a long time, so the damage depends on cumulative emissions rather than emissions in any single year.

For example, if the goal is to reduce emissions by 50% by 2050, then this could be accomplished by halving emissions in the first year and staying at that level or by slowly reducing emissions every year until 2050. The former would result in much lower total emissions than the latter. 

Metcalf’s proposed tax schedule would be tied to cumulative emissions to counteract this. If emissions are too high in one year, that would lead to taxes being higher for a much longer period of time until the cumulative emissions were back in line with targets. 
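A quick back-of-the-envelope comparison shows why the path matters. The numbers here are invented for illustration, not taken from Metcalf's paper.

```python
# Two hypothetical paths to "50% below current emissions by 2050," starting
# from 100 units of annual emissions in 2023 (all numbers invented).
n_years = 2050 - 2023 + 1  # 28 years

# Path 1: cut emissions in half immediately and hold at that level.
immediate = [50] * n_years

# Path 2: cut emissions by an equal step each year, reaching 50 in 2050.
step = (100 - 50) / (n_years - 1)
gradual = [100 - step * t for t in range(n_years)]

# Both paths hit the same 2050 target, but cumulative emissions differ a lot.
print(sum(immediate))       # 1400 units
print(round(sum(gradual)))  # 2100 units
```

Under a cumulative schedule, the second path would keep taxes elevated for much longer, since it puts roughly 700 more units of emissions into the atmosphere on the way to the same annual target.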

Myth 5: Carbon pricing is regressive

Regressive taxes are taxes that take a larger percentage of income from people with lower incomes. Think of a flat tax of $100: that would be 1% of income for someone making $10,000 but only 0.01% of a millionaire's income.

We might expect carbon taxes to be regressive because they are a tax on energy consumption, and generally speaking low-income households spend a larger portion of their income on energy. We call this part of the equation a “user-side impact,” since it affects the users of the taxed good.

What Metcalf goes into detail about is the “source-side impacts” of a carbon tax, or how the policy could affect wages and transfer incomes.

The main point of this section is that even though the tax is regressive on its surface, it still raises public revenue, and that revenue often gets passed back to lower-income individuals. The revenue generated by a carbon tax could, for instance, be used to fund anti-poverty programs. Through good policymaking, we can offset the downsides of a carbon tax.

New research: U.S. poverty associated with 180,000 deaths in 2019

If you are born in the Buckeye-Woodhill neighborhood on the east side of Cleveland, your life expectancy will be 65 years. Meanwhile, if you are born in Shaker Heights, less than two miles away, your life expectancy is 89 years.

What’s the difference between these two neighborhoods? Among other things, poverty.

A 2019 report from the Center for Community Solutions details the relationship between poverty and life expectancy in Ohio neighborhoods, finding a strong negative relationship between poverty rates and life expectancy at birth.

While we have information on how poverty interacts with life expectancy, we don’t have a great estimate of how many people die every year because of poverty. A new study out this week by an international team of policy researchers and sociologists tries to estimate this number.

In “Novel Estimates of Mortality Associated With Poverty in the US,” researchers David Brady from the University of California, Riverside’s School of Public Policy, Ulrich Kohler from the University of Potsdam in Germany, and Hui Zheng from Ohio State University estimate the impact of poverty on mortality by looking at a cohort dataset of income and comparing it to a similar dataset on mortality.

By combining these two datasets, the researchers were able to estimate not only how many people were dying because of poverty, but at what ages they were dying. The chart below plots the percentage of each cohort still alive at each age, shown on the horizontal axis; as a line falls, more of that cohort has died. At age 60, for instance, about 90% of people in poverty are still alive.

A detectable gap opens up in the 40s, with people in poverty dying faster than those not in poverty. The gap stays wide over the next few decades: 10% of people in poverty are dead by age 60, a share those not in poverty do not reach until nearly 75. Death rates for the two groups don't converge until both are nearly 90, at which point about half of each group has died.

The figure below compares how many deaths poverty is associated with in the United States compared to other major causes of death. Notably, according to this estimate poverty ranks as the fourth-highest cause of death in the U.S., only behind heart disease, cancer, and smoking and similar to causes of death like dementia and obesity that kill hundreds of thousands of Americans a year.

Notably, poverty also kills many more Americans per year than headline-grabbing causes of death like drug overdose, suicide, firearms, and homicide.

The researchers found that someone in poverty is anywhere from 26% to 60% more likely to die in a given year than someone not living in poverty. Someone living in chronic poverty over the past ten years had anywhere from a 45% to 102% higher chance of dying.

These findings have big implications for public policy. The United States has consistently had a higher poverty rate and shorter lifespans than a number of similar countries, and the link between the two phenomena may help explain this pattern. Similarly, it may help explain why racial minority groups in the U.S., which have higher rates of poverty, also have lower life expectancy than non-Hispanic whites.

Lastly, the authors suggest that cost-benefit analyses of anti-poverty programs should incorporate mortality impacts into the benefits of programs that alleviate poverty. This seems like a natural use of this research. If pulling people out of poverty has health impacts, especially on the scale of mortality reduction, those benefits should be monetized along with the other important benefits of the policy.

This is another example of how anti-poverty programs can rise above the equality-efficiency tradeoff. If a program that reduces poverty also has health impacts, that is a win-win for society on both of these social goals. And that is an insight that needs to be a part of our analysis.

Is it time for a $15 minimum wage?

When I was living in Nebraska in 2014, the state passed a citizen-initiated minimum wage increase to raise the wage from $7.25 to $9 an hour.

At the time, Nebraska's minimum wage was the highest in the country after adjusting for local cost of living. Nebraska was on the front end of a series of citizen ballot initiatives to raise minimum wages in states across the country, many of which passed by wide margins.

I was surprised when I moved back to Ohio in 2017 that there was not any active movement to increase the state minimum wage. Ohio is a state with a stronger labor history and presence than Nebraska, so I expected there would be a movement to increase the state minimum wage.

Here we are six years later, and ballot language has finally been approved for a vote on a new minimum wage for Ohio. The new proposal would raise the state minimum wage to $12.75 in 2024 and $15 in 2025, then index it to inflation after that.

Since the current minimum wage is also indexed to inflation, the 2025 minimum wage under current law will probably end up in the $11 an hour range. This means the hourly minimum wage would be set roughly four dollars higher under the proposal.

Minimum wages have had an interesting history among economists. They are a classic example of a price floor, where the price of labor is not allowed to fall below a certain value. Neoclassical economic theory suggests this should lead to a shortfall in jobs, since companies only willing to pay below the minimum wage can no longer hire workers who would be willing to work for less.

Over the past couple of decades, though, many economists have been questioning whether minimum wage increases will necessarily lead to employment decreases.

One situation where minimum wage increases will not lead to unemployment is in competitive labor markets where wages are high. If workers can get jobs basically wherever they want and this is driving nearly all wages above the minimum wage, then there are very few workers willing to work for less than the minimum wage.

This could be the case in a place like the Columbus Metropolitan Area, where unemployment is at 3.4% and wages are relatively high.

A problem with this situation is that it also means the minimum wage will not have much of an impact. If few people earn below the new minimum wage, few people are likely to lose their jobs, but few people are eligible for higher wages because of the increase either.

Another situation is in places where markets are not competitive, particularly monopsonistic labor markets. A monopsony is the mirror image of a monopoly: instead of a single seller of a good, there is only a single buyer, in this case a buyer of labor.

If employers (consumers of labor) have too much market power, they can keep wages artificially low, leading to an inefficient labor market. A minimum wage in this scenario can push wages nearer the level they would be in a competitive labor market.
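A stylized numerical example can show the mechanism. The linear supply and demand curves below are made up for illustration; they are not estimates of any real labor market.

```python
# Stylized monopsony example with made-up linear labor supply and demand.
# Inverse labor supply:  w = 5 + 0.5 * L   (workers require higher wages to supply more labor)
# Labor demand (marginal revenue product):  w = 25 - 0.5 * L
a, b = 5.0, 0.5    # supply intercept and slope
c, d = 25.0, 0.5   # demand intercept and slope

# Competitive benchmark: supply equals demand.
L_comp = (c - a) / (d + b)       # 20 workers
w_comp = a + b * L_comp          # $15 wage

# Monopsonist: hires until the marginal cost of labor (a + 2b*L) equals demand.
L_mono = (c - a) / (d + 2 * b)   # about 13.3 workers
w_mono = a + b * L_mono          # about $11.67 wage

# A minimum wage set between the monopsony and competitive wage raises BOTH
# wages and employment: the firm hires everyone willing to work at that wage.
w_min = 14.0
L_min = (w_min - a) / b          # 18 workers

print(round(w_mono, 2), round(L_mono, 1))  # 11.67 13.3
print(w_min, L_min)                        # 14.0 18.0
```

In this toy market, a $14 minimum wage moves both pay and employment closer to the competitive outcome rather than destroying jobs.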

If Wal-Mart is the only employer in town, it can keep the price of labor lower than it would be in a competitive labor market, even while paying just above the minimum wage threshold in order to corner the local labor market. These sorts of dynamics could be at work in some of Ohio's more rural and small-town communities.

While a $15 minimum wage would have been unthinkable in Ohio 20 years ago, it seems pretty pedestrian from a policy standpoint now. Yes, wages will go up in some places, but we have seen this policy implemented elsewhere without mass localized unemployment, suggesting other forces may be at play here.

This commentary originally appeared in the Ohio Capital Journal.

Economists say flat tax proposal will deepen inequality

In a survey released this morning by Scioto Analysis, 18 of 22 economists agreed the flat state income tax of 2.75% proposed by lawmakers would deepen income inequality across the state. 

Curtis Reynolds of Kent State wrote “cutting taxes will certainly not improve inequality, since much of the benefits will be felt by higher income individuals.  On top of that, required cuts to services to balance the budget may disproportionately hurt lower income households.”

David Brasington of the University of Cincinnati, who was uncertain about the inequality impacts of the flat tax, commented “it depends on local government response, how they change income and property taxes in response.”

Additionally, a majority of the economists surveyed (12 of 22) thought that a flat income tax would not help grow the state economy. Eight more were uncertain about the impacts it would have on the overall economy, and only two believed it would help grow the economy.

“Public services and goods are an important part of the necessary infrastructure to grow an economy. Cutting state income taxes will reduce the public infrastructure. Our current tax rate is very competitive with other states and doesn't need to be reduced,” says Rachel Wilson of Wittenberg University.

More quotes and full survey responses can be found here.

The Ohio Economic Experts Panel is a panel of over 40 Ohio economists from over 30 Ohio institutions of higher education, conducted by Scioto Analysis. The goal of the Ohio Economic Experts Panel is to promote better policy outcomes by providing policymakers, policy influencers, and the public with the informed opinions of Ohio's leading economists.

Unpacking the biggest change ever in cost-benefit analysis

Earlier this month, the Office of Information and Regulatory Affairs (OIRA) released its first-ever proposed revisions to Circular A-4, the document that outlines exactly how cost-benefit analysis is supposed to be conducted at the federal level. Because this document defines how federal agencies must do cost-benefit analysis, any changes to it will have major policy implications going forward.

This document is still open to public comment, so none of these changes are official yet. Academics, professionals, and other stakeholders can still give their thoughts and shape some of this official guidance. For now, though, let's take a look at the proposal: what changed, what stayed the same, and what the policy implications might be.

Analytic Baseline

When we make projections as part of CBA, we often compare the potential future under a particular policy alternative to the current-day status quo. This implicitly assumes that if we do not go down this policy path, the world around us will stay exactly the same.

If you think this sounds like an unreasonable assumption, then congratulations because OIRA agrees with you. 

Going forward, the proposed guidance will be to establish an analytic baseline. In other words, if we are forecasting what will happen with a policy proposal, we need to compare it to a status quo forecast.
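Here is a toy illustration of the difference, with all numbers invented. It compares the same hypothetical policy against a frozen-in-place status quo and against a status quo that is forecast to worsen over time.

```python
# Toy illustration of the baseline change (all numbers invented).
# Suppose a policy would hold annual climate damages at $100M for ten years.
years = 10
policy_damages = [100] * years                       # damages each year with the policy

# Old approach: compare against today's damages, assumed to stay flat at $100M.
static_baseline = [100] * years

# Proposed approach: compare against a forecast where damages grow 5% per year.
forecast_baseline = [100 * 1.05 ** t for t in range(years)]

# Net benefits are the damages avoided relative to each baseline (in $M).
print(sum(b - p for b, p in zip(static_baseline, policy_damages)))           # 0
print(round(sum(b - p for b, p in zip(forecast_baseline, policy_damages))))  # ~258
```

Against the frozen baseline the policy looks worthless; against the forecast baseline it avoids roughly $258 million in damages.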

This might strengthen the case for preventative policies such as carbon taxes or green energy subsidies, where we expect the status quo to get worse over time. Another side effect of this change is that CBA is going to become more analytically intensive going forward.

The researchers doing these analyses are going to be asked to make more assumptions about the world around them and justify those assumptions empirically. With policies like this, the question is whether the added complexity of the model adds enough useful information to be worth the additional time and uncertainty introduced by the complexity. 

Distributional Analysis 

In CBA, distributional analysis is the process of exploring how the benefits and costs of a policy are distributed across a society. The question distributional analysis is trying to answer is as follows: who is actually going to be paying the costs and who is actually going to be receiving the benefits of the policy in question?

In the current proposed revisions, OIRA has decided not to require agencies to include distributional analysis as part of their work, but instead gives agencies the discretion to include it when they expect significant distributional differences.

The primary reason behind this decision is that Circular A-4 applies to a wide range of government agencies that all have different goals. Specific guidelines on how to perform distributional analysis may not be appropriate for the range of agencies performing CBA. 

The most important implication of this is that federal CBA is going to continue to largely carry the assumption that costs and benefits are uniformly distributed across the country. For some policies, this might be an appropriate assumption and performing distributional analysis would be a waste of resources. However, policies that specifically target distributional outcomes such as anti-poverty policies should certainly include distributional analysis. 

The onus is on individual agencies to determine whether or not they need to perform distributional analysis. Hopefully they are able to identify when it is appropriate and implement it. 

Discounting

As we’ve talked about before, the question of which discount rate to use still inspires a lot of debate within academic cost-benefit analysis circles. As such, the proposed revisions ask for a lot of comments about the best path moving forward, but for the most part avoid suggesting one specific path is best.

What OIRA did say concretely is that discount rates are likely to continue to be derived from financial data, such as market interest rates, going forward. This is in contrast to simply choosing a discount rate out of thin air, which they argue is ethically problematic.

One interesting change to discount rates is how OIRA is going to handle discounting for future generations going forward. Under a normal discounting framework, we would expect benefits that accrue to future generations to have essentially no net present value, because they get discounted so heavily over time.
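A quick present-value calculation shows how severe this is. The discount rates below are illustrative only, not the rates OIRA proposes.

```python
# Present value of $1 billion in benefits received 150 years from now,
# at a few illustrative discount rates (not OIRA's proposed rates).
future_benefit = 1_000_000_000
years = 150

for rate in (0.07, 0.03, 0.02):
    present_value = future_benefit / (1 + rate) ** years
    print(f"{rate:.0%}: ${present_value:,.0f}")

# Roughly $39,000 at 7%, $12 million at 3%, and $51 million at 2%.
```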

This raises a lot of ethical concerns about our society's responsibility to future generations that are inherently unable to participate in the current decision making process. As a result, OIRA is proposing to release a table that lists the proper discount rates over a 150-year time horizon, taking into account the fact that we care about the benefits of future generations.

None of these changes are final yet. These proposed changes will be open for public comment until the first week of May, and some of them might look very different by then depending on the input OIRA gets.

Still, this is the most significant change to federal CBA ever and it will dramatically change the way policy analysis is done. Hopefully these changes improve the quality of policy analysis and in turn, lead to better policy making decisions.

Reading curriculum changes need evaluation

Earlier this month, Education Week reported on a policy trend that Ohio Gov. DeWine has made a central focus of his 2024-2025 budget: reform of reading curriculum standards.

This reform centers on a long-running debate about how to teach reading in schools. In particular, a popular but controversial program called “Reading Recovery” is in the crosshairs of the governor.

Reading Recovery is a program that focuses on one-on-one instruction where a teacher keeps a running list of words the student read incorrectly. The teacher takes notes about what may have tripped the student up on these particular words.

Reading Recovery had a lot of promise out of the gate. A randomized controlled trial of the program in 2010 showed 1st grade participants in Reading Recovery far outpacing their peers in reading skills after five months of instruction.

Subsequent evaluations of the program, however, have cast doubt on its effectiveness. A follow-up evaluation of participants in the program done by the same center that conducted the original evaluation found Reading Recovery participants falling a half grade level below their peers in 3rd and 4th grade reading proficiency tests.

This evaluation, as well as others in the field, has led researchers to worry that individualized focus helps students in early stages of learning but passes over “foundational” learning. This means that students can learn to read words that are important for a 1st grader, but these skills do not help students get to the level of 3rd grade reading, and can even be detrimental to that goal.

Some who advocate on behalf of teachers, however, have argued that approaches similar to Reading Recovery, like “three-cueing,” an approach that emphasizes context over phonics, should be preserved as an option for teachers.

Education researchers are critical of this sort of approach. Chanda Rhodes Coblentz, an assistant professor of education at the University of Mount Union, called three-cueing “a fancy way of saying we’re allowing kids to guess at words.”

Part of what may appeal to educators about approaches like Reading Recovery is the combination of one-on-one instruction and quick results. In this way, Reading Recovery may be like a keto diet: you get results, you get them fast, but you’re not building the fundamentals needed to make sustainable, long-term progress.

On the other hand, the value of leaving curricular decisions up to teachers is that they can tailor educational experiences to their classroom. Theoretically, Reading Recovery could be a bad program for the average classroom but still a useful program for a subset of classrooms, and teachers could be well-suited for identifying whether it is the right curriculum for their classroom.

If there is an argument for these alternative approaches, we need evidence of their effectiveness. Governor DeWine is seeking $162 million for reading reform efforts, hoping to discourage programs like Reading Recovery and approaches like three-cueing in favor of more evidence-supported curricula.

If defenders of three-cueing are right and these approaches are useful for a subset of students, then let's test them. The state of Ohio should set aside a small portion of these funds to evaluate pilots of alternative teaching techniques and see if they work. And these pilots should be evaluated out to the third-grade level if possible to determine whether the impacts are long-lasting.

Ultimately, we can't rule out of hand that Reading Recovery or three-cueing might still be useful for some students. But if we want to keep these approaches around as options in the face of mounting evidence that they are hurting child reading outcomes, we need better evidence of their effectiveness.

This commentary first appeared in the Ohio Capital Journal.

How can we do more equitable policy analysis?

Earlier this week, I attended a webinar on data equity. For an hour, statistician Heather Krause talked about some of her work experiences where her internal biases and assumptions meaningfully changed the results of her analyses and gave some tips for spotting these in future work. 

At Scioto Analysis, we believe that equity should be considered in every policy analysis. The truth is that while equity is always a part of policy adoption, the only thing that changes from an analytic standpoint is whether or not we choose to acknowledge it. 

Consider this example: we have three classrooms, one with three students, one with six students, and one with nine students. What is the average number of students in each class? This is an easy enough calculation: (3 + 6 + 9) / 3 = 6. As simple as this seems, it actually relies on an important assumption about equity. In this case, the variable we are measuring is classroom size, with one observation per classroom.

Instead, let's consider things from the students' perspectives. What is the average class size that a student experiences? In this case, the variable of interest is the classroom size experienced by each student. Here, our calculation gets longer. If you add up the class sizes experienced by all 18 students in these three classrooms, you get (3+3+3+6+6+6+6+6+6+9+9+9+9+9+9+9+9+9)/18 = 7.
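The same two averages are easy to compute directly; here is a small sketch using the same made-up classrooms.

```python
# The same data, two averages (classrooms of 3, 6, and 9 students).
class_sizes = [3, 6, 9]

# Average class size from the classroom's (or teacher's) perspective.
teacher_view = sum(class_sizes) / len(class_sizes)       # 6.0

# Average class size from the student's perspective: each classroom
# counts once per student enrolled in it.
per_student = [size for size in class_sizes for _ in range(size)]
student_view = sum(per_student) / len(per_student)       # 7.0

print(teacher_view, student_view)
```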

Now we have two different conclusions from the same data. Although in this case the results are quite close, we still need to ask ourselves which of these results is more accurate.

This depends entirely on the question we are trying to answer. If our research is about how smaller classroom sizes affect teachers, then saying the average class size is six best reflects how teachers are experiencing classroom size. 

If instead we are trying to measure the effect of class size on students, then the second number better reflects how students are experiencing classroom sizes. 

This example is meant to show that all of our assumptions have equity implications, whether we notice them or not. When I first saw the classroom example, I immediately thought that six was the only correct answer. It did not cross my mind to reframe what variable we were trying to take the average of and how that could possibly influence the equity of the results. 

In this webinar, we also talked about how equity can fit into every part of the analysis process. Is the data being collected in an equitable way? Is the final report being written to discuss the equity implications of your research? Depending on the situation, as analysts we might not be in charge of some of these steps. However, we need to understand how these assumptions influence our results.

The good news is that being careful about including equity in an analysis is almost exactly the same as simply being a good analyst. Identifying assumptions, understanding their implications, and honestly acknowledging them is the core of good analysis. 

In this sense, more equitable analysis is the same as more scientifically rigorous analysis. The difference is that we need to ask more questions about our own internal biases and assumptions as researchers and make sure they are not getting in the way of giving policymakers the answers they need.

Scioto Analysis releases new cost-benefit analysis of 100% tax proposal

This morning, Scioto Analysis released a new analysis of a bill in the Ohio legislature to tax 100% of income of all Ohio residents.

“All in all, we find this bill to have benefits that far exceed the costs,” said Scioto Analysis Principal Rob Moore. “While we know there are sensitive political considerations to passing a bill like this, we hope policymakers will consider the evidence behind the proposal when making the decision to pass this bill.”

Scioto Analysis analyzed the 100% income tax on the dimensions of economic growth, poverty and inequality impact, and impact on health, education, and subjective well-being.

“Yes, our projections suggest that the 100% income tax would reduce the number of dollars in the economy,” said Moore, “but this would free up a lot of time for other pursuits such as sunbathing, catching butterflies, and improvisational comedy. These are all activities that we know have massive benefits for the public from a long line of economic research.”

The latest coverage suggests the proposal is being wrapped into the current budget bill. Members of the Ohio General Assembly are hoping to pass it in full before it gets bogged down in public discussion, shooting for a deadline of April 1st.

How to Moneyball state government

In 2017, I read the book Moneyball for the first time and was awestruck. My brother had gotten it for me as a Christmas present, and I could not believe how closely the book dovetailed with the work I was doing as a graduate public policy student at the time.

If you're not familiar with it, Moneyball is the story of how the Oakland A's used data analytics to turn one of the least-resourced teams in Major League Baseball into one of the most competitive on the field. Rather than scouting players based on how tall or fast they were, the A's used insights from statistics to create algorithms to pick up athletes who were good at drawing walks and getting on base: the fundamentals of advancing runners and winning baseball games.

Basically, they found a way to get the best win percentage bang for their salary buck.

As I read this book, I pondered why people hadn’t applied these insights to public policy problems. I knew there was low-hanging policy fruit–policies that are cheap but not sexy that can grow our economy, reduce poverty and inequality, and help people live better lives. Why aren’t they getting attention?

I was happy to find that I wasn't alone. A group called Results for America publishes its own version of Moneyball under the straightforward title Moneyball for Government. The book is a series of essays by officials from both the Bush and Obama administrations about how to make government and its programs more evidence-based.

I was especially drawn to the afterword, co-written by Obama administration Office of Management and Budget official Robert Gordon and former Senior Advisor to President Bush for Welfare Policy Ron Haskins. The chapter is called “A Bipartisan Moneyball Agenda” and includes concrete steps toward making the federal government more evidence-based.

We can take some of the suggestions they make and use them to create an agenda for “moneyballing” state government. Below are some suggestions I have for state governments that want to do this.

1. Appoint a Chief Evaluation Officer

If evaluation is going to be a big part of state government, someone needs to be in charge of it, and that person should be close to the governor or, at the very least, the governor's chief budget officer. A chief evaluation officer can provide expert advice to senior executives on how to integrate research into decision making. This can spur the appointment of evaluation officers in major agencies as well. Elevating evaluation to the senior level of leadership will establish it as an important part of how state government policymaking is conducted.

2. Set aside at least 1 percent of each agency’s discretionary funding for evaluation

Agencies should have authority to direct a minimum of 1 percent of their total funds to program evaluation. This authority will help agencies ensure that they do not miss important learning opportunities when they arise. It will also allow agencies to pilot programs, see if they work, and adjust them or eliminate them to free up funding for more promising programs as they arise.

3. Create a comprehensive, easy-to-use database of state program evaluation results available to the public 

Putting all evaluations of state programs online can promote transparency and accountability, inform better decision making, and signal to researchers the importance of using rigorous research and evaluation designs.

4. Institute comprehensive cost-benefit analysis and equity analysis in regulatory and legislative research analyses

Regulatory agency review and legislative research offices are the most trusted sources of information for regulators and legislators respectively in crafting policy for the state. Encouraging regulatory and legislative research analysts to quantify and monetize benefits as well as costs of regulation and legislation will give policymakers more information and help them craft policy that is more effective, efficient, and equitable.

Those are just four examples, but if instituted in state governments across the country, they could have a big impact on adoption of policy that works and provides a good return on public investment. As policy chair for the Ohio Program Evaluators’ Group, I am currently working to promote these sorts of initiatives in Ohio’s state government. I hope more people will push for similar reforms in state governments across the country.

What is meta-analysis?

One of the most important limitations of any single research study is that it only truly represents the data the researcher used. When it comes to extrapolating results to new data or a new context, some studies are better than others.

Studies that use techniques like randomized controlled trials or causal inference methods are better in this regard than straightforward observational studies, but no single study is perfect.

As evidence of this fact, different researchers often ask the same question but end up finding different answers. This doesn’t mean that everyone is wrong, just that we live in an uncertain world and small changes in the inputs to a research project can have big effects on the outcomes.

But, as anyone who understands statistics knows, taking the average of repeated samples is one of the most effective ways to find the true average value of something. If we consider each individual study of the same question as a sample, then it follows that by averaging the results of all the studies we can more accurately approximate the truth. 

We call this process “meta-analysis.” Meta-analysis is the systematic practice of analyzing the varying results of many studies of the same question. In many ways, meta-analysis is very similar to policy analysis: the goal is to synthesize as much information as you can on one topic to arrive at a single answer.
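At its core, the pooling step is usually a weighted average. Here is a minimal sketch of a fixed-effect (inverse-variance weighted) estimate; the effect sizes and standard errors are invented for illustration.

```python
import math

# Invented effect sizes and standard errors from five hypothetical studies
# that all estimate the same quantity.
effects = [0.42, 0.15, 0.30, 0.55, 0.25]
std_errors = [0.20, 0.10, 0.15, 0.25, 0.12]

# Fixed-effect meta-analysis: weight each study by the inverse of its variance,
# so more precise studies count for more in the pooled estimate.
weights = [1 / se ** 2 for se in std_errors]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(round(pooled, 3), round(pooled_se, 3))
```

Random-effects models add a between-study variance term to the weights, but the weighted-average logic is the same.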

However, because it is a scientific tool, a piece of work needs to meet certain standards beyond simply comparing the results of similar studies for it to truly be considered a meta-analysis.

First, meta-analysis requires a complete literature review of the topic of interest. Researchers performing a meta-analysis will often define their search criteria in advance. For example, they might limit themselves to every paper written about the value of recreational fishing in the last 30 years.

Exactly how to define a good literature search is a topic of open discussion: there is no “industry standard” for what constitutes a good literature review. Some researchers advocate including every possible study to avoid selection bias, while others might selectively exclude studies whose methods seem questionable. Ultimately, the literature review should focus on gathering as much information as possible on the research question at hand.

Once you have a body of research to analyze, the next step is to record key characteristics of each study. Most modern meta-analyses use meta-regression techniques to control for important differences between studies. Examples of variables that get recorded include the year the data was collected, the type of statistical model used, and the study's country of origin.
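A meta-regression is essentially a weighted regression of effect sizes on those recorded characteristics. Here is a minimal sketch; the studies, effects, and characteristics are all invented.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Invented study-level data: each row is one study's estimated effect,
# its standard error, and two recorded characteristics.
studies = pd.DataFrame({
    "effect":    [0.42, 0.15, 0.30, 0.55, 0.25, 0.38],
    "std_error": [0.20, 0.10, 0.15, 0.25, 0.12, 0.18],
    "year":      [1995, 2001, 2008, 2012, 2016, 2020],
    "rct":       [0, 0, 1, 0, 1, 1],   # 1 = randomized design
})

# Meta-regression: model effect sizes as a function of study characteristics,
# weighting each study by the inverse of its variance.
model = smf.wls(
    "effect ~ year + rct",
    data=studies,
    weights=1 / studies["std_error"] ** 2,
).fit()
print(model.params)
```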

Often it is best practice for multiple people to perform this coding step independently. That way, they can check that the results are free from any one individual's biases. If two researchers read the same paper and come to different conclusions about what characteristics to record, they know to go back and take a closer look.

Another important consideration for researchers is publication bias. Publication bias stems from the idea that academics and journal editors have very little incentive to publish papers that don’t find any new interesting results. This is a problem because it is still important for the broader understanding of a subject to test a hypothesis and find out that we were wrong. It just doesn’t make for good reading. 

In the context of meta-analysis, publication bias can make our pooled estimates too large and make them look less variable than they really are. There are statistical and graphical checks researchers can perform to look for publication bias, but no single method can rule it out with certainty.
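One common check is an Egger-style regression for funnel-plot asymmetry. The sketch below uses invented numbers and is a simplified version of the idea, not a full implementation of the published test.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Invented effects and standard errors for an asymmetry check.
studies = pd.DataFrame({
    "effect":    [0.42, 0.15, 0.30, 0.55, 0.25, 0.38],
    "std_error": [0.20, 0.10, 0.15, 0.25, 0.12, 0.18],
})

# Egger-style regression: regress each study's standardized effect on its
# precision. An intercept far from zero hints at funnel-plot asymmetry,
# one common symptom of publication bias.
studies["z"] = studies["effect"] / studies["std_error"]
studies["precision"] = 1 / studies["std_error"]

model = smf.ols("z ~ precision", data=studies).fit()
print(model.params["Intercept"], model.pvalues["Intercept"])
```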

When done correctly, a meta-analysis can synthesize an entire field of research into a much more digestible and applicable format. There is a lot of work that goes into it and there are many pitfalls along the way, but the reward is certainly worth it: we get one step closer to the truth.