What does Ohio's middle class look like?

Back in 2014, NPR reported a table of the top jobs by income category. It was a super insightful snapshot that helped contextualize what sorts of occupations people have across the income spectrum.

This has been an important reference for a lot of our work at Scioto, and we thought it would be interesting to update it for Ohio. So we went to the data and made our own list of the top 10 jobs by income category for Ohio.

This table was made using data from the American Community Survey, which the Census Bureau collects annually. We used the 2021 five-year estimate, meaning the data represented here span 2017 through 2021.

One interesting takeaway is how jobs that require education beyond a high school diploma are distributed across the chart. For example, the most common jobs in the six lowest income brackets don’t typically require advanced degrees. The top bracket, meanwhile, is the only category in which every job typically requires an additional degree.

Another interesting takeaway from this chart is how incomes for individual jobs are spread across multiple brackets. Because we are looking at the 10 most frequent jobs in each income group, it makes sense that we see the same jobs pop up multiple times. These are all the most common jobs, and with so many people working in these professions, there are bound to be differences in wages.

Laborers and freight workers stand out as having some of the most variable incomes. These jobs are common among a wide range of people, from those whose income would likely put them under the federal poverty line all the way to people who would probably be categorized as upper-middle class. One potential cause of this might be the role of labor unions in these industries.

One final takeaway that stood out to me is that the only income brackets with jobs that don’t appear somewhere else are the very top and very bottom brackets. For the majority of the most common jobs in Ohio, there is a lot of variability in potential wages. The exception is if you are at the very top or very bottom of the income distribution.

I think this is important because for most jobs, there appears to be some potential for upward mobility. While this is not universally true and depends a lot on context, it is possible for people to stay in the same industry and earn more with more experience. This may still involve changing employers, but it seems like major career changes are not the only path to higher earnings.

The exception is if people are at the very bottom or the very top of the distribution. These people have much less ability to move around without making a big career change. For the top of the distribution, this isn’t too bad. Chief executives aren’t often looking for a new vocation.

For the bottom of the income distribution, this presents a pretty significant problem. Namely that it is much harder to develop human capital when you are spending all of your time and energy just trying to get by. From a policy perspective, job training for people in these lines of work could be an avenue for reducing poverty.

Take a look for yourself at the table and see if anything interesting jumps out to you.

Will AI destroy the middle class?

With the arrival of ChatGPT, economic prognosticators across the country are talking about what AI could do for the economy.

A recent poll by the University of Chicago’s IGM forum (the inspiration behind Scioto Analysis’s Ohio Economic Experts Panel) found only 2% of panelists disagreeing with the statement that artificial intelligence will have a big impact on incomes over the next few decades.

One large impact of artificial intelligence on the U.S. economy could be to transform middle-class jobs. Artificial intelligence could make certain jobs easier, or even eliminate them entirely by fulfilling their key functions. Many of these jobs currently sit in the middle class.

For our purposes here, I will define “middle class jobs” as jobs among the most common in the 20th to 80th income percentiles. I picked these from a great NPR analysis from 2014 that presents American Community Survey data on the most common job categories per income percentile.

Primary School Teachers

Primary school teacher is arguably the most common middle- and upper-middle-class job, making up the most common job category in the 50th to 60th and 60th to 70th income percentiles and coming just behind managers in the 70th to 80th income percentiles.

A 2020 study by McKinsey estimated that the average K-12 teacher spends 22 hours a week on preparation, evaluation and feedback, and administration, 10.5 of which could be reallocated through AI and new technologies. Student instruction and engagement, coaching and advisement, and behavioral-, social-, and emotional-skill development takes an average of 24.5 hours, only 2 of which can be reallocated. This suggests that AI could free up time teachers spend on preparation, evaluation, and administration in favor of more face time with students.

Seeing as this study also found that 70% of teachers identify lack of time as a primary barrier to personalizing learning, this could also help teachers tailor learning more toward students.

Managers

Managers are the top employment category for the 70th to 80th percentile of incomes and right behind primary school teachers for the 60th to 70th percentile.

Managers are often the people who will have to make decisions about when artificial intelligence is employed or not. According to a Wharton School blog, AI could be relevant to managers who spend a lot of time on administrative tasks, screening resumes of job candidates, customer service, marketing, and merchandising.

Truck Drivers

Truck driving might be the job most threatened by artificial intelligence technology. Trucking is a top-three profession in every income decile from the 20th to the 70th percentile in the United States.

Autonomous trucking could functionally eliminate the need for millions of trucking jobs across the United States. Goldman Sachs projects that in 20 years, the United States will lose 300,000 trucking jobs a year to automation. Given that the average trucker in the United States is 47 years old, it is probably not a good time for young people to get involved in the trucking industry.

Secretaries

Secretaries are the cornerstone lower-middle-class employment category in the United States. They are the most common employment category at the 30th to 40th and 40th to 50th percentiles of income and are only behind nursing aides for the 20th to 30th percentile of income.

While a lot of the duties of secretaries (scheduling, managing records) are being automated, there are other roles that could be harder to automate. As long as managers and professionals need someone with a human touch to assist them, secretaries will be in demand.

What I take away from this list is that the idea that AI will take away middle class jobs is a bit simplistic. While trucking is almost sure to suffer as an area of employment, I don’t see primary school teachers, managers, or secretaries being quickly automated away. If anything, their jobs will get easier and free them up from tedium in order to carry out more mission-critical work. I also wonder which jobs could arise from AI: will AI open opportunities for people to do things they weren’t able to do before?

It seems that the best way to be realistic, looking at the data available, may be to be optimistic.

What is "human capital"?

Recently, I’ve been working on updating the Genuine Progress Indicator (GPI) indicators for Ohio, and one of the new categories I’ve been working on has been services from human capital. In the context of GPI, this is basically how much value things like green jobs and education generate in terms of increases in long-term earnings.

More generally though, human capital refers to the knowledge and skills that help an individual be productive. In practice, this means that if two people are performing the same job, the person with more human capital will be more efficient because she knows how to do it better.

In many ways, human capital is a lot like the economic concept of utility. It’s not something we can ever actually measure; there is no well-defined human capital statistic. But we know as a society what sorts of things contribute to it.

Breaking it down even further, human capital can be separated into two main categories: general and specific. As the names suggest, general human capital refers to knowledge and skills that are universally applicable. Things like basic levels of education and communication skills. Specific human capital is the specialized training that individuals have. This might include higher education or mastering a trade.

The reason it’s important to understand all of this is that policymakers are always trying to find ways to invest in the development of human capital. One of the easiest ways to grow a local economy is to invest in training the people who live there.

From an equity perspective, this is especially important for marginalized communities who have historically had much less access to pathways to human capital development like higher education. There is a positive generational feedback loop where individuals who have more human capital are likely to earn higher wages, which leads to their children having greater access to human capital development. 

Another important consideration when investing in human capital is retaining the individuals receiving the training. A common issue in areas with relatively low human capital is brain drain, which is when people who grow their human capital leave to go live somewhere where they can presumably earn more. To counter this, policymakers can offer incentives to highly skilled workers who choose to stay in their communities.

Another issue relevant to human capital is how it interacts with the criminal justice system. People who end up in prison have an especially hard time finding jobs after their sentence. This struggle makes it more likely that, in order to survive, they turn to illegal activity.

This problem has deep roots that no one program can fix, but increasing job training programs in prisons is a step in the right direction. If inmates have the opportunity to increase their human capital during their incarceration, they have better odds of integrating back into the workforce upon release.

Improving human capital will not fix all the problems in society, but it can give individuals more opportunity. People with greater human capital have greater career flexibility, more opportunities, and can contribute more to their local economies. Policymakers interested in economic growth should always be looking for ways to invest in people and retain their skills.

Five myths about carbon pricing

There is a new working paper out this month titled Five Myths About Carbon Pricing by Gilbert Metcalf, professor of economics at Tufts. The goal of the paper is to explain some of the common misunderstandings non-economists might have about carbon pricing.

Usually it is our job as analysts to translate the work of academics into a more accessible format for policymakers, so it is refreshing to see work like this. The paper is excellent and well worth a read for someone interested in better understanding carbon pricing. 

Still, the paper proves its points by examining theories and formulas, so it is not necessarily an easy read. If you want to understand carbon pricing better but don’t have the background to take in lots of math, let’s go over the five myths Metcalf talks about, focusing on the real-world implications.

Myth 1: Carbon pricing will hurt economic growth

Policymaking is all about making tradeoffs, and of course there must be some economic tradeoff to carbon pricing. This tradeoff is the justification the Trump administration used in 2017 when it backed out of the Paris Agreement.

Although some tradeoff does exist, we need to ask how big the potential economic loss is. Fortunately, some parts of the world have begun to implement carbon taxes, allowing researchers to compare how these areas perform against their peers. 

Using methods such as difference-in-differences and panel regressions, researchers have found that the economic downsides of carbon pricing are likely very small, if they exist at all. As with all new research, there should be healthy skepticism about how these results can be applied going forward. Still, the fact that carbon pricing has not dramatically harmed the economy of any place that has implemented it suggests a carbon tax is unlikely to cause much harm.
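The difference-in-differences logic behind these studies can be sketched in a few lines. All the numbers below are hypothetical, purely to illustrate the comparison; a real panel study would add controls, many regions, and standard errors:

```python
# Difference-in-differences: compare the change in an outcome (say, GDP
# growth) in a region that adopted a carbon tax against the change in a
# comparable region that did not. All figures here are made up.

def diff_in_diff(treated_pre, treated_post, control_pre, control_post):
    """Treatment effect = treated group's change minus control group's change."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Hypothetical average annual GDP growth rates (percent):
effect = diff_in_diff(
    treated_pre=2.1,   # carbon-tax region, before the tax
    treated_post=2.0,  # carbon-tax region, after the tax
    control_pre=2.2,   # comparison region, same two periods
    control_post=2.0,
)
print(f"Estimated effect of the tax on growth: {effect:+.1f} points")
```

The subtraction of the control group's change is what nets out economy-wide trends (a recession, say) that would hit both regions regardless of the tax.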

Myth 2: Carbon pricing is a job killer

If you accept the notion that carbon pricing does not harm economic growth, it should not be too surprising that it does not have a major impact on total employment either. In fact, some studies have found that there are slight increases in employment after a carbon tax. 

The more important employment effect is the significant shift between sectors: carbon-intensive jobs are being replaced by non-carbon-intensive jobs.

While it is encouraging that carbon prices can be the catalyst for this shift without a net loss in employment, Metcalf acknowledges that there hasn’t been any research into the transitional costs of this shift. Even with transitional costs, it is good news for carbon taxes that new green jobs have the potential to replace old carbon-intensive jobs.

Myth 3: Carbon taxes and cap and trade programs are equivalent

For those who are new to this topic, a carbon tax reduces pollution by making it more expensive while a cap and trade program defines the maximum amount of pollution and lets the market set a price by allowing producers to trade their allowances of pollution. From an economic theory perspective, these two instruments are two sides of the same coin.

However, the operation of these two systems in practice leads to many important differences. Metcalf talks about how carbon taxes might be preferred because the infrastructure to collect taxes already exists and taxes are easier to plan for than market fluctuations.

The most important advantage of carbon taxes is how they interact with other pollution reduction policies. Under a cap and trade program, new pollution reduction policies are unlikely to reduce total pollution. This is because the cap allows the industries where a new policy takes effect to simply sell their excess pollution allotment to other sectors.

Myth 4: Carbon taxes are incompatible with emission reduction targets

One major concern with carbon taxes is that they never actually require polluters to reduce their emissions. They just make polluters pay more, and if polluters have the resources, they can keep paying.

While this point is true, taxes can be adjusted to meet emission reduction standards. Metcalf proposes a tax schedule that is tied to emission reduction targets. This would make it clear to everyone when taxes would change and by how much, so firms could easily plan in advance. 

Another point Metcalf makes is that emission reduction targets are often flawed. This is because greenhouse gas emissions are a stock pollutant rather than a flow pollutant, meaning they stick around for a long time. 

For example, if the goal is to reduce emissions by 50% by 2050, then this could be accomplished by halving emissions in the first year and staying at that level or by slowly reducing emissions every year until 2050. The former would result in much lower total emissions than the latter. 
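With hypothetical numbers, the gap between those two paths is easy to see. Assuming a starting level of 100 emission units per year and a 30-year window (both figures are my own illustrative choices), the gradual path emits 50% more in total:

```python
# Two paths to a 50% emissions cut over 30 years, starting from a
# hypothetical 100 units/year. The point is the cumulative gap.

years = 30
start = 100.0

# Path A: cut emissions in half immediately, then hold steady.
path_a = [start / 2] * years

# Path B: reduce linearly from 100 down to 50 over the 30 years.
path_b = [start - (start / 2) * t / (years - 1) for t in range(years)]

print(f"Cumulative emissions, immediate cut: {sum(path_a):.0f}")  # 1500
print(f"Cumulative emissions, gradual cut:   {sum(path_b):.0f}")  # 2250
```

Both paths hit the same 50% target in the final year, which is why a target stated only in terms of the end-year emission rate misses so much of what matters for a stock pollutant.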

Metcalf’s proposed tax schedule would be tied to cumulative emissions to counteract this. If emissions are too high in one year, that would lead to taxes being higher for a much longer period of time until the cumulative emissions were back in line with targets. 

Myth 5: Carbon pricing is regressive

Regressive taxes are taxes that take a larger percentage of someone’s income the lower their income is. Consider a flat tax of $100: that would be 1% of the income of someone making $10,000, but only 0.01% of a millionaire’s income.
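That arithmetic, sketched out as a quick check (the incomes are the same illustrative figures):

```python
# A flat tax takes a shrinking share of income as income rises --
# which is the definition of a regressive tax.

flat_tax = 100.0

for income in (10_000, 100_000, 1_000_000):
    share = flat_tax / income * 100  # tax as a percent of income
    print(f"${income:>9,}: tax is {share:.2f}% of income")
```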

We might expect carbon taxes to be regressive because they are a tax on energy consumption, and generally speaking, low-income households spend a larger portion of their income on energy. We call this part of the equation a “user side impact,” since it impacts the user of the taxed good.

What Metcalf goes into detail about is the “source side impacts” of a carbon tax, or how this policy could affect wages and transfer incomes. 

The main point of this section is that despite the fact that this tax is on its surface regressive, it is still a tax which increases public revenue, which then often gets passed through to lower income individuals. The revenue generated by a carbon tax could be used to fund anti-poverty programs. Through good policymaking, we can offset the downsides of a carbon tax. 

New research: U.S. poverty associated with 180,000 deaths in 2019

If you are born in the Buckeye-Woodhill neighborhood on the east side of Cleveland, your life expectancy will be 65 years. Meanwhile, if you are born in Shaker Heights, less than two miles away, your life expectancy is 89 years.

What’s the difference between these two neighborhoods? Among other things, poverty.

A 2019 report from the Center for Community Solutions details the relationship between poverty and life expectancy in Ohio neighborhoods, finding a strong negative relationship between poverty rates and life expectancy at birth.

While we have information on how poverty interacts with life expectancy, we don’t have a great estimate of how many people die every year because of poverty. A new study out this week by an international team of policy researchers and sociologists tries to estimate this number.

In “Novel Estimates of Mortality Associated With Poverty in the US,” researchers David Brady from the University of California, Riverside’s School of Public Policy, Ulrich Kohler from the University of Potsdam in Germany, and Hui Zheng from Ohio State University estimate the impact of poverty on mortality by looking at a cohort dataset of income and comparing it to a similar dataset on mortality.

By combining these two datasets, the researchers were able to estimate not only how many people were dying because of poverty, but how quickly they were dying. The chart below shows the percentage of the cohort still alive at each age on the horizontal axis. So for instance, at age 60, about 90% of people in poverty are still alive.

A detectable trend starts in the 40s, with people in poverty dying sooner than those not in poverty. The gap between the two groups is wide over the next few decades: 10% of people in poverty are dead by age 60, a figure not matched by those not in poverty until they are nearly 75. Death rates for the two groups don’t converge until both are nearly 90, at which point about half of each group has died.

The figure below compares how many deaths poverty is associated with in the United States compared to other major causes of death. Notably, according to this estimate poverty ranks as the fourth-highest cause of death in the U.S., only behind heart disease, cancer, and smoking and similar to causes of death like dementia and obesity that kill hundreds of thousands of Americans a year.

Notably, poverty also kills many more Americans per year than headline-grabbing causes of death like drug overdose, suicide, firearms, and homicide.

The researchers found that someone in poverty is anywhere from 26-60% more likely to die in a given year than someone not living in poverty. Someone living in chronic poverty over the past ten years had anywhere from a 45-102% higher chance of dying.

These findings have big implications for public policy. The United States has consistently had a higher poverty rate and shorter lifespans than a number of other similar countries; the link between the two phenomena may help explain this trend. Similarly, it may help explain why racial minority groups, which have higher rates of poverty than non-Hispanic whites in the U.S., also have lower life expectancy.

Lastly, the authors offer that cost-benefit analysis of anti-poverty programs should incorporate mortality impacts into the benefits of programs that alleviate poverty. This seems like a natural use of this research. If pulling people out of poverty has health impacts, especially on the scale of mortality reduction, those benefits should be monetized along with other important benefits of the policy. 

This is another example of how anti-poverty programs can rise beyond the equality-efficiency tradeoff. If a program that reduces poverty also has health impacts, that is a win-win for society on the dimension of these two social goals. And that is an insight that needs to be a part of our analysis.

Is it time for a $15 minimum wage?

When I was living in Nebraska in 2014, the state passed a citizen-initiated minimum wage increase to raise the wage from $7.25 to $9 an hour.

At the time, Nebraska’s minimum wage was the highest in the country after adjusting for local cost of living. Nebraska was on the front end of a series of citizen ballot initiatives to raise minimum wages in states across the country, many passing by wide margins.

I was surprised when I moved back to Ohio in 2017 that there was not any active movement to increase the state minimum wage. Ohio is a state with a stronger labor history and presence than Nebraska, so I expected there would be a movement to increase the state minimum wage.

Here we are six years later, and ballot language has finally been approved for a vote on a new minimum wage for Ohio. The new proposal would raise the state minimum wage to $12.75 in 2024 and $15 in 2025 then index it to inflation after that.

Since the current minimum wage is also indexed to inflation, the 2025 minimum wage under current law will probably end up in the $11 an hour range. This means that the hourly minimum wage would be set four dollars higher under the proposal.

Minimum wages have had an interesting history among economists. They are a classic example of a price floor, where prices for labor are not allowed to fall below a certain value. Neoclassical economic theory suggests this should lead to a shortfall in jobs, since companies only willing to pay less than the minimum wage will no longer hire workers who would have accepted lower pay.

Over the past couple of decades, though, many economists have been questioning whether minimum wage increases will necessarily lead to employment decreases.

One situation where minimum wage increases will not lead to unemployment is a competitive labor market where wages are high. If workers can get jobs basically wherever they want and this drives nearly all wages above the minimum wage rate, then there are very few workers willing to work for less than the minimum wage.

This could be the case in a place like the Columbus Metropolitan Area, where unemployment is at 3.4% and wages are relatively high.

A problem with this situation is that it also means the minimum wage will not have much of an impact. If few people make below the minimum wage, then few are at risk of losing their jobs, but equally few are eligible for higher wages because of the increase.

Another situation is in places where markets are not competitive, particularly monopsonistic labor markets. Monopsony is the opposite of monopoly: instead of one seller of a good, there is only one buyer of a good, in this case labor.

If employers (consumers of labor) have too much market power, they can keep wages artificially low, leading to an inefficient labor market. A minimum wage in this scenario can push wages nearer the level they would be in a competitive labor market.

If Wal-Mart is the only employer in town, they can keep prices for labor lower than they would be in a competitive labor market. They also could raise the price over the minimum wage threshold in order to corner the labor market. These sorts of dynamics could be at work in some of Ohio’s more rural and small-town communities.

While a $15 minimum wage would have been unthinkable in Ohio 20 years ago, it seems pretty pedestrian from a policy standpoint now. Yes, there will be some places where wages will go up, but we’ve seen this policy implemented elsewhere and have not seen mass localized unemployment, suggesting other forces may be at play here.

This commentary originally appeared in the Ohio Capital Journal.

Economists say flat tax proposal will deepen inequality

In a survey released this morning by Scioto Analysis, 18 of 22 economists agreed the flat state income tax of 2.75% proposed by lawmakers would deepen income inequality across the state. 

Curtis Reynolds of Kent State wrote “cutting taxes will certainly not improve inequality, since much of the benefits will be felt by higher income individuals. On top of that, required cuts to services to balance the budget may disproportionately hurt lower income households.”

David Brasington of the University of Cincinnati, who was uncertain about the inequality impacts of the flat tax, commented “it depends on local government response, how they change income and property taxes in response.”

Additionally, the majority of economists (12 of 22) think that a flat income tax would not help grow the state economy. Eight more were uncertain about the impacts this would have on the overall economy, and only two believed this would help grow the economy.

“Public services and goods are an important part of the necessary infrastructure to grow an economy. Cutting state income taxes will reduce the public infrastructure. Our current tax rate is very competitive with other states and doesn't need to be reduced,” says Rachel Wilson of Wittenberg University.

More quotes and full survey responses can be found here.

The Ohio Economic Experts Panel is a panel of over 40 Ohio economists from over 30 Ohio higher education institutions conducted by Scioto Analysis. The goal of the Ohio Economic Experts Panel is to promote better policy outcomes by providing policymakers, policy influencers, and the public with the informed opinions of Ohio’s leading economists.

Unpacking the biggest change ever in cost-benefit analysis

Earlier this month, the Office of Information and Regulatory Affairs (OIRA) released its first-ever proposed revisions to Circular A-4, the document that outlines exactly how cost-benefit analysis is supposed to be conducted at the federal level. Because this defines how cost-benefit analysis must be done by federal agencies, any changes to this document will have massive policy implications going forward.

This document is still open to public comment, so none of these changes are official yet. Academics, professionals, and other stakeholders can still give their thoughts and change some of this official guidance. For now though, let’s take a look at the proposal: what changed, what stayed the same, and what the policy implications might be.

Analytic Baseline

When we make projections as part of CBA, we often compare the potential future under a particular policy alternative to the current-day status quo. This requires the assumption that if we do not go down this policy path, the world around us will stay exactly the same.

If you think this sounds like an unreasonable assumption, then congratulations because OIRA agrees with you. 

Going forward, the proposed guidance will be to establish an analytic baseline. In other words, if we are forecasting what will happen with a policy proposal, we need to compare it to a status quo forecast.

This might strengthen the case for preventative policies such as carbon taxes or green energy subsidies where we will expect the status quo situation to get worse over time. Another side effect of this change is that CBA is going to become more analytically intensive going forward. 

The researchers doing these analyses are going to be asked to make more assumptions about the world around them and justify those assumptions empirically. With policies like this, the question is whether the added complexity of the model adds enough useful information to be worth the additional time and uncertainty introduced by the complexity. 

Distributional Analysis 

In CBA, distributional analysis is the process of exploring how the benefits and costs of a policy are distributed across a society. The question it tries to answer: who will actually pay the costs, and who will actually receive the benefits, of the policy in question?

In the current proposed revisions, OIRA has decided not to require agencies to include distributional analysis as part of their work, instead giving agencies the discretion to include it when they expect significant distributional differences.

The primary reason behind this decision is that Circular A-4 applies to a wide range of government agencies that all have different goals. Specific guidelines on how to perform distributional analysis may not be appropriate for the range of agencies performing CBA. 

The most important implication of this is that federal CBA is going to continue to largely carry the assumption that costs and benefits are uniformly distributed across the country. For some policies, this might be an appropriate assumption and performing distributional analysis would be a waste of resources. However, policies that specifically target distributional outcomes such as anti-poverty policies should certainly include distributional analysis. 

The onus is on individual agencies to determine whether or not they need to perform distributional analysis. Hopefully they are able to identify when it is appropriate and implement it. 

Discounting

As we’ve talked about before, the question of which discount rate to use still inspires a lot of debate within academic cost-benefit analysis circles. As such, the proposed revisions ask for a lot of comments about the best path moving forward, but for the most part avoid suggesting one specific path is best.

One concrete position OIRA did take is that discount rates will likely continue to be calculated from financial market data going forward. This is in contrast to choosing a discount rate out of thin air, which they claim is ethically problematic.

One interesting change to discount rates is how OIRA is going to handle discounting for future generations going forward. Under a normal discounting framework, we would expect benefits that accrue for future generations to essentially have no net present value because they get discounted by so much over time.

This raises a lot of ethical concerns about our society's responsibility to future generations, who are inherently unable to participate in current decision-making. As a result, OIRA is proposing to release a table that lists the proper discount rates over a 150-year time horizon, taking into account the fact that we care about the benefits accruing to future generations.
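To see why ordinary discounting nearly zeroes out far-future benefits, consider how a constant rate shrinks a fixed benefit over time. The 3% rate here is my own illustrative choice, not a figure from the proposal:

```python
# Present value of a $100 benefit received t years from now, discounted
# at a constant annual rate. The 3% rate is illustrative only.

def present_value(amount, rate, years):
    return amount / (1 + rate) ** years

for t in (10, 50, 100, 150):
    pv = present_value(100.0, 0.03, t)
    print(f"$100 in {t:>3} years is worth ${pv:.2f} today at 3%")
```

At a 150-year horizon the discount factor compounds so heavily that the present value falls to roughly a dollar, which is why benefits to future generations all but vanish under a constant-rate framework.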

None of these changes are final yet. The proposed revisions will be open for public comment until the first week of May, and some of them might look very different afterward depending on the input OIRA gets.

Still, this is the most significant change to federal CBA ever, and it will dramatically change the way policy analysis is done. Hopefully these changes improve the quality of policy analysis and, in turn, lead to better policymaking.

Reading curriculum changes need evaluation

Earlier this month, Education Week reported on a policy trend that Ohio Gov. DeWine has made a central focus of his 2024-2025 budget: reform of reading curriculum standards.

This reform sits at the center of a long-running debate about how to teach reading in schools. In particular, a popular but controversial program called “Reading Recovery” is in the governor’s crosshairs.

Reading Recovery is a program that focuses on one-on-one instruction where a teacher keeps a running list of words the student read incorrectly. The teacher takes notes about what may have tripped the student up on these particular words.

Reading Recovery had a lot of promise out of the gate. A randomized controlled trial of the program in 2010 showed 1st grade participants in Reading Recovery far outpacing their peers in reading skills after five months of instruction.

Subsequent evaluations of the program, however, have cast doubt on its effectiveness. A follow-up evaluation of participants in the program done by the same center that conducted the original evaluation found Reading Recovery participants falling a half grade level below their peers in 3rd and 4th grade reading proficiency tests.

This evaluation, along with others in the field, has led researchers to worry that individualized focus helps students in the early stages of learning but skips over “foundational” skills. Students can learn to read words that matter for a 1st grader, but these gains do not carry them to a 3rd grade reading level, and can even be detrimental to that goal.

Some who advocate on behalf of teachers, however, have argued that approaches similar to Reading Recovery, like “three-cueing,” which emphasizes context over phonics, should be preserved as an option for teachers.

Education researchers are critical of this sort of approach. Chanda Rhodes Coblentz, an assistant professor of education at the University of Mount Union, called three-cueing “a fancy way of saying we’re allowing kids to guess at words.”

Part of what may appeal to educators about approaches like Reading Recovery is the combination of one-on-one instruction and quick results. In this way, Reading Recovery may be like a keto diet: you get results, you get them fast, but you’re not building the fundamentals needed to make sustainable, long-term progress.

On the other hand, the value of leaving curricular decisions up to teachers is that they can tailor educational experiences to their classroom. Theoretically, Reading Recovery could be a bad program for the average classroom but still a useful program for a subset of classrooms, and teachers could be well-suited for identifying whether it is the right curriculum for their classroom.

If there is an argument for these alternative approaches, we need evidence of their effectiveness. Governor DeWine is seeking $162 million for reading reform efforts, hoping to discourage programs like Reading Recovery and approaches like three-cueing in favor of more evidence-supported curricula.

If defenders of three-cueing are right and these approaches are useful for a subset of students, then let’s test it. The state of Ohio should set aside a small portion of these funds for evaluation of pilots of alternative teaching techniques to see if they work. And these pilots should be evaluated out to the third-grade level if possible to determine if impacts are long-lasting.

Ultimately, we can’t say out of hand that there is no student for whom Reading Recovery or three-cueing will be useful. But if we want to keep these approaches around as options in the face of mounting evidence that they hurt children’s reading outcomes, we need better evidence of their effectiveness.

This commentary first appeared in the Ohio Capital Journal.

How can we do more equitable policy analysis?

Earlier this week, I attended a webinar on data equity. For an hour, statistician Heather Krause talked about some of her work experiences where her internal biases and assumptions meaningfully changed the results of her analyses and gave some tips for spotting these in future work. 

At Scioto Analysis, we believe that equity should be considered in every policy analysis. The truth is that while equity is always a part of policy adoption, the only thing that changes from an analytic standpoint is whether or not we choose to acknowledge it. 

Consider this example: we have three classrooms, one with three students, one with six students, and one with nine students. What is the average number of students per class? This is an easy enough calculation: (3 + 6 + 9) / 3 = 6. As simple as this seems, it relies on an important assumption about equity. In this case, the variable we are measuring is the size of each classroom.

Instead, let's consider things from the students’ perspective. What is the average class size that a student experiences? Now the variable of interest is class size for each student, and the calculation involves many more terms. If you add up the class sizes experienced by all 18 students across the three classrooms, you get (3×3 + 6×6 + 9×9) / 18 = 126 / 18 = 7.
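The two averages can be reproduced in a few lines of Python; this is a small illustrative sketch, not something from the original webinar:

```python
sizes = [3, 6, 9]  # students in each of the three classrooms

# Classroom perspective: the simple mean size across the three classes.
per_class_avg = sum(sizes) / len(sizes)  # (3 + 6 + 9) / 3

# Student perspective: count each class once per student enrolled in it,
# which weights larger classes more heavily.
experienced = [size for size in sizes for _ in range(size)]
per_student_avg = sum(experienced) / len(experienced)  # 126 / 18

print(per_class_avg, per_student_avg)  # two different answers from the same data
```

The student-perspective figure is just a size-weighted average, which is why it lands above the unweighted classroom mean.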

Now we have two different conclusions from the same data. Although in this case the results are quite close, we still need to ask ourselves which of these results is more accurate.

This depends entirely on the question we are trying to answer. If our research is about how smaller classroom sizes affect teachers, then saying the average class size is six best reflects how teachers are experiencing classroom size. 

If instead we are trying to measure the effect of class size on students, then the second number better reflects how students are experiencing classroom sizes. 

This example is meant to show that all of our assumptions have equity implications, whether we notice them or not. When I first saw the classroom example, I immediately thought that six was the only correct answer. It did not cross my mind to reframe what variable we were trying to take the average of and how that could possibly influence the equity of the results. 

In this webinar, we also talked about how equity can fit into every part of the analysis process. Is the data being collected in an equitable way? Is the final report being written to discuss the equity implications of your research? Depending on the situation, as analysts we might not be in charge of some of these steps. However, we need to understand how these assumptions influence our results.

The good news is that being careful about including equity in an analysis is almost exactly the same as simply being a good analyst. Identifying assumptions, understanding their implications, and honestly acknowledging them is the core of good analysis. 

In this sense, more equitable analysis is the same as more scientifically rigorous analysis. The difference is that we need to ask more questions about our own internal biases and assumptions as researchers and make sure they are not getting in the way of giving policymakers the answers they need.