The utility bailout House Bill 6 made both Ohio’s air and politics dirty

With all the drama surrounding the Householder trial for racketeering, it can be easy to forget the bill behind the former Ohio Speaker of the House’s alleged $60 million payoff from the FirstEnergy power company.

House Bill 6 had four major impacts. It required power consumers to bail out two massive nuclear power plants in northern Ohio. It also required Ohio ratepayers to bail out two coal plants: one in Ohio, one in Indiana. It reduced energy efficiency standards, requiring Ohio utilities to cut energy use by 17.5% rather than the previous goal of 22%. Lastly, it reduced Ohio’s renewable portfolio standard, requiring utilities in Ohio to generate just 8.5% of their power from renewables, the lowest requirement in the country among states with such standards.

I want to focus on the last impact: reducing Ohio’s renewable portfolio standard.

In 2008, Ohio unanimously passed Senate Bill 221, a bill to require 12.5% of Ohio’s energy to be produced from renewable sources. It was an optimistic time for Ohio’s energy transition.

This optimism started to fray in the 2010s. In 2014, a group of state senators led by now-Congressman Troy Balderson pushed through a bill to freeze the standards in place until 2017. Balderson had originally called for a “permanent freeze” but had it changed to temporary after negotiations with the Kasich administration.

In 2019, Householder pulled off what Balderson couldn’t. With HB 6, he reduced the final goal for Ohio’s renewable energy to 8.5% and pushed back its date from 2022 to 2026.

What will this mean for Ohio? Scioto Analysis released a study in 2021 on carbon emission reductions which found that renewable portfolio standards could be as effective as a carbon tax or a cap-and-trade program at reducing carbon emissions in the state. 

We looked at two approaches: a renewable portfolio standard of 25% by 2026, modeled on Michigan’s standard, and a renewable portfolio standard of 80% by 2030 and 100% by 2050, modeled on Maine’s standard.

We found that both approaches would be effective at reducing carbon emissions in Ohio and would both drive the global cost of these emissions down from over $45 billion a year to under $25 billion a year.

Reducing reliance on coal and natural gas for power would have ancillary benefits as well. Burning coal releases contaminants into the air that can lead to respiratory illness. Extraction and production of natural gas can have similar effects and lead to a range of other health impacts.

Use of fossil fuels also poses health equity problems for communities. A 2021 study published in Science Advances found that racial and ethnic minorities in the United States are exposed to disproportionately high levels of ambient fine particulate air pollution (PM2.5) compared to white populations. Since Black Ohioans already experience poverty at rates nearly three times as high as white Ohioans, this exposure compounds inequities already present in the state.

It’s not often that scandal is so closely wedded to policy, but right now Ohio is dealing with the biggest racketeering case in its history, one that stemmed from negotiations over its biggest piece of environmental legislation of the century. Let’s hope we can learn from this and find ways to keep both our air and our politics clean.

This commentary first appeared in the Ohio Capital Journal.

How to be a Bayesian policy analyst

By and large, humans are pretty bad at understanding probabilities. We live in a world of tangible things and real events which makes it hard to wrap our heads around the abstract concept of randomness. 

Among statisticians, there are two broad philosophies when it comes to thinking about uncertainty: frequentist statistics and Bayesian statistics.

Frequentist statistics assumes that there is some true probability distribution from which we randomly observe some data. For example, imagine we were flipping a coin and trying to determine if it was fair or not. 

A frequentist would design an experiment and specify a hypothesis test: “I am going to flip the coin 50 times, and I will say the coin is not fair if I get less than 20 or more than 30 heads.”

From this frequentist experiment, we can produce a p-value and a confidence interval. These tell us how likely a result at least as unusual as our particular set of 50 coin flips would be, out of the universe of all possible sets of 50 coin flips, if the coin really were fair.
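
To make this concrete, here is a minimal sketch in Python (standard library only) of the experiment described above. The 50-flip setup and the decision rule come from the example; everything else is just what the calculation produces.

```python
from math import comb

def binom_pmf(k, n=50, p=0.5):
    """Probability of exactly k heads in n flips of a coin that lands heads with probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n = 50

# If the coin really is fair, how often would this rule wrongly call it unfair?
# That is the significance level of the "fewer than 20 or more than 30 heads" rule.
alpha = sum(binom_pmf(k, n) for k in range(n + 1) if k < 20 or k > 30)
print(f"Chance a fair coin lands outside 20-30 heads: {alpha:.3f}")

# A two-sided p-value for an observed result (say, 21 heads): the probability,
# under a fair coin, of a count at least as far from 25 as the one we saw.
observed = 21
distance = abs(observed - n / 2)
p_value = sum(binom_pmf(k, n) for k in range(n + 1) if abs(k - n / 2) >= distance)
print(f"p-value for {observed} heads in {n} flips: {p_value:.3f}")
```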

On the other hand, Bayesian statistics relies on our prior knowledge about random variables as its foundation. These prior assumptions can come from historical data or simply from the statistician’s judgment.

In the same coin-flipping experiment, we no longer begin by defining a null hypothesis and some criteria for rejecting it. Instead, we begin with a prior distribution, perhaps one that assumes the coin is fair, and our goal is to create a “posterior distribution.” In this case, the posterior tells us the probability that the coin is fair given the flips we observe.

Assume now that we ran our experiment and flipped 21 heads in 50 tries. In this case, the frequentist would say “We fail to reject the null hypothesis and cannot conclude whether or not the coin is fair.” The Bayesian would instead find the posterior distribution and be able to make a statement about the probability that the coin is fair.
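
For readers who want to see the mechanics, here is a minimal Bayesian sketch of that same experiment using a Beta-Binomial model. This is my own choice of setup rather than anything prescribed above (it uses scipy), and the Beta(10, 10) prior and the 0.45-0.55 definition of “approximately fair” are illustrative assumptions.

```python
from scipy.stats import beta

# Assumed prior: Beta(10, 10) concentrates belief around a heads probability of 0.5,
# encoding a mild prior expectation that the coin is roughly fair.
prior_heads, prior_tails = 10, 10

heads, flips = 21, 50
tails = flips - heads

# Conjugate update: the posterior is Beta(prior_heads + heads, prior_tails + tails).
posterior = beta(prior_heads + heads, prior_tails + tails)

print(f"Posterior mean heads probability: {posterior.mean():.3f}")

# Posterior probability the coin is "approximately fair," defined here
# (arbitrarily) as a heads probability between 0.45 and 0.55.
prob_fair = posterior.cdf(0.55) - posterior.cdf(0.45)
print(f"Posterior probability of an approximately fair coin: {prob_fair:.3f}")
```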

In short, the frequentist will never calculate the probability of a hypothesis being true. They will either reject it or fail to reject it. A Bayesian statistician will always begin with a prior guess about whether a hypothesis is true, and update that prior guess as they observe new information.

I personally find Bayesian thinking to be a much more satisfying way to conceptualize uncertainty. We all carry around some prior knowledge about how random things should play out, and as we observe new information we update those priors.

However, one thing to realize about intentionally thinking like a Bayesian is that it is often slow to incorporate new and dramatically different pieces of information. This can be good when the dramatically different information is an outlier that we don’t want to have completely determine how we think, but our priors can be slow to recognize when things do change dramatically.

In practice, I still mostly use frequentist statistics in my analysis. Bayesian analysis is often sensitive to the construction of the prior distribution, and its results are not always useful to a policymaker when they still have probability baked in.

Still, Bayesian statistics reminds us that we live in a world where things are uncertain. Especially in policy analysis, where we often try to predict future outcomes, it is important to remember that there is uncertainty and unlikely outcomes do not necessarily mean our predictions were wrong.

Ohio economists pessimistic about proposed school voucher program

In a survey of Ohio economists released this morning by Scioto Analysis, the majority of respondents disagreed or were uncertain that the proposed expansion of Ohio’s private school voucher program would increase standardized test scores for Ohio’s students. “There are a lot of complex dynamics (some students may see higher scores but others may see lower scores) but almost certainly there will not be a significant effect on the average test scores in the state,” said Dr. Curtis Reynolds of Kent State.

When asked if the proposed voucher increase would lower the quality of Ohio’s public schools, 11 economists agreed, four disagreed, and seven were uncertain. Dr. Kay Strong, who strongly agreed, said “the proposed upper threshold of $111,000 earning for a family of four assuming two are children will exacerbate the privilege of high income families at the cost of lower quality education from reduced public spending on traditional public school.” 

Since we surveyed the panel, the Ohio Senate has introduced Senate Bill 11, which would create universal school vouchers.

The Ohio Economic Experts Panel is a panel of over 40 Ohio economists from over 30 Ohio institutions of higher education surveyed by Scioto Analysis. The goal of the Ohio Economic Experts Panel is to promote better policy outcomes by providing policymakers, policy influencers, and the public with the informed opinions of Ohio’s leading economists.

How to talk about statistics to non-statisticians

The final required class I took in grad school was called “Statistical Consulting,” and unlike every other course I took I did almost no actual statistics in it. Instead, this course predominantly focused on how to take all of the fancy methods and advanced concepts I had learned over the last two years and communicate the results to people without a background in statistics. 

As a policy analyst, I find myself communicating with non-statisticians far more often than I did in school. This is an exciting opportunity to share the statistical tools I have with an audience that can meaningfully apply my results. These are some of the things I learned that have helped me better explain complicated concepts to a less technical audience.

Ask your audience for their statistical background

During my first ever consulting project as a grad student, I spent about 10 minutes of my first meeting going over the pros and cons of using a time-series approach with the client before they had the opportunity to ask me what a time-series model was. This is not to say that the discussion on the pros and cons of the time series approach was wrong or that the client didn’t need to be included in that conversation, but rather that had I known my client’s background I would have approached that discussion differently.

In this particular case, my client wasn’t interested in the statistical differences between the models I was proposing, but rather what the practical differences would be in the final report. After resetting, we were able to have a much more productive discussion that was tailored to his level of understanding. 

Use visuals when possible

Good data visualizations can communicate a complicated idea almost instantly, especially when paired with a clear written description of the main takeaways. However, it is important to not get carried away with visualizations. 

One common mistake is visualizing every possible part of an analysis. Visuals are great for highlighting the most important parts of a report. Highlighting everything might make it harder to find the most important pieces of information. 

Another consideration is whether to use a graph or a table to visualize data. A general rule of thumb is to use tables when the specific value of a result is important and graphs when the broader trend is important.

Provide context

Statistics never exist in a vacuum, and it is important to provide context in order to make your results more useful. Being upfront about what data and methods you used and about the strengths and weaknesses of your approach gives people the non-result information they need to understand the full picture of an analysis.

Be honest

There are a whole lot of approaches we can use when analyzing data, and some of these approaches might allow for different interpretations of the results. Statistics sometimes gets viewed as a scientific and objective way to examine the world around us (which it largely is), but the perspective of the analyst has a lot of weight in determining the final message. 

Always make sure you are being clear about the assumptions your models make, and the limitations of your results. It goes without saying that intentionally misleading charts or excluding critical information has no place in any respectable analysis.

How do Americans spend their free time?

A central theory of labor economics covered in most introductory microeconomics courses is that of the “labor/leisure tradeoff.” This is the concept that workers have time they can spend working and time they can spend doing other things and that they will try to maximize their “utility” by achieving the optimal mix between labor and leisure.

Below is a visualization of the concept. You can plot a wage on a chart like this as a line that runs from the top left to the bottom right, and a worker will choose how much to work based on where her wage line touches the “indifference curve” (the blue lines) that is furthest out. An “indifference curve” represents all the points at which a worker would be “indifferent” to a different mixture of income and leisure.

So a worker with indifference curve IC1 is just as happy at point A as at point B, though at point A she has more income and less leisure and at point B she has more leisure and less income. Any point to the northeast of IC1 is preferable to any point on IC1 because it means more income and more leisure, less leisure compensated by much more income, or less income compensated by much more leisure.
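
For readers who like the algebra, the standard textbook formulation of this tradeoff (a generic version, not tied to the specific chart above) has the worker choose consumption c and leisure ℓ to maximize utility subject to a time and budget constraint:

```latex
\max_{c,\;\ell} \; U(c, \ell)
\quad \text{subject to} \quad
c = w\,(T - \ell) + y
```

Here w is the wage, T is total available time (so T - ℓ is hours worked), and y is non-labor income. The wage line in the chart is this budget constraint, and the worker’s chosen point is where that line just touches the highest attainable indifference curve.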

This is of course a simplification. There are structural frictions in the labor market that can limit a worker from achieving the appropriate mix of labor and “leisure” that she desires. But empirically we do see this phenomenon playing out, with price increases like taxes leading to reductions in labor time in favor of “leisure” time. What I’m more interested in talking about here is something else: what we define as “leisure.”

“Leisure” time is defined by this model as time a worker spends doing something other than generating income. But is all of this rightly understood as “leisurely” activity?

One place we can look to find an answer to this question is the American Time Use Survey. The American Time Use Survey is a nationally representative survey conducted by the Census Bureau to determine how, where, and with whom Americans spend their time. It is the only federal survey providing data on the full range of non-market activities, from childcare to volunteering.

Looking at an overview of how Americans spend their time, the picture of “leisure time” gets a little more complex.

According to 2021 results, Americans spend about 22% of their time on average at work. They spend about 37% of their time on sleeping, 22% of their time on leisure and sports, 8% of their time on household activities (including travel), 6% of their time on care for children and parents, and about 5% of their time on eating and drinking.

So if we look at these results, we find that 42% of total time (54% of what the labor/leisure model calls “leisure time”) is spent on eating and sleeping, activities most people would deem essential to survival. Yes, there is a leisure component to eating and sleeping, but many people’s experience with food and sleep in the United States would not necessarily be called “leisurely.”

Another 14% of total time (18% of “leisure time”) is spent on household activities and caring for family members. This is what many would consider “non-market economic activity”: dollars aren’t changing hands, but they certainly could if these activities were outsourced from the household to a cleaning service, child care agency, or long-term care facility.

This only leaves 22% of total time, or 28% of time not worked, as time for “leisure and sports.” So only about a quarter of what we call “leisure time” is truly spent doing things we consider “leisure.”
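
As a quick check on that arithmetic, here is a small Python sketch that recomputes those shares from the rounded 2021 figures quoted above (the category grouping is mine):

```python
# Rounded shares of total time from the 2021 American Time Use Survey figures
# quoted above (percent of a 24-hour day, averaged across the year).
shares = {
    "work": 22,
    "sleeping": 37,
    "leisure and sports": 22,
    "household activities": 8,
    "caring for household members": 6,
    "eating and drinking": 5,
}

non_market = 100 - shares["work"]  # everything the model calls "leisure"

def share_of_non_market(*categories):
    """Total share of the day for the given categories, and their share of non-market time."""
    total = sum(shares[c] for c in categories)
    return total, round(100 * total / non_market)

print(share_of_non_market("sleeping", "eating and drinking"))    # about (42, 54)
print(share_of_non_market("household activities",
                          "caring for household members"))       # about (14, 18)
print(share_of_non_market("leisure and sports"))                 # about (22, 28)
```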

When I was in graduate school, our economics professor eschewed the use of the phrase “leisure time” for the phrase “non-market time.” I think this is probably a better way to treat this time we spend sleeping, eating, taking care of family, caring for the house, and on leisure and exercise.

How can we prevent the next East Palestine?

Ohio is in the national news again, this time around a tragedy that sits at the crossroads of transportation, environmental policy, and public health. The high-profile train derailment at East Palestine, Ohio started as a shocking evacuation. It has since grown into an international story about the impacts of the derailment on the environment and the health of people near and far from the disaster.

In the wake of this event, policymakers have been asking for answers. Much of the public policy focus has zeroed in on a rule-making saga that has played out over the past few years at the federal level.

A couple of high-profile derailments in 2013 and 2014 led the Obama administration to issue a rule requiring electronic brakes for high-hazard, flammable trains in 2015. Three years later, the Trump administration repealed the rule.

Some believe this rule could have prevented the East Palestine derailment. Steven Ditmeyer, a former senior official at the Federal Railroad Administration, said “applying the [electronic] brakes would have stopped everything very quickly.”

So is that it: open and shut case, just need to require electronic braking systems, dust our hands off and go home?

Let’s not move so fast. What is really causing severe derailments in the United States? And is there something we can do about it?

The former question was asked by a researcher at the Department of Civil and Environmental Engineering at Rutgers and a team of international engineers analyzing train derailments in the U.S. The engineers analyzed over 20 years of freight derailment data in the United States to determine what was driving freight derailments across the country.

Let’s use these results to identify the root causes of train derailment. Four of the top ten causes for freight train derailment these engineers found were track-related factors. These included wide gauges, buckled tracks, track geometry, and most importantly broken rails or welds. 

Broken rails and welds not only led to the most severe derailments but were also by far the most frequent cause of derailment. All in all, track-related factors made up three of the four factors that led to the most severe derailments.

Only two of these top ten factors were related to rolling stock (the trains and cars themselves). These two factors were broken wheels, which were less common but led to more severe derailments, and bearing failure, which was more common but led to less severe derailments. Braking failure was notably absent from this top ten list.

While better brakes could have some impact on train derailments, if we can learn anything from history, it is that train derailments are most often directly caused by the most obvious factors: shoddy tracks and dilapidated trains.

Investment in fixing broken rails and welds, buckled tracks, track geometry, and wide gauges could be an effective approach to reducing freight train derailments. This could be done through grants, loans, regulation, or a combination of the three. 

Other areas of improvement include keeping cars in top shape by maintaining wheels and bearings and making sure tracks are free of obstruction. These approaches could also be pursued through grants, loans, and regulation.

These strategies for reducing train derailment are not as sexy as electronic braking, but could be just as effective. Policymakers should use this as an opportunity to let the evidence guide better policy and prevent future disasters from occurring.

This commentary first appeared in the Ohio Capital Journal.

5 charts to better understand poverty in Franklin County

Poverty is a perennial topic in public policy. In order to make good policy decisions about poverty, we need to understand what poverty is and what it means to the people that experience it. 

Since Scioto Analysis is headquartered in Franklin County, I thought it would be valuable to share some information about poverty in a U.S. urban area using Franklin County as a case study. Here are five charts that help contextualize the current state of poverty in Franklin County.

Poverty rate over time

The past decade of poverty trends in Franklin County has been defined by recovery from the Great Recession. The highest poverty rate reported in the two decades before the 2008 recession, according to the St. Louis Federal Reserve, was 16.4%, but county poverty rates topped out at nearly 19% in the years after the Great Recession. It took over 10 years for poverty to return to pre-recession levels.

Another takeaway from this chart is that the county did not take the same hit during the COVID-19 recession as it did during the 2008 recession. Part of this was due to temporary assistance issued during the pandemic, which was successful in keeping poverty rates down. Programs like the expanded child tax credit were also crucial in keeping people afloat, but because tax credits count as post-tax income, they are not included in official poverty measure calculations. This means something more was at work during the COVID recession.

Poverty rates by race

There are significant differences in poverty rates by race in Franklin County. Black and Native American residents of Franklin County are more than twice as likely to be in poverty as white residents. Hispanic/Latino residents are nearly twice as likely to be in poverty as white residents as well. White and Asian residents are the two groups that experience poverty at lower rates than the county as a whole.

Total poverty by race

Despite the fact that white residents experience poverty at the lowest rate of any racial group in Franklin County, the majority of people in poverty in the county are white. This is because non-Hispanic white residents make up 61% of the total population according to the 2020 Census.

The difference between absolute and relative poverty statistics demonstrates why it is important to look at all of the context when talking about poverty. They seemingly tell different stories about what poverty is like in Franklin County, but in truth both are needed to understand the complete picture. While white residents of Franklin County are less likely to be poor than Black and Hispanic residents, the sheer number of white residents in the county means that most people who are poor in the county are white.

Employment and poverty

Unsurprisingly, people who are employed experience less poverty than those who are unemployed. However, when talking about unemployment, we often overlook the problem of underemployment. The official unemployment rate counts underemployed people as employed, meaning that we sometimes overestimate the strength of the job market when we lean on that measure. 

This graph shows how much underemployment matters for poverty. There is a massive gap in the poverty rate between people who worked full time and those who were unemployed, but importantly, part-time and part-year workers experience poverty at a rate much closer to the unemployed group. This demonstrates that not all jobs are equal, and if we want to have some impact on poverty through the labor market, we need to make sure that people have enough quality employment.

Education and poverty rates

In Franklin County, there is a strong negative correlation between educational attainment and poverty rates. An individual without a high school diploma is almost twice as likely to be in poverty as someone with a high school diploma, and seven times as likely to be in poverty as someone with a bachelor’s degree.

Charts like this suggest potential policy paths for alleviating poverty. Presumably, if we were able to increase the rate at which people finish high school or make equivalents easier to access, we could reduce the share of people likely to experience poverty. We also have insights into how race, employment, and long-term trends impact poverty.

What is Monte Carlo Simulation?

As a statistician, I tend to be inherently skeptical anytime I see a single number reported as a prediction for the outcome of a policy proposal. Point estimates are certainly useful pieces of information, but I want to know how likely these predictions are to come true. What is the range of possible outcomes?

In policy analysis, it sometimes seems impossible to say with certainty what the range of possible outcomes might be. What if one input in our predictive model is higher than we think, but another is lower than we think? What if there are so many inputs that it would be practically impossible to test all of the different possible outcomes?

It is times like these when we can turn to Monte Carlo simulations to estimate our variance. From Scioto’s Ohio Handbook for Cost Benefit Analysis: “The essence of Monte Carlo simulation is to generate a large number of possible outcomes by varying all the assumptions in the analysis.” By changing all of the inputs at once over thousands of trials, we can more accurately measure the uncertainty in our predictions. 

At their core, all statistical models are essentially just mathematical equations. Imagine we are considering building a new public swimming pool and want to conduct a cost-benefit analysis. In one model, the costs would be the construction and annual maintenance costs, and the benefits would be the average benefit per person multiplied by the number of people that we expect to use the pool. 


Benefit per Person x Expected Visitors - Construction Cost - Annual Maintenance = Net Benefits


The four inputs to this model are not fixed values, but instead are random variables. We can use observed data to create sensible estimates for these random variables, but at their core they are not deterministic. 

We can define a random variable by its probability distribution. A probability distribution conveys two critical pieces of information: what all possible outcomes are and how likely those outcomes are. Once we have real data, we can say that we have an observation or a realization of a random variable. Observations and realizations are associated with a random variable and a probability distribution, but they are not random themselves. 

Let’s apply this to our swimming pool example. To estimate the number of visitors our pool will have, we can collect data about the number of visitors other public pools have. To create a point estimate, we might take the average value of our observations and plug it into our formula. Repeat that process with the other three inputs and you have a basic cost benefit analysis.

However, if we assume that the visitor counts at these public pools are all observations of the same random variable, then we can make some claims about the distribution of that random variable. An in-depth knowledge of statistics is needed to make and verify these distributional assumptions, but the point is that we are defining all the possible outcomes and how likely those outcomes are.

With the probability distribution defined, we can use statistical software to generate thousands of observations that all follow the same distribution. This gives us thousands of inputs into our equation and thousands of different results, meaning we can now analyze the range of possible outcomes. 

Monte Carlo simulation really begins to shine once we start defining the probability distributions for all the random variables in our equation. It helps at this step to think of the Monte Carlo simulation as happening in rounds.

In a single round of simulation, we generate an observation for each of our four random variables from their respective probability distributions. One round might have above-average values for the number of visitors and the benefit per visitor but below-average values for the cost variables, leading to well-above-average net benefits. Repeating this for a few thousand rounds will allow us to accurately see the range of possible outcomes and more importantly the likelihood of each potential outcome. 
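
Here is a hedged sketch of what those rounds could look like in Python with NumPy, using the swimming pool model above. Every distribution and parameter value below is an assumption invented for illustration; in a real analysis they would be estimated from data.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n_rounds = 10_000

# Illustrative distributional assumptions (not real estimates):
benefit_per_person = rng.normal(loc=12, scale=3, size=n_rounds)     # dollars per visit
expected_visitors = rng.poisson(lam=20_000, size=n_rounds)          # visits per year
construction_cost = rng.normal(loc=150_000, scale=25_000, size=n_rounds)
annual_maintenance = rng.normal(loc=40_000, scale=8_000, size=n_rounds)

# One "round" per array element: net benefits under that round's assumptions.
net_benefits = (benefit_per_person * expected_visitors
                - construction_cost - annual_maintenance)

print(f"Mean net benefit: ${net_benefits.mean():,.0f}")
print(f"5th to 95th percentile: ${np.percentile(net_benefits, 5):,.0f} "
      f"to ${np.percentile(net_benefits, 95):,.0f}")
print(f"Probability of net costs: {(net_benefits < 0).mean():.1%}")
```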

These simulations often involve a lot of assumptions about the distributions of random variables. Understanding and checking these assumptions is required in order to generate meaningful results. 

Monte Carlo simulation is a powerful tool for analysts to use. It goes beyond just offering a single estimate for a prediction, and provides deeper insight into the likely range of outcomes. If you have the time and the statistical background to perform a Monte Carlo simulation, doing so can dramatically improve the quality of your estimates. 

How do I conduct sensitivity analysis?

When making an estimate of a cost, economic benefit, or some other important policy impact, a policy analyst is carrying out a very difficult task. She is trying to put numbers to things that we often don’t see as quantifiable. Inevitably, any analysis will lean on a range of assumptions in order to get from abstract idea to a number. 

But what happens when we change these assumptions? And what happens if the empirical evidence we use is a little bit off from reality? This is where the policy analyst employs the important tool of sensitivity analysis.

Sensitivity analysis is the process of estimating how the assumptions included in an analysis affect the uncertainty around its findings. By conducting sensitivity analysis, we can get an idea of how precise our findings are. We can also report these findings as a range so we don’t oversell their precision.

But how do we conduct sensitivity analysis? There are a few different ways to do this, but the most common approaches are what we call “partial sensitivity analysis,” “worst- and best-case analysis,” “breakeven analysis,” and “Monte Carlo Simulation.” Below are some explanations of the methods for conducting sensitivity analysis as laid out in the Ohio Handbook of Cost-Benefit Analysis.

Partial Sensitivity Analysis

Partial sensitivity analysis is in some ways the most basic of sensitivity analysis techniques. This technique is carried out by taking one key input and varying it to see how it impacts the results of the study. By showing how one factor impacts the outcome of a study, a policymaker can understand the risks involved in relation to a key factor. 

An example is an input like the “value of a statistical life,” which takes a range of different values depending on which agency is carrying out the analysis. In a study that relies on an input like this, showing the net present value of the program under different assumptions for the value of a statistical life gives insight into the variability of the results.
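
As a simple sketch of what this looks like in code, the snippet below varies a single hypothetical input (the value of a statistical life) while holding everything else fixed and reports how the net benefit of an imaginary safety program changes. All figures are made up for illustration.

```python
# Partial sensitivity analysis: vary one input, hold the rest fixed.
# All figures below are hypothetical.
program_cost = 50_000_000   # total cost of the safety program
lives_saved = 6             # expected lives saved

# A range of value-of-a-statistical-life figures, reflecting the fact that
# different agencies use different valuations.
vsl_values = [5_000_000, 7_500_000, 10_000_000, 12_500_000]

for vsl in vsl_values:
    net_benefit = lives_saved * vsl - program_cost
    print(f"VSL = ${vsl:>12,}: net benefit = ${net_benefit:>12,}")
# Note that the sign of the result flips as the VSL assumption changes,
# which is exactly the kind of risk this technique surfaces for a policymaker.
```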

Worst- and Best-Case Analysis

Let’s move on from varying one input to varying multiple inputs. Sometimes when we conduct a cost-benefit analysis or another policy analysis, we may come up with a result that is very optimistic or very pessimistic. In this situation, we can conduct sensitivity analysis to test how reliant our results are on our assumptions.

To carry out this form of sensitivity analysis, an analyst takes all the inputs and sets them to the most pessimistic or optimistic reasonable assumptions. This allows her to communicate to a policymaker what the policy’s outcomes would look like if all her assumptions were pessimistic and what they would look like if all her assumptions were optimistic. This process also lets her test whether changing the assumptions affects the ultimate findings of the analysis, determining whether it has net costs or benefits under all circumstances or whether it ends up above or below water depending on the assumptions.
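
As a sketch, imagine a hypothetical public pool cost-benefit model where net benefits are benefit per visitor times visitors, minus construction and maintenance costs. A worst- and best-case analysis just evaluates that formula twice, once with every input set to a pessimistic value and once with every input set to an optimistic one; the particular values below are invented.

```python
def net_benefits(benefit_per_person, visitors, construction, maintenance):
    """Net benefits of the hypothetical pool under one set of assumptions."""
    return benefit_per_person * visitors - construction - maintenance

# Hypothetical pessimistic and optimistic assumptions for every input at once.
worst = net_benefits(benefit_per_person=8,  visitors=12_000,
                     construction=200_000, maintenance=55_000)
best  = net_benefits(benefit_per_person=16, visitors=28_000,
                     construction=120_000, maintenance=30_000)

print(f"Worst case net benefits: ${worst:,}")   # negative: net costs
print(f"Best case net benefits:  ${best:,}")    # positive: net benefits
```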

Breakeven Analysis

Some analyses are fixated particularly on finding if a policy has a positive net present value or a benefit-cost ratio above one. In these situations, it is useful to see how much assumptions need to change in order to see when benefits and costs “break even.”

Breakeven analysis is the process of varying assumptions to see where costs would equal benefits. This gives policymakers an understanding of how much the assumptions need to vary from their expectations for the policy’s benefits to exceed its costs. The technique can be extended to broader policy analysis whenever impacts can be measured against each other: see what needs to change for two alternatives to look basically the same from an analytic perspective.

Breakeven analysis is especially useful for showing when results are robust. For instance, if a policy has a positive net present value unless the average wage for people carrying out the program exceeds $1 million per year, that would suggest confidence that the net present value of the program is indeed positive.
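
Using the same hypothetical pool model, breakeven analysis asks how far one assumption has to move before benefits exactly equal costs. The values below are, again, invented for illustration.

```python
# Breakeven analysis: holding the other (hypothetical) assumptions fixed,
# find the number of annual visitors at which benefits exactly equal costs.
benefit_per_person = 12
construction_cost = 150_000
annual_maintenance = 40_000

breakeven_visitors = (construction_cost + annual_maintenance) / benefit_per_person
print(f"Benefits equal costs at about {breakeven_visitors:,.0f} visitors per year")
# If the central estimate is roughly 20,000 visitors per year, the result only
# flips to net costs if visitation falls below about 15,800, a margin a
# policymaker can judge directly.
```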

Monte Carlo Simulation

Sometimes called the “gold standard of sensitivity analysis,” Monte Carlo Simulation is a more complex sensitivity analysis technique that requires data analysis software. The essence of a Monte Carlo simulation is to generate a large number of possible outcomes by varying all the assumptions in the analysis. Using these outcomes, confidence intervals for cost-benefit outcomes can be estimated. Advanced Microsoft Excel users can execute a Monte Carlo simulation with a little bit of help from Google.

Even conducting a simple partial sensitivity analysis can provide useful insights for a policymaker. The point of sensitivity analysis is to estimate precision of estimates, and using any of the above techniques makes an analysis more complete.

DeWine child tax deduction leaves poor families out

At his “state of the state” address on Tuesday, Ohio Gov. Mike DeWine put forth a unique proposal to the Ohio legislature — to enact a $2,500 per child state tax deduction.

When I first saw this, I was excited! The 2021 federal child tax credit expansion lifted over 2 million children out of poverty. After Joe Manchin torpedoed efforts to preserve the tax credit, a number of states moved to create state versions of the effective anti-poverty program. Could Ohio become the thirteenth state to enact a child tax credit?

But then I took a closer look. Wait, this wasn’t a state tax credit that DeWine proposed, but a tax deduction.

How a credit works is that it reduces the taxes you owe dollar for dollar, sometimes even paying out in excess of what you owe. So a $2,500 tax credit would put $2,500 in the pocket of a taxpayer.

A deduction just means that you can subtract that amount from your income when calculating your taxes. So if you make $50,000 in a year and you subtract $2,500 from that, you pay taxes on $47,500 in income. With Ohio’s tax rate at that bracket of 3.226%, that means you’d save about 80 bucks.
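
The arithmetic behind that estimate is just the deduction multiplied by the filer’s marginal tax rate. Here is a tiny sketch; the 3.226% rate is the one quoted above, and the function ignores Ohio’s graduated brackets, credits, and phase-outs, so treat it as purely illustrative:

```python
def deduction_savings(deduction, marginal_rate):
    """Rough tax savings from a deduction: the deducted amount times the filer's
    marginal rate. Ignores bracket boundaries, credits, and phase-outs."""
    return deduction * marginal_rate

# A $2,500 per-child deduction at the 3.226% rate quoted above:
print(f"${deduction_savings(2_500, 0.03226):.2f} per child")  # about $80

# By contrast, a $2,500 per-child refundable credit would be worth the full
# $2,500 per child regardless of the family's tax rate.
```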

A problem with tax deductions from an equity standpoint is that they help wealthy people more than low-income people. This is because the higher your income gets, the higher the rate you pay on your last dollar of income, so the same deduction is worth more to a wealthy family than to a low-income family.

What does this look like in practice? Well, let’s look at some examples of how this could shake out for families. 

Ashley Smith is a single mother of two from Jacksontown, Ohio, a small town northeast of Buckeye Lake. She makes an average amount for Jacksontown: a little over $24,000 a year. She would save zero dollars from the proposed child tax deduction.

Jessica Miller is a single mother of two from Dayton. She makes an average amount for a family in Dayton: a little over $43,000 a year. She would save a little under $140 with the proposed child tax deduction.

Amanda Johnson is a single mother of two from Chillicothe. She makes an average amount for a family in Chillicothe: a little over $66,000 a year. She would save a little over $160 with the proposed child tax deduction.

Sarah Brown is a single mother of two from Centerville, Ohio, a suburb of Dayton. She makes an average amount for a family in Centerville, a little over $100,000 a year.  She would save a little over $180 with the proposed child tax deduction.

Brittany Williams is a single mother of two from New Albany. She makes an average amount for a family in New Albany, nearly $208,000 a year. She would save nearly $200 with the proposed child tax deduction.

So Brittany, supporting a family on over $200,000, will receive a $200 benefit from the child tax deduction. Meanwhile, Ashley, supporting a family on $24,000, will receive nothing. Nearly all families in poverty will receive nothing from this benefit, while middle- and upper-income households will receive $70 to $100 per child, with benefits higher for families in higher tax brackets.

If the DeWine Administration wanted to help children in poor families, it could create a better system by providing a credit per child. The administration could even create an income cutoff to keep costs low by targeting benefits toward lower-income households. The current proposal, on the other hand, excludes the families that need help the most while giving the largest breaks to the most well-off Ohio families.

This commentary first appeared in the Ohio Capital Journal.