Federal data is vital to good policy analysis

Last week, Robert Santos decided to step down as director of the Census Bureau. This decision comes at a time when the Trump administration is making a concerted effort to remove public data from government websites due to a perceived connection to diversity, equity, and inclusion. 

The Trump administration’s war on DEI practices has escalated. It is no longer about stopping workplace sensitivity trainings or changing hiring practices; it is now focused on changing the way people in this country understand people who are different from them. 

Things are changing quickly, and it’s difficult to predict what the extent of these data removals will be. For example, the Centers for Disease Control and Prevention’s Behavioral Risk Factor Surveillance System (one of the best publicly available health databases) was temporarily taken down. As I’m writing this on Wednesday, February 5, parts of it are back online. 

I imagine that most people who read blog posts on sciotoanalysis.com understand the importance of this kind of data, but the severity of this situation is worth repeating. Policymakers, non-profit organizations, academic researchers, and others rely on the availability of high-quality public data. 

Fortunately, researchers have been working to download as much data as possible in the event things do start to get taken offline. We won’t completely lose the data that already exists, even if we might not be able to use new data going forward. 

That being said, it is hard to imagine that public data collected over the next four years is going to be of the same quality. If I had to guess, I would say that we probably aren’t going to have nearly as precise data on things like gender, race, sexuality, etc.

It might be the case that those questions are removed entirely from government-run surveys like the American Community Survey. I am optimistic that those surveys won’t entirely disappear. Much of this optimism comes from the fact that I can’t even begin to imagine how damaging it would be to have the decennial census be the only piece of publicly collected national data.

Still, a lot of harm will come from failing to collect adequate data. One example that comes to mind is how the Census Bureau records race data. Currently, there are only six race categories: White, Black, Native American, Asian, Pacific Islander, and Other. This does not do a good job of capturing the different racial identities people in the United States have. 

The Census Bureau decided in 2024 that it would add Middle Eastern or North African as a racial category, an important change that would help improve our understanding of racial dynamics. I don’t know whether the current version of the American Community Survey already includes this updated race question, but I would be very surprised if this data ever became public. 

As a researcher, it’s hard not to be discouraged by the thought of losing so much data over the next four years. But I want to end on a slightly more optimistic note. 

Yesterday, I was talking with a community group of data scientists, and near the end of our chat this topic came up. While people were extremely nervous about whether we would continue to have access to high-quality data, they were still optimistic. They shared the data repositories they knew about that wouldn’t be taken down and talked about how we might address these shortcomings going forward. 

Right now, we don’t know to what extent we are going to lose data. As statisticians, it is our job to ensure that going forward, we don’t leave groups behind just because we are losing data about them. We likely won’t be able to rely on the Census Bureau as much in the short-term, but we can still explore effectiveness, efficiency, and most importantly right now, equity.

Ohio economists: school funding cuts could increase inequality

In a survey released this morning by Scioto Analysis, 13 of 16 economists agreed that cutting school funding in Ohio would significantly increase inequality in the state. Over the last four years, Ohio's public schools have relied on the Cupp-Patterson Fair School Spending Plan to provide over $300 million in funds per year. This bill was initially supposed to provide funding through 2027, but the new Speaker of the House has indicated that he would like to cut this funding. If these funds are cut, it would reduce spending on public schools by about $650 million over the next two years.

As Kathryn Wilson from Kent State points out, “The purpose of the funding was to have less reliance on local property taxes. There are large differences in per-student-spending across districts within Ohio. Reducing this funding will increase those gaps and increase inequality.”

Additionally, 13 of 16 economists agreed that these spending cuts would significantly reduce Ohio’s future economic output. Bill LaFayette wrote in his comment, “School spending is not an expenditure, it is an investment in our future workforce. If we don't have the revenue to support our schools, colleges, and universities adequately, perhaps we should rethink some of those tax cuts.”

The Ohio Economic Experts Panel is a survey of over 40 Ohio economists from over 30 Ohio institutions of higher education conducted by Scioto Analysis. The goal of the Ohio Economic Experts Panel is to promote better policy outcomes by providing policymakers, policy influencers, and the public with the informed opinions of Ohio’s leading economists. Individual responses to all surveys can be found here.

What can we learn from airport data?

We are currently working on a project at Scioto Analysis where we are using data on the number of people traveling through airports to make some tax revenue estimates. That project is interesting for a whole host of reasons, but one especially fun part has just been poring over this airport data.

We compiled data from Ohio and North Carolina on the monthly passengers flying into every airport in the state from 2002 to 2024. Since these numbers are a little difficult to pull out and use, we thought it would be valuable to share some of the observations we had about this dataset.

The COVID-19 impact has faded

The lockdowns that happened in 2020 in response to the COVID-19 pandemic had a major impact on all forms of travel. For most airports, this meant a 60%-70% drop in total arriving passengers in 2020 compared to 2019. 

In recent years, most airports have returned to their pre-pandemic passenger levels, most of them by 2023. In 2024, we saw some of the regular growth that more closely follows pre-pandemic trends. 
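If you're curious how we pulled these numbers out, here is a rough sketch of the calculation in pandas. The file name and column names are placeholders for our working dataset, not the raw format the states publish.

```python
import pandas as pd

# Hypothetical tidy table of monthly arriving passengers by airport.
# Assumed columns: airport, year, month, passengers.
df = pd.read_csv("monthly_passengers_oh_nc.csv")

# Total arriving passengers per airport per year.
annual = df.groupby(["airport", "year"], as_index=False)["passengers"].sum()

# Compare each year to that airport's 2019 baseline.
baseline = annual[annual["year"] == 2019].set_index("airport")["passengers"]
annual["pct_of_2019"] = annual.apply(
    lambda row: row["passengers"] / baseline.get(row["airport"], float("nan")),
    axis=1,
)

# The 2020 drop (most airports fell to roughly 30%-40% of 2019 traffic)...
print(annual[annual["year"] == 2020][["airport", "pct_of_2019"]])

# ...and the first year each airport got back to its 2019 level.
recovered = annual[(annual["year"] > 2020) & (annual["pct_of_2019"] >= 1.0)]
print(recovered.groupby("airport")["year"].min())
```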

This will be an interesting statistic to track going forward. We can learn a lot by understanding how people move around. 

Cincinnati was a major airport in the early 2000s

In 2004, Cincinnati had multiple months in which over 1 million people passed through its airport. By 2010, monthly passengers were down to under 400,000. The Cincinnati airport has hovered around that level ever since, keeping it in line with the airports in Cleveland and Columbus. 

The main reason for this change was the merger between Delta and Northwest Airlines in 2008. After the merger, Delta cut flights to Cincinnati. This had such a significant impact on the airport that it closed one of its terminals in 2009.

What drives airport traffic?

In our analysis, we are specifically comparing air travel between Ohio and North Carolina. Looking at these states, you see that North Carolina has had over double the total number of air passengers landing in its airports compared to Ohio over the past couple of decades. Most of this is driven by the airport in Charlotte, which by itself nearly matches the combined monthly passengers landing in Cincinnati, Cleveland, and Columbus. The Raleigh-Durham airport also has about 50% more traffic than any of the large airports in Ohio.

So, why is it that two states that have very similar populations have such different air traffic numbers?

One thought I had was that North Carolina might have more tourists than Ohio. The intuition behind this is that tourists often travel longer distances to reach their destinations, and as such may be more likely to travel by plane. The issue with this is that according to the Ohio Department of Development and the North Carolina Department of Commerce, Ohio has more tourism spending than North Carolina!

This doesn’t mean that tourism is not part of the reason why North Carolina has more air traffic than Ohio, but it probably means it’s not the main explanation. 

One thing that could be driving this is people flying to Charlotte to get to destinations in South Carolina. This is purely speculation on my part, but there must be some reason why there are so many more people flying to North Carolina than Ohio.

Is Massachusetts ready for a candy tax?

Last week, Massachusetts Governor Maura Healey released her budget proposal for the upcoming year and she has an unlikely commodity in the crosshairs with her new budget: candy.

Her proposed budget includes a provision to remove the exemption for candy from the state sales tax, making it a taxable item.

While a tax on candy may seem like a frivolous matter, there is more at stake here than jobs for the state’s Oompa Loompas. CBS News reports removing the sales tax exemption for candy could raise $25 million for the state to spend on investments the governor wants to make in transportation and education.

The search for new sources of revenue is likely the origin of this policy change. Needling around the tax code and finding politically palatable ways to raise tens of millions of dollars of revenue is no easy task, so when a policymaker finds an opening, she often is willing to take it to raise revenues for programs she wants to fund.

That is why the governor says “this isn’t about a new tax.” Instead, it is “about” expanding what is covered by a tax that Massachusetts already has, probably the broadest tax in the state: the state sales tax.

Despite the political motivation of the policy change, conventional wisdom among economists is that broader taxes are better for the economy. An exemption of something like candy from the state sales tax creates an incentive for people to purchase it relative to other goods that are subject to the state sales tax. If the state sales tax applies as widely as possible to many forms of purchases of goods and services, the incentives will be less distortionary and the economy will function most efficiently.

This is why many economists are wary of sales tax exemptions like exempting tampons from taxation–they narrow the tax base and create distortions in the economy.

But sales tax exemptions can have certain outcomes that policymakers may desire. One of them is offsetting the regressivity of sales taxes. Sales taxes fall more heavily on low-income households than on upper-income households since low-income households spend a larger percentage of their income on consumption. Carving essential goods like groceries, healthcare, and utilities out from sales taxes can help address this regressive aspect of a sales tax.

Candy falls under this category. A study by researchers at the University of Michigan and the University of Alabama-Birmingham found people living in lower-income neighborhoods and in areas without local food stores eat more snacks and sweets than those in higher-income areas. This suggests that subjecting these goods to taxation will likely make the tax fall more heavily on low-income households than on high-income households.
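To put rough numbers on that regressivity point, here is a stylized calculation; the incomes, spending shares, and the 6.25% rate are illustrative assumptions, not figures from the Massachusetts budget.

```python
# Stylized example: a flat sales tax takes a larger share of income
# from the household that spends most of what it earns.
SALES_TAX_RATE = 0.0625  # illustrative rate

households = {
    # income, share of income spent on taxable consumption (assumed)
    "low-income": (30_000, 0.90),
    "high-income": (300_000, 0.40),
}

for name, (income, spend_share) in households.items():
    tax_paid = income * spend_share * SALES_TAX_RATE
    print(f"{name}: pays ${tax_paid:,.0f} in sales tax "
          f"({tax_paid / income:.2%} of income)")
```

Under these assumptions, the low-income household pays more than twice as large a share of its income in sales tax as the high-income household.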

Those who are wary of specific exemptions to sales taxes argue that a better way to deal with the regressivity problems of a tax is to offset it with a rebate system. If you fund programs like an earned income tax credit with sales taxes, you can give cash to low-income people without giving an exemption to all families. This can be a more efficient way to handle the regressivity of a sales tax than exempting categories of goods from the tax altogether.

Another consideration with the taxation of candy is the public health impacts. Ten years ago, many local jurisdictions were making headlines for taxing soft drinks, citing their public health impacts. Making candy subject to sales taxes could be a way to reduce the public health impacts of candy.

Some researchers have found that dental guidelines around avoiding foods that stick to your teeth have little impact on consumers because consumers do not know which foods stick to their teeth and which do not. Candy consumption can also be a cause of weight gain, which can lead to a host of health conditions that can threaten lives. Candy and sugar consumption can also lead to cardiovascular disease and can even lead to nutrient deficiencies.

Correcting for these effects of candy consumption could be seen as promoting economic efficiency if it leads to better health outcomes. If parents are feeding candy to children and that is leading to tooth decay, weight gain, cardiovascular disease, and nutrient deficiencies, then levying a tax on candy could help shift spending away from candy toward substitutes with less negative health impacts. Previous work on soda taxes has shown these taxes lead to reductions in consumption.

Another way removing the exemption for sales taxes for candy could promote economic efficiency is by curbing economic internalities, or irrationalities people have that lead them to make choices they would not make in sound mind. If candy consumption is linked to compulsive behavior, then taxing it could help bring people’s actual consumption of candy closer to their clearheaded goal consumption levels.

Similarly, a tax could help correct information problems. If information tools like nutrition labels are not effective at helping people understand the health impacts of candy, then a tax could be a more effective way of reducing consumption to the level someone would choose with better information.

Another consideration for policymakers weighing a candy tax is the administrative feasibility of such a tax. The definition of “candy” is slippery and a line will have to be drawn at some place to define certain foods as “candies” and other foods as “not candies.” Where this line is drawn will impact revenue, the efficiency of the policy, the effectiveness of the policy, the equity ramifications of the policy, and the public health impacts of the policy.

From the perspective of an analyst, I am curious to see what the impacts of this policy will be. Will this increase in taxes lead to precipitous decreases in candy consumption? Will there be detectable public health impacts? Will revenues raised exceed expectations? All of these are questions economists should be prepared to answer if this policy change goes into effect. Because this is where the rubber meets the road in public policy analysis: determining how changes like this impact real people’s lives.

Original Research: Bicycling and Trails Contribute $1.4 Billion to Iowa’s Economy

On Saturday, the Iowa Bicycle Coalition and Scioto Analysis released a report highlighting the economic and health impacts of bicycling and trails across Iowa. Analysts estimate in the report that bicycling contributes $1.4 billion annually to the state’s gross product, supports over 21,000 jobs, and generates $690 million in wages.

The study, conducted with over 2,500 survey responses and data from government and nonprofit organizations, investigated the economic and health impact of bicycling and trails in the state.

Iowa has over 2,000 miles of multi-use trails and approximately 900,000 state residents cycle each year. The report underscores that cycling and trail use is not just a leisure activity but also an economic driver for the state. Retail trade and food services emerge as the biggest beneficiaries, with recreational riding alone accounting for over half a billion dollars in retail trade revenue annually.

“Bicycling is not only a leisure activity. It’s also a contributor to a strong economy and promotes public health,” said Rob Moore, Principal for Scioto Analysis. “From creating jobs to promoting healthier lifestyles, the benefits of bicycling accrue to communities and businesses across the state.”

Analysts found evidence that cycling reduces obesity rates and leads to fewer cases of diabetes, high blood pressure, and deadly cancers, saving millions in healthcare costs and saving lives. It also reduces incidence of excess BMI and promotes mental health. Bicycle commuting is also more eco-friendly, preventing as much as 1,500 tons of carbon dioxide from being released in a given year.

The report demonstrates the value of sustained investments in cycling infrastructure to enhance safety and accessibility, particularly as Iowa ranks in the bottom ten states for cycling safety and laws. If cycling is supported and safe, the economic, health, and environmental benefits it contributes will only grow.

Mapping concentrated poverty in Minneapolis-St. Paul

Back in 2019, my colleague Rob wrote a blog post looking at areas of high poverty in Columbus. This blog post still resonates with a lot of people who understand the history and context of Columbus and can see how the current concentration of poverty reflects that history. 

Since I grew up and now currently live in Saint Paul, Minnesota, I wanted to do this exercise with my own city. I plotted each census tract in Ramsey and Hennepin counties (home to Saint Paul and Minneapolis, respectively), highlighting the tracts with a poverty rate of at least 20%. You can see the whole map here.
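If you want to reproduce this kind of map for your own city, here is a rough sketch of the data pull using the Census Bureau's ACS 5-year API. The variable codes, the 2023 vintage, and the county FIPS codes are my assumptions and worth double-checking against the Census documentation.

```python
import pandas as pd
import requests

# ACS 5-year estimates at the tract level for Hennepin (053) and Ramsey (123)
# counties in Minnesota (state FIPS 27). Vintage and FIPS codes are assumptions.
URL = "https://api.census.gov/data/2023/acs/acs5"
VARS = {
    "B17001_001E": "poverty_universe",  # people for whom poverty status is determined
    "B17001_002E": "below_poverty",     # people below the poverty line
}

frames = []
for county in ("053", "123"):
    resp = requests.get(URL, params={
        "get": ",".join(VARS),
        "for": "tract:*",
        "in": f"state:27 county:{county}",
    })
    resp.raise_for_status()
    rows = resp.json()
    frames.append(pd.DataFrame(rows[1:], columns=rows[0]))

tracts = pd.concat(frames, ignore_index=True).rename(columns=VARS)
tracts[["poverty_universe", "below_poverty"]] = tracts[
    ["poverty_universe", "below_poverty"]
].astype(float)
tracts["poverty_rate"] = tracts["below_poverty"] / tracts["poverty_universe"]

# Tracts highlighted on the map: poverty rate of at least 20%.
high_poverty = tracts[tracts["poverty_rate"] >= 0.20]
print(len(high_poverty), "of", len(tracts), "tracts at or above 20% poverty")
```

From there, the table can be joined to tract geometries (for example, the Census TIGER/Line tract shapefiles) to draw the map itself.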

I-94

Anyone who knows the history of Saint Paul knows that the construction of I-94 in the late 1950s completely changed the fabric of the city. The most famous example of this disruption is the demolition of the Rondo neighborhood, one of the few predominantly Black communities in the city. Today, much of the poverty in Saint Paul follows the I-94 corridor, largely on the north side. The Saint Anthony Park, Midway, Frogtown, and Payne-Phalen neighborhoods all have multiple census tracts with over 20% poverty rates. 

In Minneapolis, the I-94 corridor was built through the heart of the Black community in North Minneapolis. Both North Minneapolis and Rondo were historically redlined parts of the Twin Cities. As with the old Rondo neighborhood in Saint Paul, the disruption caused to North Minneapolis by the construction of I-94 is still felt to this day, and that legacy has left areas of high poverty for many people. 

West Side

Confusingly, directly south of downtown Saint Paul lies the West Side neighborhood (named because it sits on the west bank of the river). This neighborhood is currently home to one of the city's largest Mexican populations, and it is one of the few areas of concentrated poverty that is not along the I-94 corridor. 

Seward to Uptown

South of downtown Minneapolis, there is a band of high-poverty neighborhoods that stretches from the Seward neighborhood near the river, through the Cedar-Riverside neighborhood, and almost to Uptown and Lake of the Isles. There is a sharp cutoff at Lyndale Avenue, where people living in the Whittier neighborhood to the east experience much higher rates of poverty than their Uptown neighbors to the west. The disparity is especially sharp around Lake of the Isles, one of the wealthiest neighborhoods in Minneapolis.

University of Minnesota

The part of Minneapolis that sits on the east bank of the Mississippi River is largely divided into two chunks separated by I-35W: Northeast Minneapolis and its many neighborhoods to the north, and the neighborhoods surrounding the University of Minnesota to the south.

Of these two sections, there is much more concentrated poverty surrounding the University campus to the south. These neighborhoods also fall along the I-94 corridor, creating a continuous network of high poverty areas that stretches from downtown Saint Paul all the way to North Minneapolis. 

Suburbs

The majority of the suburbs surrounding the Twin Cities have very low poverty rates. Still, a few census tracts have poverty rates over 20%, including neighborhoods in Bloomington, Eden Prairie, Saint Louis Park, and Brooklyn Park. 

Ohio’s Medicaid work requirements are back from the dead

With a new administration in Washington, state policymakers are reviving their zombie policy of Medicaid work requirements.

Work requirements have been a bit of a political football over the past couple of administrations. While food and income went from being a right for all to a right for all who work during the welfare reform era of the 1990s, health insurance managed to dodge that bullet. During the Obama Administration, however, congressmen and state leaders began kicking around the idea of making health insurance subject to a requirement to work, especially for a newly-covered category of low-income adults without children.

This policy became a reality in states across the country in 2018 as the Trump Administration began issuing waivers to states looking to impose work requirements on Medicaid recipients. A total of 13 states including Ohio were granted waivers and authority to revoke health insurance from low-income adults without children who could not prove they met requirements for paid work hours.

Judges started striking these requirements down soon after they were imposed before the Biden Administration rescinded and withdrew the waivers that allowed them. With the incoming Trump administration, though, supporters of Medicaid work requirements are bullish on the opportunity to reimpose requirements on health insurance recipients.

The imposition of work requirements during the first Trump administration gave us a perspective on the impact these requirements had on Medicaid recipients. If work requirements are reimposed, we can expect people to lose their health insurance and to see their economic security threatened.

According to the Congressional Budget Office, the state experiments of the Trump Administration showed tepid results. Many people lost health insurance coverage under the requirements and there was little evidence that people worked more because of them.

Arkansas was the only state that kept the requirements in place for more than a few months, with their requirement in place for almost a year before being struck down by a federal judge. Over that time period, about a quarter of people subject to the requirement lost their Medicaid coverage.

Researchers studied the employment impact of Arkansas’s work requirements through surveys of people subject to it. One statistical approach found a small increase in employment, but not enough to be considered statistically significant. The other found a small decrease in employment among people subject to the requirements.

The study suggested work requirements also made households less stable. The researchers said the proportion of adults subject to the requirement with serious problems paying their medical bills doubled after institution of the requirements.

Why did enrollment go down and employment not budge under the Arkansas work requirement? It seems that much of the problem was administrative. A third of people subject to the requirement reported they were not aware of the new requirement. Analysis by the Kaiser Family Foundation suggests reporting requirements were the primary driver of failure to comply: many people who lost coverage were working enough to qualify, but administrative red tape pushed them off the program.

Medicaid is not cheap, but it is a powerful tool for creating financial stability in low-income households. For the 780,000 low-income Ohioans who may be subject to these work requirements, this extra red tape could be the difference between financial stability and crippling medical debt.

This commentary first appeared in the Ohio Capital Journal.

What is cost-benefit analysis?

“Cost-benefit analysis” is a phrase nearly everyone has heard at some point in their lives. Even if it was as simple as being advised, when facing a hard decision, to write down all the pros and all the cons, you have likely come across some sort of deployment of the concept of cost-benefit analysis in the past.

In the public policy world, “cost-benefit analysis” has a much more specific definition. Cost-benefit analysis is the systematic inventorying of the benefits and costs of a public policy and expression of these benefits and costs in monetary terms. There are widely-accepted textbooks on how to conduct cost-benefit analysis, executive orders governing how cost-benefit analysis is to be carried out on federal regulatory actions, and even an entire association of practitioners of the technique that holds a conference in Washington D.C. every spring.

Most scholars of cost-benefit analysis trace its theoretical roots to late 19th- and early 20th-century welfare economics. In particular, Arthur Pigou’s welfare economic framework laid the groundwork for cost-benefit analysis. In Pigou’s eyes, the economy can be understood as a system of preferences and preference fulfillment across a community. Transactions generate private costs and benefits and social costs and benefits. The sum of the benefits and costs across all transactions in a society comprises the totality of the economy.

The implication of Pigou’s understanding of the economy is that incentives can be realigned to grow the size of the economy. So if an economic activity has public benefits, it can be subsidized so the private benefits align with public benefits, increasing the activity and growing the economy. Public education is a great example of this. Without subsidy, public education will be underprovisioned. Subsidies bring private benefits in line with public benefits, growing the overall size of the economy. Similarly, activities that create public costs like pollution can be taxed to reduce their prevalence.
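A stylized illustration of that logic, with made-up numbers: an activity whose total social benefit exceeds its cost can still go unprovided if the private benefit alone doesn't cover the cost, and a subsidy up to the size of the external benefit closes that gap.

```python
# Made-up numbers illustrating Pigou's logic.
private_benefit = 60   # value captured by the person deciding
external_benefit = 50  # value that spills over to everyone else
cost = 80              # cost borne by the person deciding

# Without a subsidy, the activity doesn't happen even though
# society as a whole would come out ahead.
print("privately worthwhile:", private_benefit > cost)                    # False
print("socially worthwhile:", private_benefit + external_benefit > cost)  # True

# A subsidy up to the size of the external benefit realigns the incentive.
subsidy = external_benefit
print("worthwhile with subsidy:", private_benefit + subsidy > cost)       # True
```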

Cost-benefit analysis first made its appearance in U.S. federal policy around the turn of the 20th century, when the Rivers and Harbors Act of 1902 required large public water infrastructure programs to be evaluated by the Army Corps of Engineers. This approach was more of a fiscal accounting exercise than an economic analysis at first, but by the 1950s, economists were bringing welfare economics into the practice, bringing public costs and benefits into the limelight.

The watershed moment for cost-benefit analysis in the United States occurred in 1981, when Ronald Reagan issued an executive order requiring regulatory impact analysis on major initiatives. Cost-benefit analysis became the standardized practice for major rulemaking throughout the federal government and agencies across the government started applying the practice to a range of different issue areas. His original executive order has been reaffirmed by six presidential administrations of different parties, including major revisions under George W. Bush and Joe Biden.

Okay, so that’s the history of the practice. But what actually is cost-benefit analysis?

Unfortunately, there is not a single, agreed-upon answer to this question. Some would say a policy analysis qualifies if it follows Boardman et al.’s textbook on cost-benefit analysis. Others would use the federal Circular A-4 as guidance. The Society for Benefit-Cost Analysis has not adopted formal standards for cost-benefit analysis.

We have written a handbook on cost-benefit analysis as an approachable guide to the practice for analysts interested in conducting cost-benefit analysis at the state and local level. Within this handbook, we cover the eight elements of a cost-benefit analysis as laid out in a Pew Charitable Trusts study of the use of cost-benefit analysis in the states. The elements we identified are the following.

The study comprehensively measures direct costs. Direct costs of a policy, often measured as the government outlays required by a policy, should be included in any comprehensive cost-benefit analysis. This is usually what comes to mind when we talk about “costs” of a public policy.

The study comprehensively measures indirect costs. Outside of the direct costs, policies often have spillover costs associated with them. For instance, a recent cost-benefit analysis Scioto Analysis conducted on recreational marijuana legalization considered the productivity costs the economy would incur due to increased use of marijuana in a state.

Tangible benefits are monetized to the extent possible. A good cost-benefit analysis will not only quantify benefits, but will monetize them. This means trying to estimate how benefits translate into dollar figures that measure the strength of preferences people impacted by the policy have for benefits and costs associated with the policy.

Intangible benefits are monetized to the extent possible. While tangible benefits such as financial savings, increased revenue, or time savings should be monetized, a cost-benefit analysis should also include monetization of intangible benefits, like quality of life, environmental preservation, and knowledge and learning.

Program costs and benefits are measured against alternatives or a baseline. Cost-benefit analysis is ultimately a specific form of policy analysis. This means that a cost-benefit analysis should measure itself against another alternative. A standard alternative to measure against is “status quo,” or what would happen if the policy were not put in place.

Future costs and benefits are discounted to current year values (net present value). A dollar today is not worth the same as a dollar a year from now. Because of the investment value of that dollar and the uncertainties of the future, a dollar today is worth more than a dollar a year from now. A good cost-benefit analysis discounts future impacts to account for this.

Key assumptions used in calculations are disclosed. Assumptions in analysis are inevitable. A good cost-benefit analysis makes as many of these assumptions as clear as possible.

Sensitivity analysis is conducted to test how the results would vary if key assumptions were changed. Sensitivity analysis is the best tool we have to deal with the contingencies of uncertainty in an analysis. A good cost-benefit analysis will subject its assumptions to sensitivity analysis and a great cost-benefit analysis will conduct a simulation like a Monte Carlo simulation to test many of its assumptions in tandem.
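To make those last two elements concrete, here is a minimal sketch of what discounting and a Monte Carlo sensitivity analysis look like in practice. The 3% discount rate, the stream of net benefits, and the distributions below are placeholder assumptions, not numbers from any actual study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder stream of net benefits (benefits minus costs) over five years,
# in dollars, with year 0 as the upfront cost.
net_benefits = np.array([-1_000_000, 300_000, 300_000, 300_000, 300_000, 300_000])
DISCOUNT_RATE = 0.03  # placeholder assumption

def npv(cash_flows, rate):
    """Discount each year's net benefit back to present value and sum."""
    years = np.arange(len(cash_flows))
    return float(np.sum(cash_flows / (1 + rate) ** years))

print(f"Point estimate NPV: ${npv(net_benefits, DISCOUNT_RATE):,.0f}")

# Monte Carlo sensitivity analysis: vary key assumptions together and
# look at the distribution of outcomes rather than a single number.
n_draws = 10_000
results = []
for _ in range(n_draws):
    rate = rng.uniform(0.02, 0.07)        # uncertain discount rate
    benefit_scale = rng.normal(1.0, 0.2)  # uncertain benefit estimates
    draw = net_benefits.copy().astype(float)
    draw[1:] *= benefit_scale
    results.append(npv(draw, rate))

results = np.array(results)
print(f"Mean NPV: ${results.mean():,.0f}")
print(f"Share of draws with positive NPV: {(results > 0).mean():.0%}")
```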

This definition may change over time, but as far as I am concerned, this is the best definition of “cost-benefit analysis” I have found to date. If this sort of analysis were used in more public decisions, policymakers would come to those decisions armed with much more information than they currently have. And with that information, they would be better prepared to make the difficult decisions that constitute public policy.

Scioto Analysis is hiring!

We are no longer accepting applications for this position. Sign up for our newsletter at the bottom of this page to be informed when new positions are available.

Today, we began our search for a new policy analyst for Scioto Analysis.

I am going to be frank with you: our goal at Scioto Analysis is not to grow. Our mission as an organization is to improve the quality of public policy analysis at the state and local level. To do that, we partner with state and local governments, university centers, and mission-driven organizations interested in bringing better public policy analysis to pressing questions at the state and local level.

When I started this organization in 2018, I was renting out my apartment with Airbnb and trying to get anyone to listen to the story I was trying to tell: that public policy analysis at the state and local level can be better. That we can think of the economy in a more comprehensive way. And that better information will lead to better public policy.

People have been listening. They have been listening so much that in 2022, I needed to hire Michael Hartnett, a statistician out of the Twin Cities, to join our team as our first employee. Over the past two years, he has allowed us to take on more clients, has led studies on water quality, recreational marijuana legalization, Ohio’s economy, and poverty in the state, and has managed our newsletter and economic experts panel.

The need for better public policy analysis keeps growing, though. Over the past year, we conducted studies on poverty in Ohio, subjective well-being, a $15 minimum wage, lead service line replacement, benchmarking metropolitan areas, poverty in Franklin County, and immigrants and new Americans in Central Ohio.

Every week I am having more conversations with people who think we need better public policy analysis to help inform key state and local issues. If we want to do the work that needs to be done, we need more help.

We are looking for a policy analyst to help us analyze public policy in the tax and budget, social safety net, and energy and environment sectors. This will be a full-time role for someone with strong quantitative and writing skills who cares about making public policy analysis better. This is a good position for people who are early in their public policy careers but we are also open to people who already have some experience with public policy analysis.

We are also looking for a policy analysis intern this summer to help us conduct a cost-benefit analysis on a pressing public policy issue. This is a good position for someone who is still in school and wants to dip their toes into the policy world.

If you know anyone who would be a good fit for either of these positions, please forward this page to them. And if you are interested, send your resume to michael@sciotoanalysis.com. We are excited to talk with you.

We need better minimum wage data

Earlier this month, I wrote about what it looks like when we adjust minimum wages for cost of living across the country. My initial plan was to write a follow-up blog post where I looked at publicly available data and estimated the number of people who are paid below their state’s minimum wage. Instead, I’m writing this blog post, because I wasn’t able to make those estimates. The quality of the data just wasn’t there. 

This is one of those questions that seems like something we should have lots of data on, but for some reason we don’t. You’d think, with how accessible wage data is across industries, that someone would have figured out how to estimate the number of people earning their state’s minimum wage, but that just isn’t the case.

It is possible to find estimates of the number of people at or below the federal minimum wage. The Bureau of Labor Statistics publishes statistics on federal minimum wage workers each year, but it doesn’t estimate the number of people who fall below their effective state or local minimum wage.

There are a lot of reasons why trying to move past the federal minimum wage is an extremely difficult task. The most important one is determining what a survey respondent’s effective minimum wage is. The Current Population Survey records where its respondents live, which does not have to be the same as where they work. 

If someone crosses state lines for their job, or even if they work in a different city that has its own minimum wage, they might be exposed to a different minimum wage than if they worked exactly where they lived. 

This becomes a larger issue because Current Population Survey data can have sample size problems at high geographic resolution. Even looking at state-level data, there can be significant sampling error. We can adjust for this with the survey weights the Current Population Survey provides, but those only go so far. If the survey doesn’t reach enough low-wage workers, we won’t be able to accurately estimate the number of people at the minimum wage.
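To make the estimation problem concrete, here is the kind of weighted calculation you would want to run against a CPS extract. The file name, column names, and the state minimum wage lookup are hypothetical stand-ins for whatever extract you are working with, and the residence-versus-workplace problem described above is exactly what a sketch like this cannot fix.

```python
import pandas as pd

# Hypothetical CPS extract with one row per wage-earning respondent.
# Assumed columns: state (postal code), hourly_wage, weight (earnings weight).
cps = pd.read_csv("cps_org_extract.csv")

# Effective state minimum wages (illustrative values for a few states).
state_minimums = {"OH": 10.45, "NC": 7.25, "MN": 10.85}
cps["state_minimum"] = cps["state"].map(state_minimums)

# Flag workers reporting an hourly wage at or below their state's minimum.
cps["at_or_below"] = cps["hourly_wage"] <= cps["state_minimum"]

# Weighted estimates: survey weights scale respondents up to the population,
# but they can't fix small low-wage samples or respondents who work in a
# different state (or city) than the one where they live.
cps["weighted_at_or_below"] = cps["weight"] * cps["at_or_below"]
by_state = cps.groupby("state").agg(
    workers_at_or_below=("weighted_at_or_below", "sum"),
    total_weight=("weight", "sum"),
)
by_state["share_at_or_below"] = (
    by_state["workers_at_or_below"] / by_state["total_weight"]
)
print(by_state)
```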

This is a significant gap in the data. Knowing precisely how many people earn the minimum wage is a valuable piece of information that could help drive policy decisions. When we talk about changing the minimum wage, the critical piece of information we need is how many people will actually be impacted by that change. 

To bridge this data gap, we need more robust data collection methods. Enhanced collaboration between state and federal statistical agencies, along with surveys specifically designed to capture this data, could help provide a clearer picture of the labor market. This would empower policymakers with the detailed information necessary to craft legislation that truly reflects the economic realities faced by workers across different regions.

While my original goal of estimating the number of people earning below their state’s minimum wage was hampered by data limitations, this experience highlights the importance of quality data in policymaking. Understanding the true impact of minimum wage laws requires comprehensive, accurate data that captures the complexities of the labor market.