I’ve been thinking a lot recently about AI models and suggestion algorithms, and how public policy relates to them. Specifically, I’ve been thinking about how these systems change the way we think.
I believe there is a public policy problem that we need to address: as suggestion algorithms and AI models become more ubiquitous, they are replacing more types of human decision making. Some of this is relatively trivial, such as suggestions about what movies to watch or what music to listen to. It becomes more concerning when these models make higher-stakes decisions, like what news we read or what conclusions we draw from a dataset.
This is a public welfare issue because the suggestions these models make are shaped by the data they are trained on and the specific objective they are trained to achieve.
Take the example of a social media suggestion algorithm designed to keep the user engaged with the product. This algorithm will push individuals to engage with social media more than they would optimally like to, which creates a negative internality: a cost the user unknowingly imposes on their own future self.
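To see how the objective alone creates the problem, here is a deliberately oversimplified sketch (my own illustrative code, not any real platform’s system) of a feed that ranks items purely by predicted engagement. Nothing in this objective asks whether the time spent is actually good for the user.

```python
# Toy sketch of an engagement-maximizing feed (illustrative only).
# Items are ranked by predicted watch/scroll time, descending; user
# welfare never enters the objective.

def rank_feed(items, predicted_minutes):
    """Order candidate items by predicted engagement, highest first."""
    return sorted(items, key=lambda item: predicted_minutes[item], reverse=True)

items = ["news_article", "cat_video", "outrage_thread"]
predicted_minutes = {"news_article": 2.0, "cat_video": 5.0, "outrage_thread": 9.0}

print(rank_feed(items, predicted_minutes))
# The most engaging item surfaces first, regardless of its value to the user.
```

Whatever keeps people scrolling longest rises to the top, which is exactly the mechanism that drives consumption past the point the user would choose for themselves.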
This idea relies on an assumption that some people might disagree with: that these models prevent people from making rational decisions. You could argue that a rational economic actor can take in the suggestions given by these models and decide for themselves what the appropriate amount of consumption is. I think there is sufficient evidence that people do not behave rationally when faced with suggestions from statistical models, but I’ll acknowledge that a reasonable counterargument may exist.
If we accept that suggestion algorithms and large language models create negative internalities by guiding people toward decisions that are not in their best interests, then we can try to figure out what to do about it.
The economic solution I find the most interesting would be to tax the consumption of products that rely on these types of models. We’ve written in the past about Pigouvian taxes, and this type of market failure is exactly what they are designed to correct.
One challenge specific to these technologies is that most people aren’t paying to use these services, so there is no purchase price to tax. Policymakers would have to come up with new ways of quantifying use and taxing it accordingly.
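To make the Pigouvian logic concrete, here is a toy numeric sketch (the numbers and functional forms are my own illustrative assumptions, not from any real study): a user consumes hours of feed time until the perceived marginal benefit hits zero, ignoring a hidden per-hour cost to their future self. A per-hour tax set equal to that marginal internality pushes consumption back to the level the user would choose with full awareness.

```python
# Toy Pigouvian tax model (illustrative assumptions only).
# Perceived marginal benefit of hour h: 10 - 2h (diminishing returns).
# Hidden marginal internality: a constant 4 per hour, unnoticed by the user.

MARGINAL_INTERNALITY = 4.0  # assumed hidden cost per hour of feed time

def chosen_hours(tax=0.0):
    """Hours consumed when the user equates perceived marginal benefit
    with the per-hour tax: solve 10 - 2h = tax  ->  h = (10 - tax) / 2."""
    return (10.0 - tax) / 2.0

untaxed = chosen_hours()                       # ignores the internality
corrected = chosen_hours(MARGINAL_INTERNALITY) # tax set equal to internality

print(untaxed, corrected)
# Untaxed consumption exceeds the corrected level; the gap is the
# overconsumption the tax is meant to eliminate.
```

The design question the paragraph above raises is precisely the hard part: with no price per hour to piggyback on, some other measured unit of use would have to stand in for `tax`’s base.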
So far, I’ve been talking about suggestion algorithms and massive AI models interchangeably. This is because the real problem I think policymakers need to address is that human decision making is increasingly being replaced by technological decision making. I’ve so far only mentioned cases where this leads to negative internalities, but there are many cases where these same models deliver massive benefits to society, like helping detect cancer. Distinguishing the harmful applications from the beneficial ones is an identification problem, and I don’t know what the solution to it is.
Despite this identification challenge, policymakers are going to need to start addressing these models head-on. Like it or not, they are rapidly changing the way our world works, and it is extremely important that we come together and decide how we want to interact with these new kinds of models.