Regulating AI: Deleted Scenes and Extras
Congress is a reactive institution, sure, but what else?
I was quoted in a recent New York Times piece about regulating artificial intelligence. The writer initially contacted me by e-mail with a few questions. I ended up sending two long e-mail responses and spoke on the phone with the writer for about 45 minutes. Behold my contribution: “Generally speaking, Congress is a more reactive institution.”
I used to work at Congressional Quarterly. I get how quotes get edited down, especially in a piece like this that calls on so many experts. A lot of people are quoted in the article, and it’s a good piece! But the question the article asks, “when will Congress regulate AI,” with its implied “why is it taking so long,” has a few additional answers that I wanted to highlight.
First is that it’s always useful to look at who in government, in this case Congress, would be doing something about it. Who has authority and jurisdiction. And for most tech issues (depending on how they’re defined), that’s the House Science, Space, and Technology Committee and the Senate Commerce, Science, and Transportation Committee. And those committees aren’t particularly desirable assignments among legislators. Back in the 1990s political scientists Tim Groseclose and Charles Stewart developed scores and rankings based on transfers on and off committees; essentially, which committees did legislators want to leave and which committees did they want to join. Legislators in both chambers mostly left the science and technology committees when they could. These are not prestigious panels.
Which is not to say that the legislators who sit on those committees are bad at their job. It does mean, however, that a lot of legislators who oversee science and technology would rather be doing something else. They probably devote more attention to their other committee assignments. The reasons for these committees’ relatively low standing are a bit complex, but one factor is that you can make more hay with your constituents and colleagues by saying you’re holding the FBI or the State Department or the Treasury Department to account than you can from saying you’re keeping the National Institute of Standards and Technology in line.
Second, Congressional responses to AI or other emerging technologies also aren’t just about Congress. Legislators get a lot of the attention and blame when it comes to slow responses to emerging problems, but the bureaucracy and the courts should get attention, too. Federal agencies are a key information source for legislators to help them understand the nature of the problems they’re trying to address. Just about every federal agency uses AI in some form; per a Trump executive order they’re required to annually report their use cases. It’s possible that cataloging all the ways agencies are using AI models confuses the issue even further: who’s responsible for overseeing all those use cases?
On the other side we have the courts, and we just don’t have a lot of evidence that many judges keep up with emerging technologies anywhere near as well as Congress does. Even if they do understand emerging technologies, judges have to be willing to extend statutory principles to technologies not explicitly written into the law. A recent Brookings Institution article highlights a D.C. Circuit opinion on the EPA’s authority under the American Innovation and Manufacturing Act, which notes that “nowhere does the Act say anything about QR codes.” Requiring Congress to explicitly list which technologies are covered makes it hard for the legislature to a.) act quickly, and b.) trust that it can move on to address other issues.
Third, the models and uses we’re currently calling “artificial intelligence” aren’t really new. “AI” at this point is a combination of logistic regression and factor analysis; in the case of large language models we’re sprinkling in some text analysis methods. At the very least, Common was shouting “A. I.” at us four years ago. What’s new is some combination of: companies being able to scrape a ton of data from the Internet; applying these tools to things like sound and image (instead of predicting which customers are likely to buy a printer if you show them a printer in the “other customers bought” list, you’re predicting which pixels go where to make a picture of Glenn Danzig getting knocked out by George McFly or whatever); and media attention to what AI is and does, spurred on by the new consumer applications (specifically generative AI).
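To make the logistic regression point concrete, here’s a minimal sketch of the printer example above. The feature names and weights are invented for illustration, not drawn from any real recommender system:

```python
import math

# Toy purchase-likelihood model (invented features and weights, for
# illustration only): how likely is a customer to buy a printer?
WEIGHTS = {"viewed_printer_page": 2.0, "bought_ink": 1.5}
BIAS = -2.5

def buy_probability(features):
    """Logistic regression: a sigmoid applied to a weighted sum of features."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# A customer who viewed the printer page and bought ink scores high;
# one who did neither scores low.
hot = buy_probability({"viewed_printer_page": 1, "bought_ink": 1})
cold = buy_probability({"viewed_printer_page": 0, "bought_ink": 0})
```

The same machinery, scaled up and pointed at pixels or tokens instead of purchase histories, is what’s now being marketed as “AI.”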
Fourth, on the media point: if you look at “technology” media reporting, it’s as much or more about product announcements as it is about how these technologies work and how the industry works. Media coverage of AI falls into that pattern. “AI is changing the world” isn’t that different from “check out the new iPhone features” or “Threads is the Twitter Killer.” It’s not all that helpful for communicating to the public (and indirectly, policymakers) what the relevant policy questions are.
Fifth, and back to Congress: they’re a reactive institution in part because they have to wait and see which technologies really matter. Will AI change the way society operates? Maybe. Could also be a fad. Even though AI isn’t a new technology, the current scope of deployment across economic and social sectors is still early enough in its trajectory that many futures are possible, including one in which AI-based products go the way of Betamax, laserdiscs, and NFTs. Congress could wait too long to act, but we also could end up with a “policy bubble” of long-term overinvestment well past the point of achieving whatever the government’s goal would be.
I don’t know which future is the most likely, but there’s little value in Congress using time and other resources—that could be spent on, say, the Western US water crisis—passing a bunch of laws to address a technology that industry, consumers, and the media may move on from a year from now.