interview conducted September 2025, edited for clarity
by Jen “Lil’ Bit” Schleusner
Artificial intelligence is shaping the way we live, work, and interact, but the laws and policies surrounding it are still catching up. To better understand how regulation is evolving—and how it affects our everyday lives—I sat down with Tatiana Rice, Senior Director for U.S. Legislation at the Future of Privacy Forum, where she specializes in AI legislation and emerging technology policy.
In this conversation, Tatiana shares her journey from a small-town Midwestern upbringing to becoming a leader in AI policy, the misconceptions she sees around artificial intelligence, and why slowing down development may be the most important goal for the future.
I’m here with Tatiana Rice, Senior Director of U.S. Legislation at the Future of Privacy Forum, where she specializes in AI legislation and regulation.
Yes, my specialty is AI legislation and regulation. I'm now the senior director of the team, so I'm covering a lot of different topics, from AI to youth privacy and safety to connected vehicles and biometrics.
My organization focuses on a lot of different types of technology policy issues. Obviously our bread and butter is data privacy and AI ethics, but that's broadened over time as more emerging technologies are introduced.
Getting into AI Policy
When you were growing up thinking of what you wanted to do, did you ever imagine you’d be working in AI regulation?
Oh gosh, there’s no way I would have ever thought I would be focusing on any kind of regulation!
I’m actually from a very teeny, tiny small town in the Midwest – I grew up as one of the only people of color, with a single mom. I didn’t really know being a lawyer was an option to me – I didn’t really think about it much.
I did go to college, and I majored in criminology, and that's where I first saw the intersection of law and society – how a lot of criminal justice issues in particular couldn't really be changed or addressed except by being a lawyer or a legislator. So I ended up taking a career aptitude test that said, "Hey, you'd be a good lawyer." That validated me, and I went to law school – not really knowing the broad scope of what being a lawyer meant. Most of my exposure to it was through criminal law. Over time, I ended up doing a stint at the Department of Justice while they were covering a Supreme Court case called Carpenter, which had to do with cell phone location data and the Fourth Amendment right against unreasonable search and seizure – whether the government had a right to that kind of data.
Like the government asking Apple or other companies to hand over phone data?
Yeah, so that was the first time they had to decide this really critical issue: the Fourth Amendment was drafted in the 1700s, and now you have this newer technology – how do these things intersect and interact with each other? From that moment, I was just so fascinated with that concept of law and technology intersecting. And especially as we are essentially building our society around digital infrastructure at this point, those questions are more and more important.
My focus started there, in data privacy – practicing at a law firm for a few years, focused on data privacy and biometrics – and then I went into the policy space so I could make more of a public-interest impact.
Common Misconceptions About AI
Since you’ve worked both as a lawyer and now in policy, what misconceptions do you see most often around AI legislation?
Honestly, the biggest one is that people don't know what "AI" actually means. It's kind of this amorphous concept, both in scientific and computer science terms and in the law, technology, and policy space. People will use the term "artificial intelligence" to describe technologies that we've actually had for a really long time, right? So they might actually just be talking about general automation, or about general data practices.
Or even special effects!
That's right. They don't conceptualize that there are differences within AI, let alone distinguish between things like machine learning and neural networks, the different ways you train an AI system, and its capabilities. There are some people who are far more sophisticated on that front, but they usually come from a technical background. That's one of the most common things we see.
They might be trying to address a particular issue they're thinking about and believe it calls for AI legislation, but it might not. It might actually be a data privacy concern – one you address through data privacy legislation. I think many people are just trying to react to the moment of AI right now.
Helping Policymakers Focus
So how do you help policymakers focus in the right direction?
We’re actually a non-advocacy organization. Our role is purely educational when working with lawmakers and regulators – we’re helping them understand how these technologies work.
What are the risks? What are the benefits? Then we engage in a dialogue – we're not here with an agenda. What are you trying to solve for? We help them understand: for example, this approach was taken in California, and this is how it played out; or this is the approach they took in the EU; or the approach you're taking might have XYZ unintended consequences, so you may want to narrow the scope from artificial intelligence generally to chatbots – that's been a common concern, right? Just helping them make more informed decisions, essentially.
What Surprises People About AI Policy
Behind all the policy talk, what’s something people would be surprised to find interesting about AI legislation?
One thing I feel really strongly about is how partisan the AI conversation has become. Tech policy used to be fairly bipartisan, but now AI is often caught up in political battles. These decisions are things that are going to impact our society, our communities, our lives for the future. How are schools using AI? How is your healthcare provider using AI?
What is the risk of bias or discrimination in the AI that is being used? Instead, it's "Who's the money behind it?" that's driving the conversation right now, as opposed to these more community-oriented questions and focuses. I think people assume AI is this distant, futuristic thing – and that's almost an intentional tactic being taken.
A lot of AI legislation and regulation is being shaped by policymakers who obviously have their constituents in mind, but who are also being lobbied and who have other interests around them – venture capitalists, etc. It's unfortunate, but I feel so strongly about it, about the work I do, and about how it's going to impact our society.
Unexpected Uses of AI
Speaking of daily life, what’s the most unexpected place you’ve seen AI pop up?
One that maybe I should have known about, but that I've seen a lot – and a lot of media attention around – is that as schools get defunded, especially around things like counselors and mental health, they're replacing their counselors with chatbots and AI systems.
The software itself oftentimes isn't set up specifically for children's mental health, let alone developed with input from a clinician or licensed professional. Also, because of the way the contracts are drafted with the school, the school unwittingly ends up taking on most of the liability – the school ends up being liable, not the technology provider. So law and regulation right now is trying to figure out what the accountability or responsibility chain is. That's a big focus area, and something I didn't even think of going into this year. It's not something I would have ever thought of, but I see it coming up over and over and over again.
Illinois actually did just pass – and the governor just signed – a bill that essentially bans AI therapy bots in the state. That's a large step in the right direction.
AI and Civil Rights
For readers without a legal background, how would you explain the civil rights risks tied to AI?
One of the clearest examples is in employment technology – the hiring process. Under the Civil Rights Act, everyone has the right not to be discriminated against based on race, ethnicity, disability, gender, etc.
I want to say about 90% of Fortune 500 companies use some form of AI in employment screening, which means they automate their resume screeners. If those systems aren't representative of the population and aren't tested for bias, they can discriminate against candidates.
For instance, if an AI system doesn't recognize HBCUs as accredited universities, graduates might be screened out of jobs unfairly – automatically denied or dropped from the application process just because that AI system wasn't properly trained to recognize your candidacy. That happens, for example, with Black software engineers, and it can especially happen in the disability community, where a growing number of computer games are being used to test for certain traits or characteristics. And from a consumer standpoint, you don't even know that an AI system might be in use.
That’s a civil rights violation in practice, but because it happens behind the scenes in an algorithm, applicants often don’t even know.
AI in Pop Culture
We’re here at Dragon Con—what drew you to the con?
I actually came for the first time a few years ago – my old boss used to come here all the time and raved about it. She used to work at an organization that was kind of affiliated with the Electronic Frontier Foundation – so I was able to come, and it was just so much fun.
Most of the time, as the educational nonprofit we are, we're not really public-facing – because we're not an advocacy group. So it's really fun to come to this kind of con and genuinely see and hear from people about what they're thinking. The questions they ask during open Q&A are probably my favorite, because I get an actual gauge of what people are thinking about or concerned about.
Also, just from a nerd perspective, this is amazing. Again, coming from the Midwest, where there just isn't a lot of diversity – coming to a place like this, where there's so much diversity of nerds, is amazing. I love it.
What’s the most over-the-top or inaccurate portrayal of AI in pop culture?
If you asked me a couple years ago, I’d say Ultron in Avengers. But now, with how quickly AI is advancing, it’s harder to laugh off those scenarios. We’re nowhere near “Skynet” levels—but the pace of development makes you wonder how far off it might really be.
The Golden Rule for AI Developers
If AI starts making actual decisions for humanity, what do you think it should prioritize? If our robot overlords are gonna come…
Probably one of the more optimistic areas where AI development could help us as humanity is climate change.
There's a lot of really good data showcasing what avenues could be taken on climate change, but most people specialize in one area or another and don't have a broad swath of knowledge about everything. An AI system could take in all that knowledge and provide recommendations or take actions – so that's what I would say would be beneficial.
If you could write one golden rule for AI developers, what would it be?
Move slowly and purposefully.
I think that oftentimes, especially right now, there's a move toward trying to win the AI race – such that all the risks, and even the question of whether AI should be used in a particular context at all, aren't considered. I think more purposeful movement in our development will actually be far more beneficial for AI's contribution to our society than anything else.
From hiring practices to mental health in schools, AI is already affecting our lives in ways most people don’t see. Tatiana’s advice—move slowly and purposefully—reminds us that the choices being made now will shape the technology’s impact for generations.