Silicon Valley Is Spending Millions to Stop One of Its Own


Would you vote a former Palantir employee into Congress? Maybe your first instinct is no. But what if you knew that a super PAC, funded by some of Silicon Valley’s wealthiest and most powerful people, including Palantir’s own cofounder, was in heated agreement with you?

I’m talking about New York Assembly member Alex Bores, a Democrat running for Congress in a crowded primary that also includes Kennedy scion and chronically online influencer Jack Schlossberg, TV commentator George Conway, and New York Assembly member Micah Lasher.

Bores, 35, has a master’s degree in computer science and worked in Big Tech—at Palantir, specifically—before turning to politics and winning a 2022 New York state assembly race. But while Bores’ background is in tech, that doesn’t mean he supports how the industry is doing its job. Bores is a vocal proponent of rigorous AI regulation and cosponsored New York’s RAISE Act, which became law in 2025 and requires major AI firms to implement and publish safety protocols for their models, among other guardrails.

Bores’ AI stance has made him a target for some of Big Tech’s leaders: In late 2025, a super PAC called Leading the Future—bankrolled by OpenAI’s Greg Brockman, Palantir cofounder Joe Lonsdale, and VC firm Andreessen Horowitz, among others—launched an aggressive campaign to thwart Bores’ primary run. In particular, the group takes issue with Bores’ regulatory approach to the AI industry, which they described as “ideological and politically motivated legislation that would handcuff not only New York’s, but the entire country’s, ability to lead on AI jobs and innovation,” in a previous statement to WIRED.

I sat down with Bores in early April, about 10 weeks before what’s presumably a decisive primary (New York’s 12th District consistently votes blue). We talked about that Palantir gig, why so few lawmakers seem to understand the tech they’re supposed to regulate, and how it feels to be on the receiving end of PAC-funded attack leaflets and text messages … about yourself.

This interview has been edited for length and clarity.

KATIE DRUMMOND: Welcome, Alex.

ALEX BORES: Thanks for having me.

I want to start with your tech background. It was fascinating to me that you worked at Palantir. At WIRED we’ve covered Palantir a lot, and one of our reporters had a very smart idea a few months ago to write a story about what the company actually does. Because a lot of people don’t really know or understand. The best part of that story, for me, was that some former employees of Palantir actually could not explain what it is or does. So I have to ask you, as a former employee of Palantir, what is your best explanation as to what Palantir actually does?

Palantir helps organizations make use of data they already have access to, by making it easier to track changes to that data over time, by making it quicker to integrate that data, and by putting what's called an ontology, an opinion of how the data should be structured, on top of the data itself.

So the best explanation of the ontology is actually from a project that I did at Palantir with the Department of Justice, where we were looking at the role of big banks in the Great Recession. We wanted to see if banks knew that the loans they were putting into their securities were not up to snuff. That they were below the standards.

An easy way to prove that would be if you saw a pattern of a loan being added to a security then being pulled out before it was issued and then put into another one that had the same standards. That would show, OK, there was some knowledge by the bank that there was a problem with that particular loan.

The problem is that e-discovery software was made to just help lawyers read documents. So theoretically all the data is there. You have Excel sheets with each individual loan tape, but it’s being presented to lawyers as “just read it,” and you can’t, as a human being, read thousands of loans and track it, tape to tape.

Sure. Of course.

We realized that the important piece of information was the loan itself, that was an object that should be tracked. That's what an ontology is helping you do. So we built a system that let you track individual loans, search for loans, moving from tape to tape, and found numerous examples of that exact pattern: Banks realizing there was a flaw, pulling it out of a security, and then sneaking it into another one later. Because we found so many of those patterns, we were able to recover $20 billion for taxpayers from settlements with the banks.
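The detection Bores describes becomes almost trivial once each loan is modeled as a tracked object with its own event history, rather than a row buried in disconnected spreadsheets. A minimal sketch of that idea — all names, data, and the event format are hypothetical illustrations, not Palantir’s actual system:

```python
from collections import defaultdict

# Hypothetical event log, in chronological order:
# "added"  = loan placed on a security's loan tape
# "pulled" = loan removed before the security was issued
events = [
    ("loan-001", "SEC-A", "added"),
    ("loan-001", "SEC-A", "pulled"),   # flagged internally and pulled out...
    ("loan-001", "SEC-B", "added"),    # ...then quietly added to another deal
    ("loan-002", "SEC-A", "added"),
]

def suspicious_loans(events):
    """Find loans pulled from one security and later added to a different one."""
    history = defaultdict(list)
    for loan_id, security, action in events:
        history[loan_id].append((security, action))
    flagged = set()
    for loan_id, hist in history.items():
        for i, (sec, action) in enumerate(hist):
            if action == "pulled":
                # Any later "added" on a *different* security is the pattern.
                if any(a == "added" and s != sec for s, a in hist[i + 1:]):
                    flagged.add(loan_id)
    return flagged

print(suspicious_loans(events))  # {'loan-001'}
```

The point of the ontology is the `loan_id` grouping step: once the loan is a first-class object, a question no human could answer by reading thousands of documents reduces to a short query over its history.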

What was exciting to you about the idea of working at Palantir to begin with?

I have a master's in computer science, but that actually came after starting this work, so my undergrad was in industrial and labor relations.

I grew up on the picket line with my dad. I studied labor unions in undergrad, I led a campaign against Nike for laying off 1,800 workers without giving them legally mandated severance pay. And we ended up winning that campaign.

But during that process, another student turned to me and said, “Why do you care so much about these jobs? They're just gonna get automated anyway.” That really stuck with me. We need to find a way to have tech work for us and not the other way around. Beyond that, I'm a Democrat. I believe a government can and should be a force for good, but that also means we take on the burden of proving it.

I was searching for places where I could actually help the government deliver on its promises, help it serve people, and also figure out how we can have tech actually working for people and not against us.

Palantir is notorious, particularly at this moment, for some work it does with the government that it is not celebrated for. Specifically, I'm talking about the so-called Department of War. I will call it the Department of Defense.

As will I.

I want to talk more about your decision to resign from Palantir. You've said you decided to leave the company when it signed an ICE contract during Trump's first administration. Palantir, though, has a long track record of working with ICE, I think going back to 2011, so walk me through that moment for you. What was the moment where you sort of said, “I can't do this anymore”?

To be clear, I was never a part of that contract. But Palantir had started work with a division within ICE called Homeland Security Investigations during the Obama administration. It focused on drug trafficking, human trafficking, some counterfeiting work—work that's not controversial, that everyone would support. When Trump came in and took office in 2017, he tried to change the nature of the work everywhere. That includes the work at the Department of Justice where they tried to make us work on civil immigration matters.

I, as the lead of the project, said no. I had the power to do that because our contract with the DOJ was structured into three mutually agreed upon case types. So you could structure the contract in a way that said, “We're not gonna do that work.” Then, at ICE, the executives had a different calculation. But Trump started pushing for other divisions within ICE, in particular enforcement and removal operations, to get access to the software and to use it for deportations.

But then there was a question of “Will you put in contractual guardrails that say it won’t be used for deportation, the same way I had them at the DOJ?” Executives made clear to us that they were not going to do that, that their plan was to renew the contract without any of those guardrails, and that’s when I made the plan to leave.

There were many things with your background that you could have done. You could have gotten another lucrative job in tech. Why politics?

Government, as I said, had always been part of the appeal, which didn't necessarily mean politics. But making government work is core to what I've done my entire career. When I left Palantir, I went to a startup that did anti-money-laundering work, counter-terrorist-financing work. From there, I went to another startup that helped municipalities and states distribute aid during Covid.

The seat that I'm currently occupying in the state assembly opened up and I had a lot of conversations with friends, one of whom said, “You know, you're always talking about how you are downstream of bad policy, trying to fix it with tech. Here's your chance to go upstream and design it right the first time. You don't know if you're gonna win, but if you do win and it's awful and it's just mud-slinging and it's what everyone thinks of as politics and you can't be effective, in two years, in four years, you quit, you go back to what you’re doing now. But you can’t in two to four years say, ‘Now I’m going to run for the open seat.’ This is a moment in time.”

That's a good friend. That's good advice.

Very good friend. So I ran, I won, and I found it to be even better than expected.

Why?

Because you actually can get things done if you put your head down and you ignore the noise. If you’re determined and build coalitions, eventually they run outta reasons not to do your bill. I've passed 30 bills in my time in the state legislature, which is about the same number Congress as a whole passed in 2023. I was named by the Center for Effective Lawmaking, which is a nonprofit that ranks congressmembers and state legislators throughout the country, as the most effective new legislator from New York City. I've been able to really get some things done, and help to improve the lives of my neighbors.

Your background is particularly interesting in the context of politics. You said, I think it was in 2022, “one person in Albany should know how tech works.” You presumably know a lot more about technology than your average lawmaker. But why don't more lawmakers understand technology? Why don't they understand the companies who are creating and commercializing these tools, these platforms? I don't want to be so simplistic as to say it's an age thing. I don't think it's an age thing. You can understand technology no matter how old or young you are.

A hundred percent. You can understand it no matter how old you are, and you can not understand it even if you're young.

I mean, we have a Congress that is dominated by lawyers, and I love my friends who are lawyers, but you want to have a diversity of backgrounds in office, and maybe the skill set of software engineers and the skill set of Congress has less overlap than the skill set of lawyers and Congress. You need people that play in a few different arenas, but it's also something that's new and moving fast.

While I was working, I got a master's in computer science with a specialization in machine learning. So when I was elected in 2022, I became the first Democrat elected in New York at any level with a degree in computer science.

I will be only the second Democrat in Congress with a degree in computer science. There are two Republicans who are there, but out of 435 members …

I mean, that is shocking.

It feels like having less than 1 percent of your congressional representatives for something moving so fast and so important is probably not the right balance.

I want to talk to you about AI, about regulation. Several states have stepped in with their own laws, including in New York. You spearheaded the Responsible AI Safety and Education, or RAISE, Act. In a nutshell, it requires major AI developers to publish safety testing practices. Can you explain how that works? What exactly does that mean?

This applies only to the very largest AI developers. They have to have a certain complexity threshold of the models and a revenue threshold. They need to be making $500 million a year in revenue. So at this point we're talking about OpenAI, Anthropic, Google, and Meta.

They seem like important ones.

What it requires is that they have a safety plan that they make public and actually stick to. You can amend it over time, but you have to amend it if you change your practices. You don't get to just ignore your safety plan and then come back and say, “Oh wait, there's an amendment to that.”

They have to disclose critical safety incidents to the government, and that's specifically defined in the bill and is an incredibly high threshold.

It also sets up a government agency within New York to continue to collect data on the development of AI to suggest additional rules and regulations and to annually report to the state legislature on changes in law that they think are needed to make everyone safer.

President Trump signed an executive order last year to go after states that pass laws that aren't consistent with national policy on AI. I'm curious, how would you describe our national policy on AI? I would describe it as nonexistent. I mean, there is no regulation happening in any meaningful capacity at a federal level. How does that executive order intersect with what you're trying to do in New York?

Oh, his executive order was directly targeting my RAISE Act.

Oh, fun. OK.

It was mine and other bills like it throughout the country, like California’s SB 53. But it was very much designed to convince New York and California, and last year there were about seven states that were working on frontier AI laws to not pass those laws, to not implement them.

They just tried to punish states. It was, “If you pass a bill that does something we don't like, we're gonna take away this specific funding.” BEAD funding is what they were often pointing out, which is to expand broadband access.

Oh, that’s a classy threat.

It's to hurt people in rural areas that don't have access to the internet. They talked about finding other funding that they wanted to take away and instructed the attorney general to start suing states, which is the first time I’ve heard someone make the argument that more lawsuits will lead to more innovation.

But that was their path forward. It wasn't based on any serious policy. It was a few Trump mega-donors who think there should be no regulation of AI whatsoever getting him to give them a gift. It's those same Trump mega-donors that are now funding Leading the Future and these super PACs coming after me.

Let's talk about those ads. They popped up just around when the RAISE Act was signed into law. These are millions and millions and millions of dollars being spent to prevent candidates like you from getting into higher office. What was your reaction when you first saw those ads?

That they're desperate. That they're making clear that they know they're on the unpopular side of the issue.

They put out these hyperbolic, ridiculous ads because they realized I am their greatest threat for their quest for unbridled control over the American worker, over our education system, over the climate.

I mean, they were making clear to everyone else the stakes of the race. I literally just came from a call with a tenant leader. It was about housing policy. It had nothing to do with AI. This leader said, “I started paying attention to your campaign because of all these ads.”

I mean, this is the funniest part. How much has this been a gift?

They've been wonderful partners in raising up the issue of AI regulation and AI safety.

My plan coming into this race was to talk about tech and AI 5 to 10 percent of the time, because if that's what you care about, you're already voting for me. So let’s talk about health care and my housing policy and transportation and all these other things. But they have made clear that this is a big evolving issue, and so a lot more voters are paying attention to it, but it's certainly not fun.

I am also a voter in the district, so I am on their list. I get their text messages every day.

Alex, you get nasty text messages about yourself?

I do, I do. I get mailers sent to me, and then I don't want to throw them out. I need to know what people are saying about me.

I'll go to my mailbox in my lobby and I'll take out the mailer, and then I'll ride up in my elevator with my neighbors holding this mailer that's saying awful things about me. And it's a surreal moment.

That is next-level self-harm. I could never. You know, if someone's saying mean things about me on social media, that’s one thing. I don't get physical mail about it.

But this is part of their strategy to defeat me, and they have to defeat me for the strategy to work. Right? If I win, it undercuts their threats to everyone else. Part of their strategy is just to intimidate everyone else.

The way it’s making things harder is maybe less about my electoral race in particular. But talking to other members of the New York congressional delegation, talking to congressmembers elsewhere who are watching this race, I hope they don’t get cold feet. And the super PAC very much hopes they’ll just take a beat on this. Like, “Oh, this seems risky. Let’s let someone else take the lead on that.”

In terms of AI regulation, what do you want to see?

I put out an AI framework about two months ago that had eight subject areas, 43 sub-points. I thought it was important to communicate exactly what I wanted to do. It covered everything from age verification for certain uses for kids to a broad data privacy bill—which we're 20 years behind on, but now have to deal with the fact that AI can de-anonymize previously anonymized data—to regulation around the labor force to catastrophic risks to specific technical standards we could use to defeat the problem of deepfakes. I mean, I got really specific and occasionally nerdy.

Sounds like you used to work in tech.

You put that out into the world and you don’t know how people are going to react. Often in politics, people just put in whatever the most controversial thing is—and I still had 43 of them. But I was really blown away by the reception to it. I had people on the left of the party saying, “Hey, this is what we should be pushing forward and should be our agenda.” Also the chief futurist of OpenAI quote-tweeted it and had “quibbles” but said it was a thought-out plan.

Did they @ OpenAI president Greg Brockman?

I asked them to, but no.

For context, Brockman is funding this PAC that is putting out these ads.

A really surreal part is that Chris Lehane, who is OpenAI's chief policy person, and helped stand up Leading the Future, this super PAC, who is very much pulling the strings on this operation, said in his blog that other states should copy the RAISE Act.

I can't figure that company out. I don't think anyone can. It's a daily mind-fuck to keep up with OpenAI, I have to say.

You know, a lot of the engineers that work there are pro-regulation. It really is the executives at the top, and a few others, that are making just really tough-to-explain decisions.

Let me ask you this: Let's say we fast-forward, you win this election, you look at who's in the White House, President Trump, you look at the level of tech literacy that you would be surrounded by—which is minimal to none, if I'm being blunt. How are you going to get anything done when it comes to AI regulation in the next couple of years? How do you fight that fight and get some wins?

I actually think this is an area where I am most optimistic around bipartisan support. I agree with [Senator] Josh Hawley on basically nothing, except that AI could really use some regulation. I did a talk with Marsha Blackburn, again, someone I don't agree with on much. But for members of Congress, and certainly for normal voters across both parties, survey after survey shows people want there to be reasonable guardrails. Especially around kids, that’s a big focus; labor is a big focus. I actually think we could move forward on a lot of ideas, and you're seeing it play out at the state level, right? My RAISE Act passed with bipartisan majorities in both houses. The final passage was nearly unanimous. I think there was one no vote.

The primary criticism around the idea of regulating AI seems to have to do with innovation and our rivalry with China. The idea that regulation will throttle American innovation, we will lose this race, and yada, yada, yada. How do you respond to that?

I have a lot of thoughts on this. The first is that the CCP is terrified of an LLM saying the wrong thing. China regulates AI so much more strongly than anything that is proposed in the Western world, not just in America. So regulation is not gonna be the reason that we win or lose the race with China.

But I would also point out that many of the people who are making that argument are also the same ones against export controls. If you really believe we need to win the race against China, that would be an additional reason to support export controls. So I often ask about that.

There was a provision in the RAISE Act, the original version, that would cover models created by knowledge distillation, which is the specific technique that mostly Chinese companies have been using to catch up with American companies. Again, these same forces lobbied hard against that and wanted that provision pulled out. So they're not actually being straightforward, they're not actually being pro-American. They're using the argument to increase their profits.

It's a bogeyman.

It's a bogeyman. The last bit I'd say is that often safety and innovation go hand in hand. You think about the biggest capability jumps that we've gotten from AI recently, it's been things that came out of the safety community.

Agentic models were really made possible by chain-of-thought reasoning that came out of the safety community wanting to understand what was going on inside an AI model. And the jump before, reinforcement learning from human feedback, came out of the safety community.

These things are not necessarily in conflict. It's just that the market, in fact, undervalues a lot of the safety aspects.

I try to give people the perspective of, “OK, you wouldn’t use a Chinese AI, right? You wouldn’t trust it. If you’re European, why do you trust American AI?” The AI that's gonna win is going to be what's trustworthy; it's gonna be what's aligned.

If you talk to different members of the AI community, some of them are very much on the existential end of the spectrum. They're worried about global annihilation. You have others who are much more concerned about labor and job loss. When you’re lying in bed at 3 in the morning and you're worrying about AI, is there something that stands out to you that you just feel like we need to address with urgency? What's keeping you up at night?

My 7-month-old is keeping me up.

Other than that beautiful baby.

The catastrophic risks are real and need to be managed, and need to be prevented. The environmental impacts are real. When I'm awake at 3 am and it's 'cause of my kid, it'll lean me toward the impacts on kids. But I also think about what will accelerate and make other problems harder to solve. I think the labor impacts could actually make the politics around AI way worse.

I've talked to people that say, “Oh, if we start seeing unemployment grow …” You look at the history of societies that have large spikes in unemployment, especially youth unemployment, and especially young men being unemployed, that's generally not a politics that I like or that I think leads to productive solutions.

And there’s an urgency to it. I feel that more and more every day. Let’s talk a little bit about your race. It's a crowded one, right? You've got Jack Schlossberg, a member of the Kennedy family. You've got George Conway; you've got Micah Lasher, who was endorsed by the outgoing incumbent. How are you looking at your opponents’ campaigns? What sets you apart?

There are two campaigns in this race so far that have raised millions of dollars, plural, with an “s,” and those two campaigns are me and the super PAC coming against me. They really are the big opponent in this race.

When you're thinking broadly about how I stand out versus the field, I'm the only one with extensive private- and public-sector experience. You know that you're gonna get someone that can actually operate in a legislature, because I have and I’ve proven it.

I'm the only one in the field, I believe, that's ever had a security clearance. The only one with that degree in computer science. But I also think in a race where everyone's promising to stand up to Donald Trump, I'm the only one that his mega-donors are spending millions of dollars against.

They don't seem to be too worried about anyone else winning this race. They seem really worried about me winning this race.

You may also be the only one who quit a job over ICE.

I believe so.

I was at JFK Airport last week, surrounded by ICE agents drinking coconut water and eating chips. It was baffling. It was scary. It was intimidating. It was distressing. It was all these things, and I was just getting a flight to San Francisco. I am by no means someone who is really, truly vulnerable to what those agents are capable of and what they have already shown themselves to be willing to do. If you're elected to Congress, how do you deal with that?

I would say one correction on that is, they've shown that every American is vulnerable even if you’re not the target.

Fair, fair, fair. But I'm walking around with a US passport …

You and I have a lot of privilege in that regard, but they're coming after everyone. ICE needs to be abolished. Dismantled and prosecuted. There are crimes that have been committed here, and they should be held responsible to the fullest extent of the law. Not just the agents, that includes the people up the chain who made the orders. The rot of this agency has gotten so deep.

To be clear, borders existed before ICE. ICE is 23 years old. Immigration existed before ICE. This is not saying there should be no immigration system in the US, but this particular agency, which has ballooned to be one of the largest militaries in the world, and whose job is to go around the country scooping up our peaceful neighbors, has no role in a civilized society. The solution is to abolish them.

Trump is not going to sign a bill to abolish them. So we need to be taking every step we can to limit their power, to rein them in, to ensure they are not wearing masks, to ensure that they’re showing identification, to ensure they're banned from sensitive locations, to hold them accountable for any of the misdeeds that we’ve seen, and to ensure that they are not collaborating with our police departments.

I think there's a sentiment among a lot of Americans that basically asks of the Democrats: Why can't you guys do more? Why can't you be more coordinated? Why can't you be louder? Why can't you stop this from happening? I do think that the Democrats have a messaging problem. I think they could stand to turn the volume up significantly. But ultimately, there is actually only so much the Democrats can do right now.

I think both things can be true. I think we can be incredibly frustrated at Democrats in Congress and the party writ large for another strongly worded letter. And also realize that elections have consequences, and there have been some genuinely good moves by Democrats.

I mean, we've had more discharge petitions, where the minority basically takes control of the House floor, than we've had in 30 years. We've been able to actually legislate from the minority, which has never been seen before. We've seen Trump get out-maneuvered on some of the DHS funding finally.

We saw a real important framing around health care and Republicans’ willingness to let the Obamacare tax credits expire. Now, I think Democrats could have stayed stronger and actually gotten a solution there, but elections have consequences, and it is extremely important that we take back the majority in November.

I want to end with a quick game that we like to play, if you'll indulge me.

Sure!

It's called Control, Alt, Delete. What piece of tech would you love to control? What would you love to alt, so alter or change? And what would you delete? What would you vanquish from the Earth if given the opportunity?

[Laughs] OK. Control, like I am in control of it?

You are God, and you're in control.

Oh, wow. That's a scary thought. I would like to control every platform you use for coding, which is now mostly AI, to do vulnerability testing and cybersecurity checks at the point that it happens. I think we have so many cybersecurity holes, and this is only a bigger problem with AI.

We could have a much more secure internet; it would just mean slowing down basically everything. But if you were to automate that process and build it secure in the first place, that would be the thing that I would do.

I’m picturing you in front of like five computer monitors with like eight arms, controlling all of it.

I think I would need a few more than eight. But again, to be clear, it's not me doing the checks, because it’s setting it up as a default. We're gonna do actual penetration testing. We're gonna do, actually, at-the-point memory testing. We’re going to do all the things you're supposed to do when writing good code. Maybe AI will get there, but it is also introducing its own vulnerabilities.

OK, alt. What are we changing?

We're gonna alt social media.

Please.

The original vision was about following your friends, and it is now just algorithms feeding us whatever will capture our eyeballs. We should go back to that original vision where you were defaulted into just following your friends.

Are you pro-chronological feed?

Yes. I hate that term because it can still be an algorithm on what's most important to show. It doesn't have to literally be chronological. That's why I hate the term. Like, if I haven't logged on for a week and my friend got married, please show me that first, that's important.
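The distinction Bores draws — ranked, but ranked only over accounts you chose to follow — can be sketched in a few lines. A toy illustration with entirely invented fields (no real platform works exactly this way):

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: float      # unix seconds
    importance: float     # e.g. a wedding outranks a lunch photo

def friends_feed(posts, following):
    """Rank only posts from accounts the user chose to follow:
    most important first, ties broken newest-first. Engagement bait
    from strangers never enters, because the filter runs before ranking."""
    chosen = [p for p in posts if p.author in following]
    return sorted(chosen, key=lambda p: (-p.importance, -p.timestamp))

posts = [
    Post("friend_a", 100.0, importance=0.2),  # routine update, recent
    Post("friend_b",  10.0, importance=0.9),  # wedding news, a week old
    Post("stranger", 200.0, importance=1.0),  # viral bait: filtered out
]
feed = friends_feed(posts, following={"friend_a", "friend_b"})
print([p.author for p in feed])  # ['friend_b', 'friend_a']
```

This is why a non-chronological ordering can still honor the “friends-first” vision: the week-old wedding surfaces before today’s routine post, but nothing outside the chosen set is eligible at all.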

You only wanna see the content of the people you chose, correct?

Correct.

Seems very reasonable.

In fact, I carry the bill to do that in the state legislature in New York. I also carry a bill that is law in Utah, and should be national, that would allow you to take your data out of any social media platform and move it into another one—to require interoperability so that the platforms have to compete over you.

I didn't realize that was a thing in Utah.

It was passed last year. And lobbyists are really trying to pull it back. So it's important that other states step up. And to the bipartisan point earlier, it was passed by a Republican-controlled state legislature and signed by a Republican governor. So really a bipartisan issue moving that forward.

Good job.

It's called the Digital Choice Act. I'm a big fan.

Then what would I delete? There's so many real answers here. We've seen some horrifying chatbots that are trying to build sexual relationships with kids. We've seen the nudify apps, which we have taken steps to ban. There's a lot of real things that we need to ban. But what just annoys me, as a more controversial take, is: I can’t stand Slack. I can't do it. I've tried. It breaks me out of the flow-state every time.

Slack is like 80 percent of my life. I have it on my phone. Right after we finish this, I'm going to take my phone off of Do Not Disturb and I'm going to check Slack.

My 3:00 am nightmare is the Slack ping.

That’s a good one. We did an entire story about Slack noises and where they came from. I mean, from Satan, obviously. Would you go back to communicating with people via email? What are we doing here?

Talk in person. Not to be a Luddite. Text is fine. Email's fine. Voice calls are fine. Slack is just this weird middle ground. No one really understands what the communication protocol is. Do I need an immediate response or not? It's undefined. Different platforms are different.

It’s hard to read tone.

It's so hard to read tone. I have lots of friends that work there. I'm sorry. But that's my personal hot take.

How to Listen

You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how:

If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts and search for “Uncanny Valley.” We’re on Spotify too.
