Podcast Episode

Can This Orb Kill AI Bots?

Nick Pickles battled bots at Twitter and is now chief policy officer at Tools for Humanity, the Sam Altman-founded startup that reads eyeballs with an Orb. 


By Jon Bateman and Nick Pickles
Published on May 1, 2026


AI-powered bots and slop content are everywhere now, making it harder to tell who’s human. Will the Internet become so bot-ridden that it’s simply unusable? Or will people be forced to prove their identities online—giving up privacy and eroding democracy? 

On this episode of The World Unpacked, Nick Pickles joined host Jon Bateman to explore the brewing authenticity crisis, the geopolitics of privacy, and the surprising legacy of a 1993 New Yorker cartoon.

Transcript

Note: this is an AI-generated transcript and may contain errors


Jon Bateman: Nick Pickles, welcome to The World Unpacked. Right between us is the orb. I'm looking at a soccer-ball-sized, five- to ten-pound futuristic object that is meant to scan my eyeball and prove that I am a human. I have tons of questions about this that we're going to get to: How does this work? Will it work? What are the downsides? But before we get into that, I wanted to just zoom out and ask you about what problem we're all trying to solve here. I think many people have a sense that there is something deeply wrong with the internet right now, that there's some kind of authenticity crisis, a reality crisis. You don't know who you're communicating with, you don't know whether their content is AI-generated or human-generated. How do you see the problem online?

 

Nick Pickles: I think it's a fundamental challenge to the future of the internet. Bots aren't new. People using machine learning, people using automated tools to post content, create content, engage online, none of those are new. The difference is that if you take that technology and add what's coming through AI models, generative AI, LLMs, and potentially agents, it goes from being an irritation to an existential threat to the future of the internet. And so this question of whether you are engaging with a human or a machine becomes something that, if we don't solve it, the future of the internet, be it for governments, for citizens, for businesses, very well may just break.

 

Jon Bateman: Explain that, explain the stakes. Why and how will this break the online world? What's at stake here?

 

Nick Pickles: I think people have already got a sense, as you say, that something is changing around them. The number of posts they see from accounts that have a sloppy nature to them, AI-generated content. The comments underneath a thread they're looking at don't feel real. The back-and-forth conversation they have with people doesn't feel human. So the question becomes: will you continue to use those services, in this case social media? But in the case of other things, where you're trying to buy a ticket or book an appointment for something as simple as a DMV test, the question is, can you even access those services? It's almost an idea that the entire internet becomes so overwhelmed with automated traffic that services are just unusable, whether they're government, business, or social networks. An AI agent has the capability to pass those tests and then book the restaurant reservation, book the DMV test. It could participate in an online poll. It could perhaps create a company, it could perhaps register for citizenship, depending on which country you're talking about. So for me, the transition is from a point of irritation to a point where services we all use every day either cease to be available, because they go back to offline models, or people can't access basic government services online at all. And when that happens, you get a political response and a policy response, which is often to seek out more extreme forms of government.

 

Jon Bateman: I want to maybe pull apart two different ideas here, because we've been using the word bot, and you also use the word slop. To me, these call to mind two different facets of this authenticity crisis. One is about who someone is online. Are they who they purport to be, or are they not? And then the slop problem is that we now have the content itself being generated via AI. So a Reddit post or an X post could be just low-effort AI garbage, even if the person posting it is themselves human. Are these two problems interrelated? And how do you see that playing out?

 

Nick Pickles: You know, when I was at Twitter, we used to talk about the behavior layer and the content layer. And they're distinct. Content issues require one set of responses, and behavior-layer issues require a different set. I think what's happening now is that they used to be relatively standalone, and now a bot, let's call it a super bot, can create content, manage its own account, reply to people, engage, and share its content far more intelligently. What that means is there may not even be a human in the loop. This may be a self-sustaining, self-generating product, posting content to public platforms. There's a financial incentive, because a lot of these platforms have revenue sharing. If you look at YouTube, for example, one of the questions that's come up is that AI-generated historical videos are actually pushing down real historians, to the point where real historians aren't generating revenue. And the slop creators, whether there's an individual manually prompting those videos or it's just something running itself, now start to suck up the revenue. If you scale that out to the whole internet, you have a transition of advertising dollars, real revenue, going from human creators, media organizations, and experts to AI-generated content that is being pushed out at scale purely to displace real human creation.

 

Jon Bateman: I guess the other thing that could happen, when we're talking about the financial model that keeps the internet going, is that, whether we like it or not, most of the free content available online is supported by advertising dollars. And the whole premise of that advertising ecosystem is that when an ad serves up on a phone or a computer somewhere, there is some human being looking at it. Whether they're ignoring it or clicking on it, it is exposing the advertisement to a person who could be a marketing lead in some sense, and that has economic value, and that can sustain free content on the internet. But if more and more of the internet is becoming AI bots, that starts to erode the premise that a click or a view or an impression on the internet actually represents anything of economic value to an advertiser. Is that a serious threat?

 

Nick Pickles: I think it's a serious threat if you're an ad-funded platform, and if you are an advertiser, it's incumbent on you to ask those platforms: how do I know a real human saw this and it isn't a bot? If the majority of traffic on the internet now is crawlers, bots, automated non-human traffic, then the logic flows that it's highly probable a significant amount of the impressions on those large platforms are coming from non-human sources. So if you're an advertiser and you're paying for that, it becomes a real question of whether you're actually getting value for money. And if you care about supporting human creators, artists, and journalism, is doing it through the ad-funded model the best way to do that?

 

Jon Bateman: Yeah, there are polarized views on the goodness or badness of ad-funded models. I would just say it's a reality online that a lot of the content we read and watch and enjoy is ad-funded. So if that whole ecosystem were to collapse, because one could no longer assume that advertisements were being viewed by humans rather than bots just passing by, I think people would experience the effects of that. So what have you seen in terms of the trajectory here? You mentioned you spent ten years at Twitter. You were part of a platform on the receiving end of these bots. Platforms know the bots are there. There's a lot of effort to find and combat them. How good a job are they doing, and who has been winning this battle over time?

 

Nick Pickles: It was always an arms race. Honestly, we would come up with an innovation, and then a group or an individual would find a way around it, and you would try new deployments. The question is how sustainable that is when AI and newer technologies decrease the cost of creating fake accounts at scale to near zero. That's an order-of-magnitude change from where it was even five years ago. And how you tackle that problem long-term, I think, is an existential threat to any platform with an essentially free user experience, be it e-commerce, social, gaming, or dating. All of those platforms now face a pretty existential challenge around authenticity.

 

Jon Bateman: Yeah. And I think that is one of the reasons why Elon Musk implemented his $8-a-month fee for getting a blue check. I mean, there were other reasons too, revenue among them. But his claim was that this would help keep the bots at bay. Now, I will say my own experience of the X platform is that AI slop is dramatically on the rise. It's now less clear to me that what I'm reading is any kind of informed perspective versus something that was just cranked out by someone looking to maximize engagement.

 

Nick Pickles: So I use the Following feed every day.

 

Jon Bateman: Yeah, just to limit what you're seeing to people that you have chosen to follow.

 

Nick Pickles: And I have done that for a long time. I was at the company when we introduced that; I think we were the first company to introduce the ability to just see a reverse-chronological feed. The other thing is that in the past ten years you've seen the emergence of algorithmic content recommendation based on the content itself, whereas the question of who to follow on a social media platform was often driven by the author. Now you look at TikTok, and it's pumping content, not pumping people. And so for me, I still get way more value from a platform like X, where I can go and follow individual authors from whom I want to hear diverse opinions, as opposed to platforms that, you know... last summer I was heavily consuming Oasis content. I don't want another hundred Oasis videos in my feed. The model of some platforms is to give you more of what you are consuming, whereas an author-based model still lets you choose.

 

Jon Bateman: Yeah, and there's a bigger lesson here. The rise of these false personas and AI slop is now forcing you, the user, to constrain your behavior on the platform in order to confine yourself to a more trusted space. I will just say, for me, that doesn't entirely solve the problem. We've got this big war going on in Iran right now. I have been following some of the most credible commentators, professors, people who have written books on issues related to the war. I think some of them are using AI to write their tweets. What it creates is just a seed of doubt in my mind about whether what I'm reading stemmed from the kind of intellectual process that I'm used to from that author, or some whole new process that is more circumscribed. Do you have that experience?

 

Nick Pickles: If you zoom out and look at the mindset people now have, and this is partly answering your question, partly a bigger point: when I first started using the internet, my mindset was default trust. I assumed I was talking to a person, I assumed that person had good intentions, I assumed I could share the computer game I was playing, the music I was listening to, the gig I was going to, without concern. Now I think people using the internet come at it from a point of default distrust. You assume what you are seeing is not real. You assume the person you are speaking to does not have good intentions, may not be who they say they are, and may be altogether not human. That mindset shift is pretty profound, and I think it changes the way society interacts with technology at a really foundational level.

 

Jon Bateman: I just think it introduces a lot of costs, right? Doubt and insecurity are a huge tax on transactions or interactions that require trust. If I'm in a highly governed space, there are a lot of assumptions I can make about the people around me, about the kind of vetting they've been through. Whereas if I'm at the mall or on a street corner, I can't make those assumptions, and I have to engage with people in a radically different way. It seems to me that the internet is becoming more like this ungoverned street corner and less and less like, you know, an airport.

 

Nick Pickles: Or an office. I'm not sure it's ever been as secure as an airport. It was built by amazing people, and when a community is smaller, generally speaking, it faces fewer of the challenges and issues that come with scale. The bigger some of these platforms got, and the more people got connected to the internet, the more bad actors you brought into those spaces. So I think that's in part a function of the internet now being interwoven across all of society, as opposed to just some bubbles here and there. People are more wary of being scammed. And how does that change their behavior? Interestingly, one of the things they do is look for things they trust, which tends to be big brands. So what you have is that people, out of fear of being scammed, out of fear of buying something that isn't going to work, change their behavior and go to the bigger brands, which is really bad news if you're trying to start a small business. Even though you may run an awesome shop and have a great online presence, just because you're not as well known as some household brands, people are shifting their spending away from you.

 

Jon Bateman: I relate to that. I think that's tied to my increasing use of Amazon, for example. Now, Amazon has its own problems with counterfeit goods and the like. But with that platform, I at least know what to expect, and I know what kind of recourse I might have if I were scammed. Versus if I find the same item, maybe for a better price or faster shipping, on another site that I haven't used before, there's that seed of doubt about what kind of risks I could be taking on. You mentioned the history of this. People may remember the famous New Yorker cartoon: "On the internet, nobody knows you're a dog." Maybe I'm dating myself, and the Zoomers out there will have no idea what I'm talking about, but this is...

 

Nick Pickles: The New Yorker's print magazine comes in paper.

 

Jon Bateman: Right, like, what is The New Yorker? What is a cartoon? It's a meme, you know, drawn with a pencil. A meme that's static, with no anime elements or wojak characters. But this was a classic New Yorker cartoon. It got put on coffee mugs and t-shirts. It was from 1993. You know, another blast from the past: the movie You've Got Mail, with Tom Hanks and Meg Ryan. That cartoon and that movie had a kind of lightness to them. There was a kind of frivolous, tongue-in-cheek idea around the lack of authenticity on the internet. Actually, it could even be seen as liberating. You could try on a new identity. You could shed the assumptions people make based on your age, race, sex, disability, what have you. Now I sort of feel like the mood is more one of doom and depression.

 

Nick Pickles: You know, you hear people talk of the dead internet theory; there's the shit internet theory. The truth is probably somewhere between the two. I mean, if you can't use something because it's full of garbage, isn't that the same as it not functioning?

 

Jon Bateman: We're somewhere between shit and dead, that's the decision that we're facing.

 

Nick Pickles: We were going for optimistic, right? No, to go back to the cartoon: anonymity and pseudonymity were incredibly important to the evolution and growth of the internet, and I think they're foundational to the way that, in a free society, the internet has enabled greater speech, greater activism, greater participation. There is a choice about the kind of society you build and the kind of technology you build. And the idea that you don't have to tell people who you are every time you want to use an online service is so important, but I would actually say it is being eroded, either consciously or unconsciously, as we start to tackle some of these problems.

 

Jon Bateman: Yeah, I'm so glad you brought that up, Nick, because we don't actually want perfect identification and accountability on the internet. We want something in the middle. There are places in the world, I think China has something like this, where to use most significant online services you need to provide a government ID. So you could say that everything we've been talking about, this crisis of authenticity, this crisis of trust, these are solved or solvable problems in an authoritarian society where you're willing to totally give up your privacy and fully expose yourself to government or corporate tracking. But I don't think we want to do that here. So what is the right level of anonymity and pseudonymity that we want in a free society? How do we even describe the middle ground we're looking for?

 

Nick Pickles: I think one of the basic principles is that we should only have to share the amount of information necessary to answer the question we're being asked, and share a binary answer. So: are you over 18? The answer is yes or no, not what my birthday is. It means building into technology and online services the approach of only collecting the data that is absolutely relevant, and then collecting it in a way that doesn't involve personal data. I remember two or three years ago in Australia, platforms were just saying, email us a copy of your passport, which from a privacy-norm-setting and cybersecurity perspective is all terrible. Inevitably, with the third-party companies that get used, there's a data issue; they get hacked, they lose the data. So we have to start thinking about how to answer these questions in a structurally much better way now, because the questions are only going to get more profound as the AI age grows.
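The binary-answer principle Pickles describes, share one bit rather than the underlying record, can be sketched in a few lines. This is a hypothetical illustration; the function name and date handling are my own, not any platform's actual implementation:

```python
from datetime import date

def is_over_18(birthdate: date, today: date) -> bool:
    """Return a single yes/no bit; the birthdate itself never leaves this check."""
    try:
        eighteenth = birthdate.replace(year=birthdate.year + 18)
    except ValueError:
        # Feb 29 birthdate when the 18th-birthday year is not a leap year
        eighteenth = birthdate.replace(year=birthdate.year + 18, day=28)
    return today >= eighteenth

# The service receives only the boolean, never the date of birth.
print(is_over_18(date(2000, 5, 1), date(2026, 5, 1)))  # True
print(is_over_18(date(2010, 5, 1), date(2026, 5, 1)))  # False
```

The point of the design is what the caller does not get back: the verifier can gate access on the boolean without ever storing a birthdate it could later leak.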

 

Jon Bateman: So let's move into discussing some of the potential solutions, current and future. You have a solution; we're looking at it right now. It's this orb. Before we get to the orb, though, I think we should set the table with the ways society, companies, and governments have tried to solve this problem up until now. Many people will have used a CAPTCHA before, right, where you're asked to click on the photos that include a traffic light. It's always traffic stuff. That's one way to try to prove humanness: create some technical challenge or puzzle, a hoop that someone needs to jump through, that we believe a human could easily solve but a computer could not. What do you think of that?

 

Nick Pickles: I don't want to shock you, but AI has now reached the point where it can correctly identify the bicycle in the boxes. We've been past that point for a while. Various research shows that AI has been better than humans at those puzzles for a while now. So why are you still seeing them? I did one today, you say. Well, I was talking to someone from a bank, actually, and the way they looked at it is: they know the problem's there, they know it's costing them money, but the cost of ripping out your old architecture and building something new comes with a business cost, a capital cost, and a bandwidth cost. So there is an assessment made in businesses every day, which is basically: how expensive is the problem right now, how expensive is changing to something new, and we'll just eat the cost for as long as we can rather than shifting beyond it. You're still seeing CAPTCHAs on websites because that's the solution that was built up over several years. But are those CAPTCHAs meaningfully stopping advanced systems? No, is the answer.

 

Jon Bateman: That is so important, this idea that companies need to reach a pain threshold before solving the problem. What strikes me is that pain for the company is different from pain for the user and pain for society, right? On a social media platform, for example, if I encounter a bot or slop, that's bad for me. It might be bad for the platform if I get so frustrated that I leave, or if an advertiser gets so concerned that they stop buying ads. But it also has some benefits to a platform if that account or content is getting high engagement because it is exquisite slop, or if it adds to a metric like daily or weekly active users that investors and advertisers care about. So it seems like it's actually not a good system if we're just waiting for companies to reach their pain threshold. That could be different from my pain threshold.

 

Nick Pickles: And I think you'll see some companies try new technology. You may see some new companies emerging. Actually, one of the things that happened recently was Digg.

 

Jon Bateman: This is a kind of Reddit-like site.

 

Nick Pickles: If you know the "no one knows you're a dog" cartoon, you probably know Digg. Are we that old? Well, the management team put a statement out basically saying, we're closing down. And the reason for it was that they were expecting challenges with bots and spam, they put in place what were in their view state-of-the-art defenses, and they were just overwhelmed.

 

Jon Bateman: Yeah, it's an interesting example, because apparently one of the top Google search terms now is "Reddit." In other words, and I've done this as a new parent, you can type in "best organic baby formula," or you can type in "best organic baby formula Reddit." Adding "Reddit" is a signal that I'm specifically interested in diving into a human discussion, where I can look at a back-and-forth among individuals who have actually faced this problem and read their discourse, their thought process, and their experiences. So adding "Reddit" to the end of the Google search is intended to bring me into that human space and take away the slop. But Reddit itself is increasingly bot content. And so...

 

Nick Pickles: ...it's not going to work anymore. Well, Reddit has been talking about this publicly; they're thinking through how to tackle these challenges. And then when you get to Google searches, the web 2.0 kind of interaction with that data, the next evolution is that you'll be interacting through an LLM trained on a bunch of sources, Reddit probably included, where things like someone's joke post on Reddit, that the way to make the cheese on your pizza stickier is to use kids' glue because it's edible (and no, that's not an urban legend, it really happened), get recycled into an answer.

 

Jon Bateman: ...into a Google AI Overview.

 

Nick Pickles: So there's the question of how AI companies will know that the training data they're using is content from humans. I think Reddit and X are pretty unique in this sense: they are real-time, living, breathing platforms where minute by minute, as world events change, geopolitical events, sporting events, people are posting about them in real time. That's getting pulled into LLM models.

 

Jon Bateman: Yes, and just to speak to the real-time value here: when the US attacked Iran, there was an image that circulated of the White House Situation Room monitoring what was happening during that initial attack. And on the screen, one of the things that was up was just a running feed of X with the search term "Iran." So there's tremendous value in these spaces, and they're increasingly poorly governed and overrun by AI. That's a real problem, right? Some of our best sources for unique types of information, as many problems as they have, are increasingly overrun. Okay, so CAPTCHAs, that's not going to work. What are the other solutions? Many people may have had the experience of providing a credit card number to prove that you're real, or prove that you're willing to pay for something, to create a little bit of a cost, even if there's no charge at all. Is that a good solution?

 

Nick Pickles: Again, it depends on availability: if you're a bad actor, can you get hold of credit card data online to bypass those systems? The answer seems to be yes.

 

Jon Bateman: Okay, so your sense is that credit cards have the same problem. There are Russian cybercriminals trading these things by the thousands. So that's not a complete solution. Government ID, we've already talked about that; it just seems to be a huge privacy challenge. Is there a way I can provide my government ID to a website or online service without taking on that privacy issue? Is there some kind of middle layer that could solve that?

 

Nick Pickles: Yes, there's technology called zero-knowledge proofs, for example. To some extent, though, you have to step back and ask: what's the question you're answering? Is the question, are you a human? Well, providing a photograph of a government ID doesn't answer that question. Why not? Again, we're old enough. I don't know how you got your fake IDs, but I think I physically had to cut something out, find a laminator, and make a student university ID thing that I carried around in my wallet. Apparently, things have changed since then. You can use gen-AI products to make a selfie of you holding a super-realistic, flawless driving license. And then you send that photograph to an online service provider who sees it and goes, oh yeah, OK, the date of birth says you're over 21, State of Maryland driving license. So, number one, there's the ability to validate: is that a real government credential?
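One way to picture the minimal-disclosure idea behind zero-knowledge approaches, greatly simplified: a trusted issuer attests to the predicate, so the verifier learns one bit and never sees the birthdate. The sketch below uses an HMAC as a stand-in for a real signature scheme; all names and keys are hypothetical, and a genuine zero-knowledge proof would be cryptographically far stronger (the verifier would not even need a shared issuer key):

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"demo-issuer-secret"  # hypothetical; a real issuer would use asymmetric keys

def issue_attestation(over_18: bool) -> dict:
    """Issuer (e.g. a DMV) attests to the predicate only, not the raw birthdate."""
    claim = json.dumps({"over_18": over_18}, sort_keys=True)
    sig = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def verify_attestation(att: dict) -> bool:
    """Verifier checks integrity and reads one bit; no birthdate is ever transmitted."""
    expected = hmac.new(ISSUER_KEY, att["claim"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, att["sig"]):
        return False  # tampered or forged attestation
    return json.loads(att["claim"])["over_18"]

token = issue_attestation(True)
print(verify_attestation(token))  # True
```

The structural point matches the conversation: the website holds a signed yes/no, not a passport scan, so a breach of the website leaks nothing worth stealing.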

 

Jon Bateman: I guess they could always try to match it to some government database, to prove that it's a real barcode representing a real item in the ID database.

 

Nick Pickles: Yeah, and that becomes a question of: now you need a government database that is exposed to any private company that wants to use it to validate identity documents. So to solve the problem of age verification, you've built a complete new government database of every citizen, which is then a vulnerability people will try to exploit. All to answer what should be a relatively simple question: are you over 18, or over 16, or whatever the threshold is.

 

Jon Bateman: t's an interesting point. I guess the US, for example, I mean, we're unique in our system of federalism, but there is no single database that has all

 

Nick Pickles: The UK, for as long as I've been in policy, has been having a pretty existential debate around whether it wants a national identity card. They don't have one. Most countries have passports, but even within countries, they're not universally distributed. The US is, you know...

 

Jon Bateman: Yes, probably uniquely low rates of passport-having here, yeah.

 

Nick Pickles: Um, like

 

Jon Bateman: Why go anywhere else?

 

Nick Pickles: Yeah, I mean, I live here, I love it. So there's that question of creating an enormous technological bureaucracy to answer what should be a simple question. Again, how we looked at this, back to the dog: separate the questions. We're not trying to solve who you are; we're trying to solve whether you are a human, because in our view the majority of online transactions don't need to know who you are. If you can strip out the non-human traffic, you have a situation where everyone on your social network is a real human, everyone trying to buy a ticket for a concert is a real human, everyone you're swiping on a dating app is a human. That is the challenge those platforms are facing. They don't need to KYC, know your customer, all of their users. And it goes back to that point, again from my experience at Twitter: Twitter's value, X's value, to democratic society would be an order of magnitude worse if X were required to know the full name, date of birth, and Social Security number of everyone using the platform. The ability to use it in a pseudonymous way is intrinsic to its value as a platform for freedom of speech.

 

Jon Bateman: Yeah, I agree with that. Facebook has a real-name policy. I think it's probably relatively poorly enforced, but that's their policy. And the discourse there is different; it is shaped by people's awareness that employers, friends, and others can identify them. Whereas X, for example, has actually spawned this whole group of, I think it would be fair to call them intellectuals, who are writing anonymously.

 

Nick Pickles: In Japan, again, it was super common. People would have a work account, a burner account, a fandom account, a gaming account. It's super common to have multiple expressions of the same person, some more identifiable and some less.

 

Jon Bateman: I'm told the kids on Instagram do this as well, but I have no personal knowledge of such matters. So let's talk about biometrics now. We're getting closer to the orb, right? Biometrics would be one way to prove that you are human, and under certain circumstances, who you are as a human. This is the route that Tools for Humanity has taken: you built this orb that scans your eyeball. There are other approaches too, fingerprints, Face ID. Why scan my eyeball? What are you learning from that, and how does it help you identify me as a human?

 

Nick Pickles: So, key point: it's a photograph. Does "scan" just sound too creepy? Well, I think it's a different process. This is a state-of-the-art camera with a number of lenses in there, designed to answer two questions. First: is this a real, living person in front of me? So it's not an iPad being held up, it's not someone trying to game it, it's not someone wearing contact lenses to try to deceive the camera. And the second question, which you're right that we use iris biometrics for, is: have I seen you before? Are you unique? The reason for that is we think it's incredibly important, and intrinsic to the value of the World Network, that each human can only have one World ID. When we were looking at the biometrics, and I would recommend people read the white paper the team spent several years on, fingerprints don't scale to the whole global population. Once you get above a certain dataset size, they start to duplicate.
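The "have I seen you before" check is, at its core, a nearest-match search over binary iris codes. Published iris-recognition work (Daugman-style iris codes) compares codes by fractional Hamming distance; the sketch below illustrates that general idea only, with a placeholder threshold and code length, and is not Tools for Humanity's actual pipeline:

```python
def hamming_fraction(code_a: bytes, code_b: bytes) -> float:
    """Fraction of bits that differ between two equal-length iris codes."""
    if len(code_a) != len(code_b):
        raise ValueError("iris codes must be the same length")
    differing = sum(bin(a ^ b).count("1") for a, b in zip(code_a, code_b))
    return differing / (len(code_a) * 8)

# Placeholder threshold: two captures of the same iris differ in far fewer
# bits than captures of two different irises.
MATCH_THRESHOLD = 0.32

def seen_before(new_code: bytes, enrolled_codes: list[bytes]) -> bool:
    """Uniqueness check: does this code match any already-enrolled code?"""
    return any(hamming_fraction(new_code, c) < MATCH_THRESHOLD
               for c in enrolled_codes)
```

Under this scheme, re-enrolling the same iris (even with a few bits flipped by capture noise) would be flagged as a duplicate, while a genuinely new iris enrolls cleanly; that is what enforces one ID per human.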

 

Jon Bateman: It is, they're not actually totally unique despite the mythology that they are, yeah.

 

Nick Pickles: One of the, you know, challenges we face now, which was crazy: I joined the company in November 2024, and a month later I was in London, and there was a fairly well-known bank that was just launching "your voice is your password." And we found this remarkable because, you know, voice was one of the biometrics that we looked at. And it became pretty clear that voice at this point can be near perfectly, if not perfectly, cloned using AI products.

 

Jon Bateman: It's astonishing that a major financial institution would go that route this late in the day.

 

Nick Pickles: Having worked in tech, my guess is there was someone who had been trying to get that shipped for so long, and they got it through the compliance review and the legal review, and eventually it's like, we're going to ship this, and it probably got turned off a week later when they realized it was gameable. Technology company bureaucracy is real. But voice and face are biometrics you could use, except that AI now absolutely compromises their value, certainly as biometrics for this kind of system.

 

Jon Bateman: So why couldn't I create an AI-generated photograph of an iris and hold that up to the orb?

 

Nick Pickles: So that's the reason there's not just one sensor in this. There's a whole range of sensors looking at things like heat, so the heat pattern of an iPad versus the heat pattern of a face. There's actually an AI. So to go back to the architecture point, one of the things the team decided was that they wanted the data processing to happen locally. So as a result, the compute is happening on the orb. There is an AI module running several models inside the orb, and one of the models that's running is to make sure that what is being seen through the front of the orb is a real human. And the benefit of processing it on the orb, from a privacy point of view, is that the personal data flows from your smartphone to the orb, the orb takes the photographs, and the personal data flows back to your smartphone. That's a fundamentally different, decentralized way of building something. Most companies would have processed this in the cloud and had a server that stored all this data centrally.

 

Jon Bateman: So what happens to the photos of my eyeball or the mathematical expression of those photos in terms of some kind of cryptographic output? Where does it live? Who has access to it? And then how could I then reach back and touch that later to prove to someone that I'm a human?

 

Nick Pickles: So the data stays in the secure enclave of your device. It's encrypted within it; it's not a regular photograph that sits on your phone and can be accessed whenever. It's stored in the secured part of your device, and we don't have a copy of that. So say you're using your dating app. You'd say, I'd like to verify I'm a human. That brings you into our app: do you wish to share with X app that you're a human? Yes. And then it shares back an anonymous answer: yes, is human. It's not sharing any personal data. It's not sharing any of those original images. It's just sharing the binary answer, sharing your World ID with that service. And I think that goes back to this question of, you know, we're answering the question, are you a human? We're not telling the dating app who you are. We're not sharing government ID, anything like that.
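The data flow Nick describes can be sketched in a few lines: the enrolled credential never leaves the device, and the only thing that crosses to the dating app is a yes/no attestation plus an opaque proof token. This is a drastic simplification of Tools for Humanity's actual protocol, which uses zero-knowledge proofs rather than the HMAC stand-in below; every name here is hypothetical:

```python
import hashlib
import hmac

# Assumption: a per-device secret provisioned at enrollment, which in a
# real system would live inside the phone's secure enclave and never leave.
DEVICE_SECRET = b"stays-in-the-secure-enclave"

def attest_humanity(app_name: str) -> dict:
    """Device-side sketch: answer 'is this a verified human?' for one app
    without exposing biometric data, identity, or the secret itself."""
    tag = hmac.new(DEVICE_SECRET, app_name.encode(), hashlib.sha256).hexdigest()
    # The response carries a boolean plus a proof token, nothing personal.
    return {"is_human": True, "proof": tag}

response = attest_humanity("example-dating-app")
assert set(response) == {"is_human", "proof"}  # no images, no government ID
```

The design point is what is absent from the response: no iris images, no name, no document number, just the minimum answer the relying service asked for.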

 

Jon Bateman: Are you saying I am a human, or are you saying I am a unique human?

 

Nick Pickles: No, that's absolutely right. Unique human.

 

Jon Bateman: So for example, if I were to go on a dating app and behave abusively on that platform and be kicked off, and then I were to try to create a different account under a different name, but I still have my one World ID, whatever you want to call it, would that be spotted?

 

Nick Pickles: Yeah. The reason, again, the uniqueness is important is that if you try to reuse that, whether you are creating a new account having been removed or trying to create a second account, the service provider would know this World ID already has, or previously had, an account on their service. That matters from an integrity point of view. Whereas if you didn't have that uniqueness, if it was just proving you were human, well, then you could create endless fake accounts, each with the credibility that they are human. So the uniqueness piece is really important to bring the integrity back to those online spaces.
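Proof-of-personhood systems commonly implement this uniqueness property with a service-scoped pseudonym, often called a nullifier: a value derived from the user's secret and the app's identifier. The same person always maps to the same pseudonym within one app (so a banned user re-registering is caught), but to unlinkable pseudonyms across apps (so services can't correlate users). A minimal sketch, assuming a plain hash in place of the zero-knowledge machinery a real deployment would use; all names are hypothetical:

```python
import hashlib

def nullifier(user_secret: bytes, app_id: str) -> str:
    """Service-scoped pseudonym: stable within one app, unlinkable across apps."""
    return hashlib.sha256(user_secret + app_id.encode()).hexdigest()

alice = b"alice-device-secret"
# Same person, same app: the dating app sees the same ID both times,
# so a second registration after a ban is detectable.
assert nullifier(alice, "dating-app") == nullifier(alice, "dating-app")
# Same person, different apps: the IDs differ, so the two services
# cannot link her accounts to each other.
assert nullifier(alice, "dating-app") != nullifier(alice, "gaming-app")
```

Each app only ever stores pseudonyms scoped to itself, which is how uniqueness and cross-service privacy coexist.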

 

Jon Bateman: So this is an early stage company. This is an early stage product. I think you've, I'm going to say scanned again, photographed maybe about 17 million people around the world. There are all sorts of questions I could ask about whether people will go for this, whether this company will work. I think that's less interesting to me than: if this does work, what portion of the problems that we've been talking about will be solved? We've been describing in pretty grave terms a kind of authenticity crisis, and maybe the internet about to fail in some basic way, service degradation, so on and so forth. So picture a future in which every single human being has a World ID through this orb. How does that differ from where we live right now? What does that solve?

 

Nick Pickles: I think the key thing is that you go back to this mindset of: you are engaging with real people. Does it solve the fact that some of those real people will then potentially do bad things? No. And that's something that society has always had to deal with. But look at this through the mindset of the amount of resources right now, from either government or the private sector, that are being absorbed trying to combat the fake accounts, the bots, the spam. Imagine if you could free up that resource to then focus on going after the bad actors. I think that is something that is underappreciated.

 

Jon Bateman: That's a really interesting way of looking at it. Yeah, I mean, we want to reduce to the extent possible the tax of distrust, insecurity. There will still be some left. Let me just hit you with a bunch of ways in which, if I were a bad actor, I would try to work around what you've built, and you can tell me if that would work or if you've got some way of dealing with it. So one issue might be that all the humans have these world IDs, but maybe they start passing them around. Maybe they start selling them. Maybe you can steal someone's world ID. Maybe if I'm a Russian spy and I want to create some botnet, like was used to influence the 2016 election, I just go to a bunch of Russians and say, give me your world IDs, or I pay them. What's to prevent that from happening?

 

Nick Pickles: So we have a bunch of security in place, particularly when it comes to people sharing world IDs to stop that happening, because we did see people trying to do that. So without going into too much detail, there's device level protections, and then there's, if you like, network level protections against that.

 

Jon Bateman: So it kind of has to live on your phone in some way, and then you guys are monitoring the network and looking for suspicious behavior.

 

Nick Pickles: Yeah. And I should add, one of the other decisions we made is that the orb itself is open source. Because something else that someone could attack is the hardware itself. And so the ability to validate that this is a trusted device that's running as it should be is critical.

 

Jon Bateman: Yeah, so why couldn't I either get one of these orbs, crack it open and start messing with it, or create one of my own through my own component parts and then start spoofing what you're doing and generating my own fake IDs?

 

Nick Pickles: There's an array of technology in the orb itself that's designed to detect and deter tampering. And again, I'm not going to give too much away about that, but a significant proportion of our investment went into that.

 

Jon Bateman: Let me see. I think this is an interesting problem, though, and I respect your desire not to give away the keys to the kingdom and create a public map of security vulnerabilities, because bad guys are listening right now. I'm sure I've got a bunch of nefarious listeners as part of my audience.

 

Nick Pickles: This has been, not this version, but an earlier version did go to DEF CON.

 

Jon Bateman: OK, so the big hacker conference.

 

Nick Pickles: We have actively sought out that kind of scrutiny, because for us, obviously, the core thing at the heart of the project is the orb. So we need to be able to demonstrate and earn people's trust.

 

Jon Bateman: Yeah, that's what I was going to get at. Because on the one hand, you don't want to create this treasure map for the bad guys, saying, okay, this is how you could try to hack us if you wanted to. And that suggests: don't give away too much information about how you're securing it. On the other hand, you need to create enough trust that people want to become users of the system and companies want to incorporate it into their services. And also, you know, in that future internet you portrayed, where I could just go online and relax and say, I know the people on Reddit really are people. How can I trust that without being able to crack open the orb and know everything that you're doing to secure it?

 

Nick Pickles: Well, some of that is going to come from, if you like, the network level: people you trust, institutions you trust, will audit the technology. You know, a key part of any company integrating new technology is doing that kind of deep dive into the hardware and the software.

 

Jon Bateman: Right, by independent professionals who can validate it.

 

Nick Pickles: Yeah, and so we've had a number of those security audits done. The other piece is that things like the zero-knowledge proof libraries we use are open source. Again, we're really leaning on open source technologies as a way for people to be able to investigate and audit what we're saying. And our hope is, again, we're approaching this as an existential question, but we don't see this as a situation where we are the only solution. There's going to be a range of different solutions. We think this is the most advanced way of solving the are-you-a-human question, and then other technologies will be built on top of that which may solve the identity question, for example.

 

Jon Bateman: I think part of what you're alluding to also is fears of centralization. We'll go back to X, always a good whipping boy. They have had high profile user accounts of politicians and celebrities compromised or stolen in part because there's an insider threat element. Somebody might work on the security team of one of those companies and just get paid off by a criminal or they could be hacked by a state actor. You've created this incredibly valuable set of information, and the more you succeed as a company, the more valuable the information is, and then you eventually attract these threats who want to hack or compromise you.

 

Nick Pickles: No, and that's the reason we have made some of these fundamental architecture choices: to build this as a privacy-preserving, decentralized, open-source approach from the ground up, which runs counter to the model that pretty much any large tech company has built, and then to add a layer of zero-knowledge proofs behind that, so that the kind of tracking you see online, where you're followed from service to service, is actually designed out. Doing that from the beginning makes it harder to build a business at scale, but for the long term it actually reduces the risk of some of those issues, because centralization is a perfect example of how, by centralizing data, you create a focus for attack. By decentralizing, you change that threat model. But it also goes back to the point of not trying to build a single piece of architecture that solves every problem. So we're not trying to solve who every person on the internet is and what they are doing. We're trying to solve: is the person using this service you're engaging with a real human? And then we'll leave it to other services to solve the other pieces.

 

Jon Bateman: Now, I think one interesting question for services like this is that a bot is not always nefarious, right? The idea of AI agents swarming the internet can seem threatening, disturbing. It can create all sorts of havoc and confusion. But also, the pitch from the AI companies is that these agents will be acting on your behalf, doing things that are inconvenient for you. They'll go out and book your haircut appointment, right? And I've started to play around with some of these agents, as have others. Where does this proof of human fit into a world where we all increasingly want agents to work on our behalf, and maybe a typical internet experience for a user is some mixture of me declaring an intent and then an AI agent executing on that?

 

Nick Pickles: I think, well, to your point, AI agents are kind of the new evolution of bots. There have always been good bots. There were bots doing public interest work, there were bots that were monitoring things. My favorite was always the bots that watched who was editing Wikipedia pages from the IP addresses within various parliaments and congresses around the world, and would just post tweets of the edits being made.

 

Jon Bateman: Right, showing that a staffer was like burnishing his boss's own Wikipedia page.

 

Nick Pickles: Or tarnishing someone else's, which is always more interesting. But I think agents are going to raise the same questions. There will be some settings where a service may say they're totally comfortable having an agent transacting with that service, using that service, without authentication. There are going to be other environments, I think probably the majority, where if you're a business, if you're a government, if you are a hospital, you will want some level of authentication. And again, maybe you just want to know a human has authorized this. So you're booking a haircut, and maybe the hair salon wants to know: has a human authorized this agent to book this appointment, or is it an agent trying to book any available appointment time on the Eastern Seaboard of the US? They'll want that authentication. The same is true for engaging with, say, an agent doing your taxes or dealing with a medical provider. They'll want to know, in that situation, that it's you who has authorized this agent, as opposed to someone who's trying to see your medical records. So for agents to work at scale, and to be something that businesses and governments want to integrate into their services, authentication is going to be a really critical way of managing that inflow of traffic.

 

Jon Bateman: So it sounds like you're saying that on the future internet, each platform or company or service provider is going to be making its own choices about the type and level of authentication it wants in its community or its space. Do you need to prove that you're a human, or maybe we don't even need that? Do you need to prove you're a specific human, or maybe we don't even need that? Are we going to allow human-agent teams to be represented here, delegated activity? And so each platform is going to make its own choice, and we're going to have maybe more restrictive spaces and maybe looser spaces. And then we as users will face our own choice about what kinds of transactions we want to do in what kinds of spaces.

 

Nick Pickles: And I think questions like liability will then come into it. So for example: a platform has verified that you're a human, versus an AI agent posted something, an AI agent bought an ad, an AI is running a promotion to try and get people to visit a website. Those are all different types of interactions. So maybe for the organic social media post, you'll allow an agent. For an advertisement, maybe you'll have a higher level of authentication, or you'll at least want a human to sign off: yes, I want to buy this advertisement. Where you do commercial transactions, there'll be a higher level again, probably some degree of know-your-customer. Look at the stock market: people will have agents making trades, agents doing things that traditionally have been done by brokers, agents talking to agents instead of lawyers talking to lawyers. All of those things will require a liability framework, a legal framework. And at the root of it, I think it's going to be incredibly important that someone can say: is this agent being directed by somebody? And again, not who that person is, but is this agent acting with human authorization, or is this an agent acting on its own initiative? Being able to separate those two out is going to be increasingly foundational to agents becoming integrated into the economy.
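The tiering Nick sketches, organic post versus advertisement versus financial transaction, can be modeled as a simple policy table that a platform consults before letting an agent act. Everything below, the tier names and action names alike, is an illustrative assumption rather than any actual World ID or platform API:

```python
from enum import IntEnum

class AuthLevel(IntEnum):
    NONE = 0              # unauthenticated agent traffic allowed
    HUMAN_AUTHORIZED = 1  # a verified human delegated this agent
    KYC = 2               # full know-your-customer identity required

# Hypothetical per-action policy: riskier actions demand stronger credentials.
POLICY = {
    "organic_post": AuthLevel.NONE,
    "buy_ad": AuthLevel.HUMAN_AUTHORIZED,
    "stock_trade": AuthLevel.KYC,
}

def may_act(action: str, presented: AuthLevel) -> bool:
    """Platform-side check: does the agent's credential meet the bar
    the policy sets for this action?"""
    return presented >= POLICY[action]

assert may_act("organic_post", AuthLevel.NONE)        # agents post freely
assert not may_act("buy_ad", AuthLevel.NONE)          # ads need a human sign-off
assert may_act("stock_trade", AuthLevel.KYC)          # trades need full identity
```

Because the levels are ordered, a stronger credential automatically satisfies any weaker requirement, which matches the escalating tiers described above.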

 

Jon Bateman: As we wrap up, I'd love to ask you about the geopolitics of all of this. And maybe here's one way to think about it. We've been describing the internet as having these problems and needing some kind of solution. But increasingly, there really isn't one internet, right? We're seeing a kind of fragmentation or balkanization of the internet into different spheres. I mean, China and the US are increasingly just going to be separate internets, with separate service providers, obviously different ISPs, different rules, and a lot of the world is going to be choosing whether they want to be on the US internet, the Chinese internet, some combination, Europe, India, and so forth. So do you expect a solution like Tools for Humanity's to truly have global implementation and be one of those core protocols, like TCP/IP for the super nerds out there, that really is common across the global internet? Or do you think it's likely that different parts of the world will choose different authentication systems, because of different political choices they want to make about the level of anonymity they find acceptable, or because of geopolitical distrust and not wanting to give the adversary some kind of access to sensitive information?

 

Nick Pickles: I think it will come down, in large part, to the models of governance for the internet that run in certain parts of the world. Where governments see the internet as fundamentally an open technology, and where they want to preserve their citizens' ability to use the internet freely with maximum privacy, then this technology achieves the goal we set out to achieve: solving the challenge of how you distinguish human from non-human internet traffic in a way that protects the privacy of their citizens. To me, for the majority of the planet, democratic norms, free societies, the way we've built this technology is compatible with a freedom-based society. There are some parts of the world that will say they want to take a different approach, and there are some parts of the world that will have a hybrid approach. Some countries may say there's a government solution, a private sector solution, a multitude of solutions. That's the great thing about the internet: you can build technology anywhere in the world and then scale it. Our job is to explain how this works to governments around the world. And particularly, governments themselves are now seeing themselves as victims of fraud, victims of people using AI, using bots, to attack government. This becomes infrastructure that actually, in some cases, strengthens democratic society, strengthens freedom at home. And there are some countries that will take a different path, and they're countries where we won't operate.

 

Jon Bateman: There's going to be a group of countries out there that are somewhere in the middle and will be considering their options, right? They might consider whether it's this orb or some other privacy-preserving infrastructure to signal humanity and authenticity, or they might consider a Chinese offering or a Russian offering that is based on completely different principles having to do with total surveillance and control. That will be a diplomatic fight in the future as to who wins those countries. Final question for you, Nick. Will this work? I mean, you're at a startup. This is early days. We started off with the sense of doom that many people out there have, that the internet is in crisis and flailing, if not failing. What do you think? Are your kids going to have a better internet? Will this work?

 

Nick Pickles: So I think the answer is: it's already working. It's a question of scaling. If you live in Japan right now, you can use this on Tinder. You can prove you're human. If you use Razer, the gaming service, you can already use this. We're talking to a number of commercial service providers about how this technology can improve their services. So the question is, as you start to get those integrations with services people use every day, people start to feel the benefit of having a space online where they know they're interacting with real humans. And to your point, people who've been seeing service degradation now start to see service improvement. That then becomes a competitive advantage for those services. So it's not a question of if this technology scales, it's a question of when it scales. Because the fundamental challenge we're trying to solve is so existential to the future of the internet that, in some way or other, every online service and every government is going to need to figure out a way of tackling this problem.

 

Jon Bateman: I think that's my takeaway from this conversation. I don't know which technology is the solution to this problem yet. I know there's a very serious problem. And if we don't get ahead of it, we're gonna suffer the consequences. I also know that most of the solutions are gonna create more problems of their own and then we're going to have to solve those. So Nick, good luck to you and thanks for joining the World Unpacked.

 

Nick Pickles: My pleasure, thank you.

Hosted by

Jon Bateman
Senior Fellow and Co-Director, Technology and International Affairs Program

Featuring

Nick Pickles
Chief Policy Officer, Tools for Humanity

Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.
