Edward Tian has had a busy few months.
In December, the Princeton student used his holiday break to create GPTZero, a tool to help educators determine whether student essays were written with OpenAI’s ChatGPT. Buoyed by growing concerns about the emerging technology and the nascent AI boom, Tian’s tool went viral, garnering more than 6 million users in just a few months.
Since then, he’s fielded calls and meetings from scores of investors, built a team and a startup around GPTZero to refine the bot detector, and secured millions of dollars in funding for its new product: Origin, a web extension that he said can detect AI-generated text on web pages. It’s already gotten the attention of media moguls, tech founders, and deep-pocketed venture capitalists.
Oh, and on top of all this, he’s graduating from college soon, too.
“It’s definitely been a wild last few months,” Tian told The Daily Beast. “It’s definitely been crazy.”
The same can be said for the tech world more broadly. Since the release of ChatGPT in November 2022, it seems as though the entire world immediately forgot about the old fads of crypto and the metaverse, and started betting big on generative AI like chatbots and image generators. While businesses as big as Google and Microsoft are going all in on AI, there’s a parallel industry of tools to fight and detect these bots quietly growing alongside it—and GPTZero is a large part of that effort.
The startup’s new tool Origin represents another weapon in this growing AI arms race. The app currently works as a Chrome extension that allows users to analyze any text they come across online to see whether or not it was AI-generated. Origin isn’t just helpful for educators trying to suss out if their students generated an essay on the Battle of Hastings; according to Tian, it could also help people like journalists and tech watchdogs identify AI-generated misinformation online.
“Finding the source of information is critically important in a world of fake news,” Tian said. “If you don’t know the source, then how do you trust the information you’re consuming?”
Tian describes it as a way of bolstering media and digital literacy in an age when people are growing distrustful of the things they see online, and with good reason. Media companies like BuzzFeed and CNET have already begun quietly churning out AI-generated content. Last month, Insider editor-in-chief Nicholas Carlson announced that the outlet’s editorial staff would begin experimenting with bot-written content.
Having tools that can discern whether something was generated by a bot becomes just as important as the bot itself, if not more so. That’s why the likes of former New York Times CEO Mark Thompson and former Reuters CEO Thomas Glocer have begun investing in GPTZero. As the world begins to grapple with the full ramifications of generative AI, it becomes more important than ever to separate what was created by a human from what was produced by a string of code.
For Tian, the impact of tools like Origin can be summarized by why he developed GPTZero in the first place: “Humans deserve to know when the writing isn’t human.”
“The value of human writing is there, but it’s undermined if people can't tell the difference in the information they’re consuming,” Tian said. “And I think that it’s been the case that our eyes are not enough to see the difference anymore.”
The AI boom has created strange bedfellows. During its I/O developer conference on Wednesday, Google announced a whole host of AI-injected products to help users draft emails, generate images for slideshows, and even come up with funny captions for pictures.
However, the company also announced that it would add a watermark to its AI-generated pictures so users can tell whether an image was created with its image-generation tool. The watermark lives in the image’s metadata, though, so there’s no easy way to tell at a glance whether an image was bot-made without downloading it and digging through that metadata.
Google also said it would flag AI-generated images in Google Images results, letting users spot whether a picture was created by a bot. This summer, the company is also launching a new U.S. tool called “About this image,” which essentially lets users reverse image search a picture to see where it first appeared on the internet, based on Google’s indexing.
The irony here, of course, is that Google is also going full bore into AI. Not only is it seemingly infusing its proprietary PaLM 2 into nearly its entire suite of products, but the company is also releasing its Bard chatbot in more than 180 countries. This will put powerful and potentially dangerous bots in the hands of millions—or even billions—of people.
Google is not the only firm that seems to be playing both sides so it always ends up on top. Jack Altman, the brother of OpenAI CEO Sam Altman, has invested in GPTZero using the family’s investment firm Altman Capital. Tian told The Daily Beast that, while Sam was interested, he couldn’t get involved directly due to “conflicts of interest”—presumably due to the fact that he’s the CEO of one of the most prominent AI companies in the world.
It’s clear that as AI rises, so will the tools and services aimed at fighting and detecting it, like Origin. But more importantly, they could give us an opportunity to confront a sort of digital solipsism that seems to be permeating the world as AI grows more powerful and prevalent. We need to know whether the people we’re interacting with, the content we read, and the media we engage with are real; otherwise, the internet would be a really lonely place.
“I feel like human society will lose its impetus for progress and coming up with new things if everyone’s just using the AI output that takes what’s already on the internet and regurgitates it,” Tian said. “There’s internal value in having a human on the other side, that I think will never change.”