Gaurav Bharaj, Ph.D. '17
Gaurav Bharaj has always believed technology can help unlock human creativity. As a Ph.D. candidate in computer science at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), Bharaj was part of the Visual Computing Group of Hanspeter Pfister, An Wang Professor of Computer Science. His research included computational design of aesthetically pleasing walking robots, as well as musical instruments whose shape affects the sounds they produce. His studies at SEAS augmented his master’s degree from the University of Pennsylvania, where he focused on computer graphics and game technology, as well as his undergraduate degree in computer science from Punjabi University in India. In between UPenn and SEAS, he spent two years at the Max Planck Institute for Informatics in Germany, where he researched 3D reconstruction and character animation.
“A lot of my motivation has been because I want to help people be more creative and tell better stories,” Bharaj said. “At Harvard, I was trying to understand the connections between graphics, computer vision and the real world. What I got out of Harvard was how to be an independent researcher and thinker. Those are the real tools you learn during your Ph.D. It gave me a well-rounded view of how to be a good scientist.”
After leaving Harvard, Bharaj spent two years at Technicolor, then in 2019 joined the AI Foundation in San Francisco. That was where he realized artificial intelligence (AI), especially generative AI programs such as DALL-E or Stable Diffusion, had the potential both to greatly enhance and to harm creative expression.
“We were trying to create virtual avatars that were hyper-realistic, would generatively think, talk like a human and look like a human,” he said. “We realized that like any technology, this would lead to malicious use-cases, and we wanted to understand if we could protect against that. Once we had the basic research done, we had to figure out how you commercialize it and give people a way to make more-informed decisions.”
That push and pull between the potential benefits and harms of AI has guided Bharaj’s career. In 2021, he co-founded and became chief scientist for Reality Defender, a company that combats fraud committed with AI-generated media. The start-up expanded its Series A fundraising round to $33 million last fall and continues to roll out products applicable to a wide range of industries.
“A lot of our work has gone into creating continuous processes for data gathering, foundation model training and automatic deployment so we’re keeping up with the curve,” Bharaj said. “As the chief scientist, my role is to make sure we have processes in place so we can maintain a long-term vision of where these fields are heading.”
While text-to-image programs are often the most prevalent examples of AI-generated content, the need to detect AI extends beyond the commonly cited example of mimicking the style of a well-known, potentially copyright-protected artist. A deepfaked voice could be used to fraudulently transfer funds from a bank, or AI-generated resumes could bog down a company’s hiring process. AI fraud is estimated to cost billions of dollars across the U.S. each year, and rates of fraud are likely to increase.
“Our largest customers are enterprises such as banks, where they do financial transactions and need to make sure the person calling into the bank is real,” he said. “Or it’s with large media houses, which either have a lot of data or want to publish something and need to know if it’s generated. The biggest challenge is scale, creating a system that’s able to look at the full spectrum of media generation methods and say what is real.”
Despite the risks of generative AI fraud, Bharaj wouldn’t describe the technology as a pure harm. Even as he was building Reality Defender, he also spent two years as chief scientific officer for Flawless AI, which offers AI-powered film editing software.
“I don’t think it’s a good idea to limit people’s creativity by not giving them a new tool,” he said. “At the end of the day, AI is just another tool.”
For Bharaj, the best way to balance AI’s potential benefits and harms is to know with certainty how the data sets that power generative AI models were built.
“Whether it’s text or multimedia, we need to be able to backtrack and determine where the data came from,” he said. “The underlying question is whether we’re enabling artists’ creative expression, and also meaningfully paying them, or are we just taking their work, being irresponsible and not properly compensating them. These are people who’ve spent a lot of time and resources creating something they wanted to express, so companies need to be responsible in how they use those creations as data. Regulation in this space will come with more awareness of what AI is capable of.”
Moving forward, Bharaj remains committed to developing technology that empowers creativity, drives ethical standards in AI, and builds a trustworthy digital future.
Press Contact
Matt Goisman | mgoisman@g.harvard.edu