At a time when artificial intelligence (AI) is no longer a laboratory breakthrough but a geopolitical and developmental force, Google DeepMind is reshaping the way it works with governments. This week, it announced a national partnership for AI with the Government of India, which includes working with the Anusandhan National Research Foundation (ANRF) to make its scientific AI models more accessible, as well as supporting IIT-Bombay with a $50,000 grant to use Gemma to process Indic-language health administration and policy documents and build a novel ‘India-centric specialty database’.

For Owen Larter, senior director and head of frontier policy and public affairs at Google DeepMind, this is not just an expansion into a larger market; it reflects a view that India’s strong ties to the developing world make it uniquely catalytic, a country that should play a role in shaping how AI’s benefits are distributed more evenly across geographic regions. Yet, Larter argues, the responsibility to ensure AI works safely for everyone also lies with the companies creating frontier systems, who must actively ensure that governments understand what these technologies can do. Transparency, he suggests, is a prerequisite for effective regulation. Edited excerpts:
Q. Google DeepMind has been vocal about the extreme risks from advanced AI. How do you prioritize between near-term harms and long-term existential concerns in your policy work?
Owen Larter: This is a really important conversation, and obviously our mission is to develop advanced AI and bring it into the world responsibly. We are excited by how people are using this technology, such as leading Indian scientists using AlphaFold to develop new types of cancer treatments. If we want to continue to make progress, we need to make sure this technology is trustworthy, and we need to continue building the governance framework around it.
There’s a bit of a risk in splitting up the different risks that we need to address. It is going to be an ongoing journey to come up with a sustainable framework, but there are some principles we should work from. We need to develop a really solid scientific understanding of the technology: what it can do, its capabilities and its limitations. It is also important to work with partners to understand the impact of this technology when it is used in the real world, and to test mitigations.
This is really an approach we need to apply to any set of risks, whether it’s protecting child safety or ensuring our systems are useful across different languages, through to the significant risks of advanced frontier systems whose capabilities could be misused by bad actors to carry out cyberattacks or create biological weapons. DeepMind has had a frontier safety framework in place since 2024, which we iterate on over time. This will not be a static issue: AI governance will never be a solved problem, it is an ongoing journey.
Q. Are we seeing different regulatory philosophies emerging globally? Is convergence desirable, or should we expect regulatory pluralism?
Owen Larter: I think there’s definitely similarity in some places. All these different regulatory philosophies are trying to do the same thing: every country wants to use AI in its economy, but there are also risks that need to be understood and addressed. We are seeing some different approaches, where the EU has gone a little further than other jurisdictions. The US is taking a slightly different approach, and California and New York now have some state regulations addressing frontier models. It will continue to develop.
We want to engage and help governments around the world understand the technology. It is our responsibility to share information about what it can do today and where it is going. One piece of this week’s conversation that has been really encouraging is the focus on the importance of developing standardized best practices for testing systems for risks and implementing mitigations before a system is released into the world.
Q. The AI Safety Summit series represents international coordination. Which mechanisms are proving most effective at translating high-level commitments into policy action?
Owen Larter: The India AI Impact Summit in particular has been really important in highlighting some issues that have not been addressed as much in previous summits, particularly the importance of spreading access and opportunity with AI and making sure you are getting it into the hands of people. The discussions on multilingual AI that took place are necessary; this is something we are paying attention to and trying to do more on with our grant to IIT-Bombay. Regular discussion at the global level is really important, and I am really happy that it will be taken forward in Switzerland and then in the UAE.
Q. Given India’s strength in digital public infrastructure and the scale of its deployment, what unique contribution can it make to global AI governance discussions?
Owen Larter: It is absolutely fundamental that as technology matures, the discussion about how to use it also matures. It’s great that this summit series has been expanded a bit. I think India will be absolutely fundamental to how this technology is developed and used. It is clear that India is going to become an AI powerhouse in its own right. That’s why we are continuously investing here.
Q. At what point does a model count as ‘frontier’ for the purposes of governance?
Owen Larter: We need to think about different types of systems and develop an understanding of how they work and the risks they pose. From a legal perspective it is important to define the term, but conceptually it is easier to understand: we consider frontier systems to be the most advanced systems that exist at any one point. Of course, the limits are constantly being pushed and systems are becoming more capable.
One reason we are interested in frontier systems is that they can develop capabilities that create risks. Our framework is a monitoring mechanism: we continue to test these systems to see if they develop capabilities that could create some of these risks around biological weapons or cybersecurity, or gain capabilities that require attention to ensure that humans can continue to manage these systems safely. It’s interesting to see that this is rapidly becoming standard across the industry, and we are proud that we moved forward quickly with our frontier safety framework. Dialogue between industry, government and civil society on how to improve these frameworks will be vital to continued progress.