FNBC

Google DeepMind boss hits back at Meta AI chief over ‘fearmongering’ claim

Google DeepMind boss Demis Hassabis has hit back at Meta's chief AI scientist Yann LeCun, who accused leading AI labs of "fearmongering" about the technology's risks. Hassabis defended DeepMind's approach to artificial intelligence (AI) and denied any intention of achieving "regulatory capture" in the field.

The debate over AI regulation has gained traction in recent years as scientists, industry leaders, and policymakers grapple with the challenges posed by the technology. Some argue for strict oversight and regulation to prevent potential dangers, while others advocate a lighter touch, emphasizing the positive impact AI can have on society.

In a recent interview with CNBC, Hassabis expressed his frustration with LeCun's characterization of safety warnings as fear-based. He asserted that DeepMind is committed to the safe and ethical development of AI and dismissed the claims of regulatory capture. Hassabis pointed to DeepMind's ongoing collaboration with external organizations, policymakers, and academic institutions on the complex issues surrounding AI, noting that the company actively encourages public engagement and seeks input from a range of stakeholders to shape its research and development.

"We're not trying to achieve regulatory capture. We want to have a debate. We want as many people's opinions in the bucket as possible," Hassabis emphasized.

DeepMind, a London-based AI research lab acquired by Google in 2014, is known for its advances in machine learning and for AI systems that excel at complex tasks. The company has nonetheless faced criticism and skepticism over the potential risks of AI's rapid progress. LeCun, a Turing Award winner and professor at New York University, has been vocal in dismissing warnings about superintelligent AI, arguing that the greater danger lies in a handful of companies shaping regulation in their own favor.
In a recent interview, LeCun warned that if AI research is dominated by a small number of organizations, it could result in "regulatory capture" and hinder the establishment of effective rules. While Hassabis acknowledged the need for caution, he argued that collaboration and open discussion are key to addressing safety concerns and to preventing a concentration of power in the AI industry. He stressed that DeepMind aims to be transparent and actively seeks diverse perspectives on AI development.

"We don't want a small number of organizations deciding what AI will do," Hassabis said. "We need a plurality of people, society, and a lot of different regulators from around the world to really contribute to this."

Hassabis's remarks coincide with growing calls for AI governance and regulation. As AI applications advance and become integrated into more sectors, there is increasing recognition of the potential implications and risks involved. Efforts to establish guidelines and frameworks for AI development are under way globally, with leading organizations, including the United Nations, discussing how to ensure the responsible use and deployment of AI technologies.

Hassabis acknowledged that challenges lie ahead in AI development and stressed the importance of collaboration and inclusivity in meeting them, emphasizing DeepMind's commitment to working with policymakers, academics, and other stakeholders to shape AI's future. DeepMind's work with the NHS in the UK has been one example of that approach: its Streams app, developed in partnership with London's Royal Free Hospital, helps clinicians identify patients at risk of acute kidney injury. The partnership drew criticism over its handling of patient data, but it also highlighted the potential benefits of integrating such technology into healthcare.
Critics argue that companies like DeepMind must be held accountable for their actions and for the risks associated with AI. Concerns have been raised over data privacy, the need for transparency, and potential biases in AI algorithms. Hassabis acknowledged these concerns and emphasized the importance of balancing the benefits of AI against its potential harms, stressing that advances in the technology should be guided by ethical considerations and the public interest.

The debate over AI regulation is likely to continue as the technology evolves. It is vital for industry leaders, policymakers, and society as a whole to engage in constructive dialogue and work together to establish robust frameworks that balance innovation and protection. Hassabis's response to LeCun reflects DeepMind's stated commitment to responsible AI development and its effort to bring a wider range of voices into shaping the field's future. As AI becomes increasingly intertwined with daily life, collaboration and effective governance will be essential to maximizing its benefits and mitigating its risks.


