Expert Opinions: A Discussion on the Open Letter by the Future of Life Institute
On April 13, 2023, Turing conducted a LIVE webinar titled “Building Responsible AI: The Human Way” to discuss the open letter by the Future of Life Institute. The moderator for the event was Sandip Parekh, VP of Turing, and the panelists were:
- Paula Griffin, Director of Product, Turing
- Kai Du, Director of Engineering, Turing
- Srinath Sridhar, CEO & Co-founder, Regie.ai
- Susannah Shattuck, Head of Product, Credo.ai
The webinar kicked off with a discussion of the recent open letter published by the Future of Life Institute, in which a group of generative AI (GenAI) researchers raised concerns about the risks and implications of GenAI.
Introduction
The letter, signed by a group of AI researchers and industry leaders, outlined the risks of creating superintelligent AI, which could potentially pose an existential threat to humanity. It raised concerns about the possibility of AI systems being used for malicious purposes, including cyber warfare, financial manipulation, and social engineering. The panelists discussed the contents of the letter and shared their opinions on the potential consequences of creating self-improving artificial intelligence that could surpass human intelligence. Their perspectives were insightful, and it was exciting to hear from experts in the AI industry.
Panelist Pointers
- Srinath Sridhar, the first to speak, pointed out that Regie takes ethics and bias very seriously in everything it develops. He disagreed with the notion that artificial general intelligence (AGI) is close, stating that it is at least years, if not decades, away. A six-month moratorium on AI development would therefore not make a significant difference. He also argued that AI development does not pose the containment problem that nuclear or biological research does: it is unlikely that AI will escape the labs and take on a life of its own to threaten humanity. Srinath suggested that regulation on the product side, similar to FCC or FDA regulation, is more effective than regulation on research.
- Paula Griffin added that the letter was interesting because even though technological advances always seem to come out of nowhere, they are often a natural progression for people who have been working in the field. She compared the introduction of BERT embeddings in 2018 to the current situation, stating that this is not a sudden breakthrough but a natural evolution. Paula also noted that a six-month pause would not make much of a difference, as AI development will continue to advance regardless.
- Susannah Shattuck agreed with the other panelists that a six-month pause is not the solution, but pointed out that the letter is not just about the pause: it also contains feasible suggestions for establishing safety measures and guardrails around the development of AI systems. She suggested pushing companies to make outputs generated by a generative AI system or a large multimodal model detectable, for example through watermarks. Emphasizing that AI development should be done transparently, responsibly, and ethically, Susannah suggested focusing on building trust in AI systems by making them transparent and accountable.
- Kai Du acknowledged that a six-month grace period in which no action is taken may not be feasible, but noted that it can draw the government's attention and prompt policy changes that prepare for future challenges. He agreed with Sandip's point that the news was already dated: Databricks had released its Dolly model since the letter was made public. Kai added that we may see a lot of exciting progress in the coming weeks and concluded that it is challenging to pause everything given the rapid pace of development in the field.
Webinar Audience Poll Results
Sandip noted that, of a live audience of 75+ participants, almost 60% answered YES to the poll question: “Do you think there should be a pause on the training of AI systems more powerful than GPT-4 for at least 6 months, as suggested in the statement?”
Sandip then went on to detail some of the very interesting viewpoints being taken on both sides of the conversation.
Conclusion
It’s clear that the open letter has sparked a lot of conversation in the tech industry, and it is crucial to have these discussions about responsible AI development. The panelists agreed that a six-month pause on AI development is not a viable way to ensure that AI systems are developed responsibly. Instead, the focus should be on building transparent, accountable, and ethical AI systems that are trustworthy, supported by regulatory frameworks that ensure safe and ethical development. Overall, the discussion provided valuable insights into the future of AI development and the need to build responsible AI.
Request Full Webinar Recording
Thanks for reading this far! If you enjoyed this content, don’t miss the rest of the webinar, which focused on the need for building responsible AI and ways to achieve it. The panelists offer their insights on transparency, responsibility, and ethics in AI development, all of which are crucial for building trustworthy AI systems. The full 60-minute discussion addresses many other essential aspects of the future of GenAI.
Click here to gain access to the full 60-minute discussion.
Custom Assessment Offer
As a limited-period offer, Turing has announced a FREE custom AI assessment worth $$$. To get your assessment or to learn more about the details, please reach out to benazir.waheed@turing.com.
Tell us the skills you need and we'll find the best developer for you in days, not weeks.