OpenAI CEO on AI Safety Concerns and the Truth About GPT-5 Development

OpenAI CEO Sam Altman recently addressed concerns surrounding the development of GPT-5, the next iteration of the language model that powers ChatGPT.

Sam Altman is a well-known entrepreneur and investor in Silicon Valley. He co-founded the location-based social networking app Loopt in 2005, which was later acquired by Green Dot Corporation. He also served as the President of the startup accelerator Y Combinator from 2014 to 2019, where he worked closely with a wide range of startups and helped to shape the direction of the tech industry. Altman is known for his outspoken views on the future of technology, including his advocacy for universal basic income and his belief in the potential of artificial intelligence to transform society. He is an active investor in various startups, with a particular focus on companies working on cutting-edge technologies like autonomous vehicles and blockchain.

According to an article by James Vincent for The Verge published on April 14, OpenAI CEO and co-founder Sam Altman confirmed on April 13 that his company is not currently training GPT-5, the presumed successor to its AI language model GPT-4, which was released in March. Altman’s comments were made during the “Future of Business with AI” event, which took place at the Samberg Conference Center at MIT, where he addressed an open letter circulating among the tech world requesting labs like OpenAI to pause development on AI systems “more powerful than GPT-4.”

As you may remember, the open letter was published by the Future of Life Institute on March 29, and it was titled “Pause Giant AI Experiments: An Open Letter.” It called for a temporary halt to the training of AI systems more powerful than GPT-4, sparking a debate within the tech community.

The open letter stated, “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.” The authors called for a public and verifiable pause of at least six months on training AI systems more powerful than GPT-4. They also urged AI labs to develop shared safety protocols to ensure AI systems’ safe design and development.

Among the notable signatories of the open letter were Elon Musk, CEO of SpaceX, Tesla, and Twitter; Steve Wozniak, Co-founder of Apple; Yoshua Bengio, Turing Award-winning AI researcher; and Gary Marcus, AI researcher and Professor Emeritus at New York University.

Altman said that the letter was “missing most technical nuance about where we need the pause” and clarified that OpenAI is not currently training GPT-5: “We are not and won’t for some time. So in that sense, it was sort of silly.” However, he emphasized that OpenAI is working on other projects built on top of GPT-4 and is weighing their safety implications. Altman stated, “We are doing other things on top of GPT-4 that I think have all sorts of safety issues that are important to address and were totally left out of the letter.”

https://youtube.com/watch?v=4ykiaR2hMqA

The Verge notes that the debate over AI safety is complicated by the difficulty of measuring and tracking progress. Vincent points to the fallacy of version numbers: the assumption, nurtured in the world of consumer tech, that numbered updates reflect definite, linear improvements in capability. When this logic is applied to systems like OpenAI’s language models, it breeds confusion and potential misinformation.

Vincent argues that Altman’s confirmation that OpenAI is not currently developing GPT-5 may offer little consolation to those worried about AI safety: the company is still expanding the capabilities of GPT-4, and others in the industry are building similarly ambitious tools.

In conclusion, Vincent writes, “Even if the world’s governments were somehow able to enforce a ban on new AI developments, it’s clear that society has its hands full with the systems currently available. Sure, GPT-5 isn’t coming yet, but does it matter when GPT-4 is still not fully understood?”
