Microsoft President Brad Smith raises concerns about deep fakes
In a recent address delivered in Washington, Microsoft President Brad Smith raised concerns about the rise of deep fakes and emphasized the need for measures to distinguish between real and AI-generated content. Smith’s speech, aimed at addressing the growing demand for AI regulation in the wake of OpenAI’s ChatGPT, shed light on the potential risks associated with the nefarious use of AI technology.
Foremost among Smith’s concerns were deep fakes: convincingly fabricated media that can mislead and deceive viewers. He stressed the urgency of tackling the issue, particularly in light of foreign cyber-influence operations already being conducted by nations such as Russia, China, and Iran. Smith called for steps to safeguard against the use of AI to manipulate genuine content with the intent to deceive or defraud individuals.
The Microsoft President further advocated for licensing critical AI applications, accompanied by obligations to uphold security, including physical security, cybersecurity, and national security. His proposal aimed to ensure that the deployment of AI technologies would be governed by strict regulations, mitigating potential risks to individuals, organizations, and national interests.
Highlighting the need for enhanced export controls, Smith offered a five-point blueprint for governing AI: implementing new government-led AI safety frameworks, requiring safety brakes for AI systems, developing broad legal and regulatory frameworks, promoting transparency and access to AI, and pursuing public-private partnerships to address societal challenges.
Meanwhile, lawmakers in Washington have been grappling with how to legislate AI control while technology companies of all sizes race to bring increasingly versatile AI solutions to market. The need to balance innovation with the risks of unregulated AI development has made comprehensive and effective regulatory frameworks a pressing priority.
As the debate surrounding AI regulation intensifies, Brad Smith’s speech resonates with the growing concerns over deep fakes and the responsible use of AI. His call for greater transparency and accountability and the need to protect against malicious activities provides an impetus for policymakers, industry leaders, and researchers to collaborate in establishing a regulatory framework that balances technological progress with societal well-being.
Via: Ars Technica