Meta is fundamentally reshaping its approach to product safety and privacy, with plans to have AI systems handle up to 90% of risk assessments for its widely used apps, including Instagram and WhatsApp.

According to an NPR investigation (https://www.npr.org/2025/05/31/nx-s1-5407870/meta-ai-ai-facebook-instagram-risks), the goal is to accelerate product development. While users may see new features roll out faster, the move raises concerns about the depth of safety and privacy scrutiny.

The new AI-driven process, which NPR reports has been rolling out through April and May, involves product teams completing a questionnaire and receiving an “instant decision” from the AI that identifies risk areas and sets the requirements a product must meet before launch.
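NPR’s reporting does not describe the system’s internals, but the flow it outlines (questionnaire in, instant risk decision out) resembles a triage gate. The sketch below is a minimal illustration under that assumption only; the field names, scoring rule, and threshold are all hypothetical, not Meta’s implementation.

```python
# Minimal illustrative sketch of a questionnaire-based triage gate.
# Nothing here reflects Meta's actual system; the fields, scoring
# rule, and threshold are hypothetical.
from dataclasses import dataclass

LOW_RISK_THRESHOLD = 0.2  # hypothetical cutoff for auto-approval

@dataclass
class Questionnaire:
    product: str
    # True marks an answer that flags a potential risk area, e.g.
    # {"handles_minor_data": False, "introduces_new_data_sharing": True}
    answers: dict[str, bool]

def risk_score(q: Questionnaire) -> float:
    """Toy score: the fraction of answers that flag a risk."""
    if not q.answers:
        return 1.0  # no information: treat as risky and force human review
    return sum(q.answers.values()) / len(q.answers)

def instant_decision(q: Questionnaire) -> str:
    """Auto-approve low scores; escalate everything else to human reviewers."""
    if risk_score(q) <= LOW_RISK_THRESHOLD:
        return "auto-approved with attached launch requirements"
    return "escalated to human privacy/safety review"

print(instant_decision(Questionnaire("demo feature", {
    "handles_minor_data": False,
    "introduces_new_data_sharing": False,
})))  # -> auto-approved with attached launch requirements
```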

The shift has unsettled some current and former employees, who warn that less rigorous scrutiny and opposition “means you’re creating higher risks.” In response, Meta has asserted that automation will be confined to “low-risk decisions,” with “human expertise” reserved for “novel and complex issues.”

The Shift To AI-Driven Oversight

This automation of risk assessment is a key part of Meta’s broader, aggressive strategy to embed AI across its operations, a direction increasingly evident since early 2025. The company’s substantial commitment includes a planned $65 billion investment in AI this year.

This financial dedication is coupled with significant corporate restructuring, which involves Meta doubling down on machine learning by planning to hire hundreds of AI engineers while cutting 5% of its overall workforce.

Michel Protti, Meta’s chief privacy officer for product, announced the change in a March internal post, explaining that it is about “empowering product teams” and “evolving Meta’s risk management processes.”

The goal, as the NPR investigation notes, is to “simplify decision-making” by automating risk reviews in the vast majority of cases. This internal push for speed and efficiency through AI also extends to content moderation.

Meta’s latest quarterly integrity report, cited by NPR, claims that Large Language Models (LLMs) are “operating beyond that of human performance for select policy areas” and are used to screen posts the company is “highly confident” do not violate its rules.
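The report does not reveal how that “highly confident” screening is implemented. As a rough illustration only, a common pattern is to auto-clear content when a classifier’s violation probability falls below a strict ceiling and route everything else to humans; the threshold below is an assumption, not a disclosed figure.

```python
# Illustrative confidence-gated screening, not Meta's actual pipeline.
# Assumes some upstream model returns a probability that a post
# violates policy; the 1% ceiling is a hypothetical value.
VIOLATION_PROB_CEILING = 0.01

def screen_post(violation_probability: float) -> str:
    """Skip human review only when the model is highly confident the post is clean."""
    if violation_probability <= VIOLATION_PROB_CEILING:
        return "auto-cleared: no human review"
    return "queued for human moderation"

print(screen_post(0.004))  # -> auto-cleared: no human review
print(screen_post(0.35))   # -> queued for human moderation
```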

However, internal documents reviewed by NPR suggest Meta is considering automating reviews even for sensitive areas such as AI safety, youth risk, and “integrity” (which covers violent content and misinformation), despite the stated focus on low-risk automation.

Balancing Innovation, Safety, and Regulatory Scrutiny

The drive toward AI-led oversight is also shaped by intense competition in the industry. The performance of rival AI models, such as DeepSeek’s R1, has reportedly created a sense of urgency within Meta.

One engineer previously described a frantic scramble inside the company to match that efficiency. This competitive environment has been a significant factor in Meta’s strategic decisions, including leadership changes such as Loredana Crisan, formerly head of Messenger, now overseeing the company’s generative AI division.

Meta’s approach to AI governance has been developing for some time. In February, the company introduced its Frontier AI Framework, a system designed to categorize AI models into “high-risk” and “critical-risk” groups.

At its launch, Meta stated its intent: “Through this framework, we will prioritize mitigating the risk of catastrophic harm while still enabling progress and innovation.”
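Public descriptions of the framework outline the two tiers but not their mechanics. A minimal sketch of how such a tiered taxonomy might be encoded follows; the handling rules are illustrative assumptions, not Meta’s documented procedures.

```python
# Hypothetical encoding of a two-tier frontier-risk taxonomy.
from enum import Enum

class FrontierRisk(Enum):
    HIGH_RISK = "high-risk"
    CRITICAL_RISK = "critical-risk"

def handling_policy(tier: FrontierRisk) -> str:
    # Assumed rules for illustration; actual procedures may differ.
    if tier is FrontierRisk.CRITICAL_RISK:
        return "stop development until the risk can be reduced"
    return "limit release until mitigations are in place"

print(handling_policy(FrontierRisk.HIGH_RISK))
```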

This initiative was, in part, a response to past incidents, such as misuse of its Llama models, and to growing pressure from regulations like the EU’s Digital Services Act (DSA).

Zvika Krieger, a former Meta director, cautioned in comments to NPR that if reviews are streamlined too aggressively, the quality of the scrutiny and its outcomes will suffer. Interestingly, an internal Meta announcement indicated that decision-making and oversight for products and user data in the EU will remain with Meta’s European headquarters in Ireland, potentially insulating EU users from some of these changes, according to the NPR report.

Broader AI Integration and Partnerships

Meta’s AI ambitions extend beyond internal processes and consumer-facing products. In November 2024, the company updated its “acceptable use” policy to allow US defense companies, including Lockheed Martin, Booz Allen Hamilton, Palantir Technologies, and Anduril Industries, to use its large language AI models.

This includes a partnership with Anduril Industries to develop advanced military equipment, such as AI-powered helmets with VR and AR capabilities. Meanwhile, Meta’s Q1 2025 Community Standards Enforcement Report highlighted a 50% reduction in enforcement mistakes compared to Q4 2022, which the company attributes in part to audits verifying the accuracy of its automated review systems.
