OpenAI is urgently hiring a new Head of Preparedness, whose core responsibility is to anticipate the potential harms and misuse risks posed by the company's models and to set the direction for its overall safety strategy. The position pays up to $555,000 a year plus equity.

The hiring comes amid ongoing safety controversies at OpenAI. Over the past year, the company has repeatedly faced accusations over the effects of products such as ChatGPT on users' mental health, and has even been drawn into several wrongful-death lawsuits. CEO Sam Altman acknowledged in a social media post that 2025 offered a preview of models' impact on mental health, and that as capabilities improve, "real challenges" are mounting. He stressed that the role is "critical" right now, while conceding that the job will be "stressful" and that the new hire will be thrown into high-intensity work immediately.
We are hiring a Head of Preparedness. This is a critical role at an important time; models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges. The potential impact of models on mental health was something we saw a preview of in 2025; we are just now seeing models get so good at computer security they are beginning to find critical vulnerabilities. We have a strong foundation of measuring growing capabilities, but we are entering a world where we need more nuanced understanding and measurement of how those capabilities could be abused, and how we can limit those downsides both in our products and in the world, in a way that lets us all enjoy the tremendous benefits. These questions are hard and there is little precedent; a lot of ideas that sound good have some real edge cases. If you want to help the world figure out how to enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can't use them for harm, ideally by making all systems more secure, and similarly for how we release biological capabilities and even gain confidence in the safety of running systems that can self-improve, please consider applying. This will be a stressful job and you'll jump into the deep end pretty much immediately.
According to the job listing, the Head of Preparedness will lead the technical strategy for OpenAI's safety framework, focusing on tracking and guarding against frontier capabilities that could cause novel, severe harms. OpenAI's safety leadership has seen frequent turnover over the past two years: former head Aleksander Madry was reassigned in July 2024, and his duties were briefly taken over by executives Joaquin Quiñonero Candela and Lilian Weng; Weng subsequently left the company, and Candela moved to a recruiting role in July 2025. The new hire is widely seen as a key step toward restoring stability to the function amid the safety turmoil.