Research Engineer, Responsibility (CBRN)

At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.

Snapshot

This role is for an engineer focusing on Chemical, Biological, Radiological and Nuclear (CBRN) risks at Google DeepMind, with a specialism in evaluating and mitigating CBRN risks. Both aspects are crucial for decision-makers to ensure our model releases are safe and responsible, particularly concerning potential misuse or unintended consequences related to CBRN threats. The role involves developing, implementing, and maintaining evaluations and mitigations, as well as the infrastructure that supports them.

About Us

Artificial Intelligence could be one of humanity's most useful inventions. At Google DeepMind, we're a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

The role

We're looking for engineers who are interested in creating and executing the evaluations and mitigations in the CBRN domain that we use to make release decisions for our cutting-edge AI systems. You will develop and maintain an understanding of trends in AI development, governance, and sociotechnical research. Using this understanding, you will help design new evaluations and communicate the results clearly to advise and inform decision-makers on the safety of our AI systems. Throughout this work, you will collaborate closely with other engineers and research scientists, both with researchers focused on developing AI systems and with experts in AI ethics and policy.

Key responsibilities:

+ Design, develop, and conduct robust evaluations to specifically test potential CBRN risks in frontier AI models, in collaboration with domain experts.
+ Develop and maintain evaluation infrastructure that is highly secure and performant.
+ Collaborate with internal and external domain experts to develop robust evaluation datasets.
+ Develop effective mitigation methodologies to address identified risks.
+ Clearly communicate results, risks, and key priorities to both technical and leadership audiences, in collaboration with domain experts.
+ Engage and collaborate with internal and external subject matter experts across CBRN, AI policy and ethics, model development, security, safety, and science.
+ Develop and maintain an understanding of trends in AI development, CBRN threat landscapes, governance, and relevant sociotechnical research to inform the design of new evaluations.

About you

In order to set you up for success in this role, we look for the following skills and experience:

+ 5 years of experience in a people management or team leadership role for a technical team.
+ Bachelor's degree in a technical subject (e.g., computer science, engineering, machine learning, mathematics, physics, statistics) or a relevant scientific field with strong computational experience (e.g., bioinformatics, computational chemistry, biotechnology), or equivalent practical experience.
+ Experience designing distributed systems at scale.
+ Strong knowledge of and experience with Python.
+ Knowledge of mathematics, statistics, and machine learning concepts useful for understanding research papers in AI, and potentially in CBRN-related modeling or risk assessment.
+ Ability to present and explain technical results clearly to non-experts and leadership stakeholders.

In addition, some of the following would be an advantage:

+ Master's or PhD in a field relevant to CBRN risk assessment or AI safety.
+ Experience with the storage and handling of sensitive data.
+ Experience building evaluations or developing model mitigations.
+ Experience developing or working with safety-critical systems.
+ Experience in CBRN threat analysis, risk assessment, modeling, or mitigation strategies.
+ Familiarity with relevant datasets, benchmarks, or evaluation methodologies for CBRN risks in AI.
+ Experience with data analysis tools and libraries for large-scale evaluation.
+ Skill and interest in working on projects with many stakeholders, including scientific experts, policy teams, and engineers.
+ Knowledge of international and national regulations or guidelines related to CBRN materials or technologies.

The US base salary range for this full-time position is $248,000 - $349,000 + bonus + equity + benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.

Application deadline: June 27th, 2025

Note: In the event your application is successful and an offer of employment is made to you, any offer of employment will be conditional on the results of a background check, performed by a third party acting on our behalf. For more information on how we handle your data, please see our Applicant and Candidate Privacy Policy.
Job ID: 479285056
Originally Posted on: 6/1/2025