Our Services Engineering team is looking for a Robustness Analysis
Researcher to design, coordinate, execute, and monitor offensive security
activities and penetration tests for ML systems.
In this role, you will work with Data Scientists, Product Managers,
Engineers, and others to review proposed features and models, assess
potential risks, and identify who will be impacted. You will also develop
mitigation plans, use your expertise to help set policy, and coordinate
offensive security activities and penetration tests. You will shape
customer-facing experiences by delivering engaging oral, written, and visual
presentations of your findings.
Key Qualifications
- Strong understanding of machine learning principles.
- Experience in identifying vulnerabilities, anomaly detection, or red teaming.
- Expertise in bias, discrimination, or other safety issues in AI and ML systems.
- Strong verbal and written communication skills, with the ability to work
effectively across internal and external organizations and virtual teams.
- Ability to work independently as well as lead a team of technical testers on
penetration testing and red team engagements.
- Scripting/programming experience (e.g., Python, PowerShell, Java, Perl).
- Experience communicating complex security concepts to technical and
nontechnical audiences.
Preferred:
- Knowledge of Large Language Models.
Education
Degree (advanced degree preferred) in Computer Science, Machine
Learning, Statistics, Social Science, Human-Computer Interaction, or a related
field.