Staff Engineering Analyst, Generative AI, Trust and Safety
Minimum qualifications:
- Bachelor's degree or equivalent practical experience.
- 7 years of experience in data analysis, including identifying trends, generating summary statistics, and drawing insights from quantitative and qualitative data.
- 7 years of experience managing projects and defining project scope, goals, and deliverables.
- 7 years of analytics experience (e.g., BI, data engineering, data science) using SQL.
Preferred qualifications:
- Master's degree in a quantitative discipline.
- 7 years of experience with one or more of the following languages: SQL, R, Python, or C++.
- 7 years of experience with machine learning systems.
- Experience in tuning and applying Large Language Models for data labeling.
- Excellent written and verbal communication skills.
About the job
Trust & Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. On this team, you're a big-picture thinker and strategic team player with a passion for doing what's right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud cases at Google speed, with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and to ensure the highest levels of user safety.
As an Engineering Analyst in the Trust and Safety (T&S) Search, Assistant and Geo (SAGe) team, you will work to discover, measure and mitigate user trust risks in search products through scalable solutions. You will build relationships and partner closely with Engineers, Product Managers, Data Scientists and other functions. You will work with a team of high-performing analysts creating metrics, templates and datasets to improve trust and safety protections. You will learn about product design details, product policies and relevant quality signals. You will work on analyzing existing product protections, evaluating content and helping to improve policy definitions. You will also enable the deployment of key defenses to stop abuse, and lead process improvement efforts to improve speed and quality of response to abuse. You will resolve problems at scale either by working with engineers on automated product protections or through vendor support.
At Google we work hard to earn our users’ trust every day. Trust & Safety is Google’s team of abuse fighting and user trust experts working daily to make the internet a safer place. We partner with teams across Google to deliver bold solutions in abuse areas such as malware, spam and account hijacking. A team of Analysts, Policy Specialists, Engineers, and Program Managers, we work to reduce risk and fight abuse across all of Google’s products, protecting our users, advertisers, and publishers across the globe in over 40 languages.
The US base salary range for this full-time position is $174,000-$258,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.
Responsibilities
- Partner with Search Generative AI teams on project scoping, risk assessments and prioritization. Lead projects and cross-functional initiatives within Google, interacting with executive stakeholders from Engineering, Legal, Product teams and more.
- Build Large Language Model (LLM) based models that can evaluate content safety according to our product policies.
- Design and implement product metrics to benchmark user trust risks and track improvements over time. Create datasets for engineers to evaluate and improve sensitive content classifiers.
- Be an expert in search infrastructure, ranking signals, and search features. Deliver leadership and impact for the broader Trust and Safety Search, Assistant and Geo team.
- Be exposed to graphic, controversial, or upsetting content. Perform on-call responsibilities on a rotating basis, including weekend coverage/holidays as needed.
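To illustrate the kind of analytical work the responsibilities above describe (benchmarking automated safety labels against human "golden" ratings when building evaluation datasets), here is a minimal, purely hypothetical sketch. None of this reflects Google's actual tooling or policies; the keyword rule is a stand-in for an LLM-based classifier, and all names and data are invented for illustration:

```python
# Hypothetical sketch: comparing an automated content-safety labeler
# against human-rated "golden" examples to compute precision and recall,
# as one might when benchmarking a classifier over time.

def model_label(text: str) -> bool:
    """Placeholder classifier: flags text containing a blocklisted term.
    In practice this would be a call to an LLM or trained model."""
    blocklist = {"scam", "malware"}
    return any(term in text.lower() for term in blocklist)

def precision_recall(golden):
    """Compare model labels to human labels; return (precision, recall)."""
    tp = fp = fn = 0
    for text, human_says_unsafe in golden:
        predicted = model_label(text)
        if predicted and human_says_unsafe:
            tp += 1
        elif predicted and not human_says_unsafe:
            fp += 1
        elif not predicted and human_says_unsafe:
            fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Invented golden set: (text, human judgment that it is unsafe).
golden = [
    ("Click here for a free malware download", True),
    ("This is a scam, send money now", True),
    ("Best hiking trails near Seattle", False),
    ("Obvious fraud with no keyword overlap", True),  # missed by the stub
]
p, r = precision_recall(golden)
```

On this toy data the stub catches 2 of 3 unsafe items with no false positives, so precision is 1.0 and recall is about 0.67; tracking such metrics across classifier versions is one way to "benchmark user trust risks and track improvements over time."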
Information collected and processed as part of your Google Careers profile, and any job applications you choose to submit, is subject to Google's Applicant and Candidate Privacy Policy.
Google is proud to be an equal opportunity and affirmative action employer. We are committed to building a workforce that is representative of the users we serve, creating a culture of belonging, and providing an equal employment opportunity regardless of race, creed, color, religion, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, veteran status, marital status, pregnancy or related condition (including breastfeeding), expecting or parents-to-be, criminal histories consistent with legal requirements, or any other basis protected by law. See also Google's EEO Policy, Know your rights: workplace discrimination is illegal, Belonging at Google, and How we hire.
If you have a need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
Google is a global company and, in order to facilitate efficient collaboration and communication globally, English proficiency is a requirement for all roles unless stated otherwise in the job posting.
To all recruitment agencies: Google does not accept agency resumes. Please do not forward resumes to our jobs alias, Google employees, or any other organization location. Google is not responsible for any fees related to unsolicited resumes.