Facebook Engineering is a dynamic, fast-paced organization made up of more than 40 smaller engineering teams (far too many to list individually in this document). Each team is small, which lets every engineer move fast while still having a huge impact. To help with the team selection process, we've grouped the engineering teams into broader families. Please read through these descriptions and let your recruiter know which of these groups sound interesting to you.
CareML - (Seattle)
The CareML team is responsible for the detection, prevention, and remediation of objectionable content on Facebook (nudity, pornography, hate speech, violence, and gore). We work to keep Facebook a clean and welcoming platform for all. We build state-of-the-art algorithms for images, text, and video, as well as infrastructure that is truly "Facebook scale". Example projects: (1) automatically reporting suspicious group posts to group admins for approval, which helps keep groups clean and on topic; (2) improving classification of suggestive videos through audio analysis.
Misinformation: Fake news and misinformation are among the most serious problems facing our community and elections. Our team builds systems and develops machine learning models to fight the spread of fake news. We aim to reduce the prevalence of fake and misleading information in the US and other countries by discovering the actors that promulgate it, the specific content that spreads it, and the engagement patterns that typify its spread through our community.
Controls: Helping our users make choices so that they can better control the kinds of content they see in their feed -- for example, letting them express how aggressively we should crack down on clickbait, or how much to trust fact checkers such as Snopes and PolitiFact.
Web Experience and Quality: The mission of the team is to ensure that our 2 billion+ users have a world-class experience when they click on any link displayed in their News Feed. We do this by modeling and understanding the web, acquiring Integrity landing-page signals, and developing metrics to better understand the user experience "after the click". The work involves building classifiers that predict the probability that a particular landing page represents a given kind of bad experience (hate/destructive conflict, adult, violence, spam, etc.), and then using the resulting scores as Integrity inputs into Feed Ranking. We would also like to understand how bad content inside Facebook is monetized outside Facebook. The team will build and evolve web-graph models, and apply classifiers to the content and images on web pages.
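To make the "scores as Integrity inputs into Feed Ranking" idea concrete, here is a minimal sketch of how per-category landing-page classifier probabilities might be folded into a single ranking demotion factor. Every name, category weight, and threshold here is invented for illustration; this is not Facebook's actual system.

```python
# Hypothetical sketch: fold landing-page "bad experience" classifier
# scores into one demotion multiplier for a post's ranking score.
# Categories, threshold, and demotion curve are all illustrative.

BAD_EXPERIENCE_TYPES = ["hate", "adult", "violence", "spam"]

def integrity_demotion(scores, threshold=0.8):
    """Return a multiplier in (0, 1] applied to a post's ranking score.

    `scores` maps each bad-experience type to the classifier's predicted
    probability that the link's landing page exhibits it. Pages confidently
    predicted bad in any category are demoted most strongly.
    """
    worst = max(scores.get(t, 0.0) for t in BAD_EXPERIENCE_TYPES)
    if worst >= threshold:
        return 0.1              # strong demotion for confident predictions
    return 1.0 - 0.5 * worst    # soft demotion proportional to risk

# Usage: a link whose landing page scores 0.9 on "spam" keeps only
# 10% of its base ranking score.
ranked_score = 100.0 * integrity_demotion({"spam": 0.9})
```

A multiplicative demotion (rather than a hard block) keeps borderline content rankable while pushing confidently bad pages far down the feed.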
Affective Polarization: "Affective polarization" refers to people's negative feelings toward others who are different from themselves (e.g., in political affiliation). Research indicates that affective polarization has been increasing worldwide over the past few years. While there are many contributors to polarization, we want to understand its relationship to social media use, and we have a shared responsibility to reduce its negative effects. We also want to study the separate but related concepts of filter bubbles (only seeing/showing items that somebody already agrees with) and echo chambers (forming homogeneous communities). The goal of our team is to build a suite of metrics that can measure the extent of polarization among our users, and to take steps to reduce it. We are experimenting with several different interventions -- e.g., boosting content that leads to positive cross-cutting interactions, improving ranking to present a balanced view of a hot-button issue, adding UI displays that highlight commonality, down-ranking toxic comments, and more. We will also be researching and testing several open questions: how does polarization affect the user experience; how do products like Groups, Pages You May Like, and friend recommendations influence polarization; can Facebook change polarization with future products and features; and how responsive any polarization metric will be to our changes.
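As one illustration of the kind of metric the paragraph above describes, here is a toy "cross-cutting interaction rate": the fraction of a user's positive interactions that cross group lines. The group labels and the metric definition itself are purely hypothetical, shown only to make the idea of a measurable polarization signal concrete.

```python
# Illustrative sketch only: one possible "cross-cutting interaction" metric.
# The group labels and definition are hypothetical, not a Facebook metric.

def cross_cutting_rate(interactions):
    """Fraction of a user's positive interactions that cross group lines.

    `interactions` is a list of (user_group, content_group, positive)
    tuples. A higher rate suggests more positive engagement with
    viewpoints outside the user's own group.
    """
    positive = [(u, c) for u, c, p in interactions if p]
    if not positive:
        return 0.0
    crossing = sum(1 for u, c in positive if u != c)
    return crossing / len(positive)

# Usage: 2 of this user's 3 positive interactions cross group lines,
# so the rate is 2/3; the negative interaction is ignored.
rate = cross_cutting_rate([
    ("A", "B", True), ("A", "A", True), ("A", "B", True), ("A", "B", False),
])
```

A metric like this could then be tracked before and after a ranking change to check how responsive it is to interventions, which is exactly the open question raised above.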
Pages Integrity - (Seattle)
Pages Integrity is Facebook's main abuse- and threat-prevention team, focused on counteracting external risks to Facebook arising from the use of Facebook Pages. Our team builds ML/AI technologies, infrastructure solutions, and product use cases to safeguard our platform from bad Page actors. The internship project will focus on the following two pillars:
Enforcement: Not everyone has the best intentions when using Facebook Pages. Our job is to identify bad actors and enforce against them as quickly as possible. We do this by enhancing the internal tools our Page Operations partners use, generating signals that alert us to possible risks to Facebook, and collaborating with stakeholders to take the most appropriate action.
Machine Learning: Different types of badness exist within the enormous Facebook Pages graph, e.g., spam, scams, offensive/pornographic content, and impersonation. Identifying bad Page actors involves building a diverse set of signals derived from the Page, including its content (text, photos, videos, etc.), links, connections to other Pages and users, and Page-admin behavior. The ML pillar focuses on building ML and ranking systems to predict different kinds of badness in Pages. It also supplies the signals and features that feed into the Enforcement pillar described above.
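The signal-fusion idea above can be sketched as a simple logistic combination of per-pillar features. The feature names, weights, and bias below are entirely invented for illustration; a production system would learn these from labeled data rather than hand-set them.

```python
import math

# Hypothetical sketch: fuse diverse Page signals (content, links, graph
# connections, admin behavior) into one "badness" probability with a
# logistic model. Feature names and weights are invented for illustration.

WEIGHTS = {
    "content_spam_score": 2.0,        # from text/photo/video classifiers
    "bad_link_ratio": 1.5,            # fraction of outbound links flagged
    "bad_neighbor_ratio": 1.0,        # connected Pages/users already enforced on
    "admin_fake_account_score": 2.5,  # Page-admin behavior signal
}
BIAS = -3.0  # most Pages are benign, so the prior is low

def page_badness(features):
    """Predicted probability that a Page is abusive, in [0, 1].

    Missing features default to 0.0, i.e. "no evidence of badness".
    """
    z = BIAS + sum(w * features.get(name, 0.0) for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

# A Page with strong signals in every pillar scores near 1.0, while a
# Page with no signals stays near the low prior sigmoid(-3) ~ 0.05.
```

Combining signals from independent pillars this way means no single evaded classifier (e.g., spam text alone) is enough to hide a Page whose links, graph neighborhood, and admin behavior all look suspicious.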
We have major plans to grow the team in MPK and Seattle. We're looking for ML/backend generalists and specialists, as well as WWW and mobile engineers with a heavy focus on product work.