This post originally appeared on Hunter Walk's blog and is republished here by permission.
Since Facebook, Twitter and Alphabet's YouTube have all been vocal (to various degrees) about staffing up the human element of their content moderation teams, here are a few things to understand about how these systems typically work. Most of this is based on my time at YouTube (which ended almost five years ago, so nothing here should be considered a definitive statement of current operations), but I found our peer companies approached it similarly. Note, I'm going to focus on user generated/shared content, not advertising policies. It's typical that ads have their own, separate criteria. This is more about text, images & video/audio that a regular user would create, upload and publish.
Content Moderation or Content Review is a term applied to content (text, images, audio, video) that a user has uploaded, published or shared on a social platform. It's distinct from Ads or Editorial (eg finding content on the site to feature/promote if such a function exists within an org), which typically have separate teams and guidelines for when they review content.
The goal of most Content Moderation teams is to enforce the product's Community Standards or Terms of Service, which state what can and cannot be shared on the platform. As you might guess, there are black-and-white areas and gray areas in all of this, which means there are guidelines, training and escalation policies for human reviewers.
It would be very rare (and undesirable) for humans to (a) review all the content shared on a site and (b) review content pre-publish – that is, when a user tries to share something, having it "approved" by a human before it goes live on the site/app.
Instead, companies rely upon content review algorithms which do a lot of the heavy lifting. The algorithms attempt to "understand" the content being created and shared. At point of creation there are limited signals – who uploaded it (account history or lack thereof), where it was uploaded from, the content itself and other metadata. As the content exists within the product more data is gained – who is consuming it, is it being flagged by users, is it being shared by users and so on.
These richer signals let the algorithm keep tuning its conclusion about whether a piece of content is appropriate for the site. Most of these systems have user flagging tools, which factor heavily into the algorithmic scoring of whether content should be elevated for review.
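As a concrete (and entirely hypothetical) sketch, a scoring function along these lines might combine upload-time signals with post-publish signals; the field names and weights below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class ContentItem:
    # Upload-time signals (available at the point of creation)
    uploader_age_days: int        # account history; 0 = brand-new account
    uploader_prior_strikes: int   # past policy violations on this account
    # Post-publish signals (accumulate as the content lives on the site)
    views: int = 0
    user_flags: int = 0

def risk_score(item: ContentItem) -> float:
    """Toy risk score in [0, 1]; higher = more likely to violate policy."""
    score = 0.0
    if item.uploader_age_days < 30:   # new accounts start out less trusted
        score += 0.2
    score += min(item.uploader_prior_strikes * 0.15, 0.45)
    if item.views > 0:
        # User flags weigh heavily, normalized by exposure so one flag on a
        # little-seen video counts more than one flag on a viral one.
        flag_rate = item.user_flags / item.views
        score += min(flag_rate * 50, 0.5)
    return min(score, 1.0)
```

Real systems use trained models over far richer features, but the shape is the same: a score that updates as new signals arrive.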
Most broadly, you can think about a piece of content as being Green, Yellow or Red at any given time. Green means the algorithm thinks it's fine to exist on the site. Yellow means it's questionable. And Red, well, red means it shouldn't be on the site. Each of these designations is fluid and imperfect; there are false positives and false negatives all the time.
To think about the effectiveness of a Content Policy as *just* the quality of the technology would be incomplete. It's really a policy question decided by people and enforced at the code level. Management needs to set thresholds for the divisions between Green, Yellow and Red. They determine whether an unknown new user should default to be trusted or not. They conclude how to prioritize human review of items in the Green, Yellow or Red buckets. And that's where humans mostly come into play…
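Those management-set thresholds might look like a small piece of policy config plus a classifier; the specific numbers here are invented to show that they are a dial, not a property of the model:

```python
# Policy-set thresholds: these numbers are a management decision, not a
# property of the technology. Moving them changes how much content lands
# in human review queues.
YELLOW_THRESHOLD = 0.4   # at or above: questionable, queue for review
RED_THRESHOLD = 0.8      # at or above: presumed violating, act immediately

def classify(score: float) -> str:
    """Map a risk score in [0, 1] to the Green/Yellow/Red buckets."""
    if score >= RED_THRESHOLD:
        return "red"
    if score >= YELLOW_THRESHOLD:
        return "yellow"
    return "green"
```

Lowering YELLOW_THRESHOLD puts more content in front of humans (more coverage, more staffing cost); raising it does the reverse. That trade-off is the policy question, decided by people.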
Human reviewers help create training sets for the algorithms but their main function is continually staffing the review queues of content that the algorithm has spit out for them. Queues are typically broken into different buckets based on priority of review (eg THIS IS URGENT, REVIEW IN REAL TIME 24-7) as well as characteristics of the reviewers – trained in different types of content review, speak different languages, etc. It's a complex factory-like system with lots of logic built in.
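A minimal sketch of that factory-like routing, assuming per-language priority queues (all names and the priority scheme are hypothetical):

```python
import heapq

class ReviewQueues:
    """Route flagged items into per-language priority queues.

    Priority 0 = urgent (review in real time); larger numbers = less
    urgent. heapq pops the smallest tuple first, so urgent items come
    out first, and a sequence counter keeps equal priorities FIFO.
    """
    def __init__(self):
        self._queues = {}   # language -> heap of (priority, seq, item_id)
        self._seq = 0

    def enqueue(self, item_id, language, priority):
        heap = self._queues.setdefault(language, [])
        heapq.heappush(heap, (priority, self._seq, item_id))
        self._seq += 1

    def next_for_reviewer(self, languages):
        """Hand a reviewer the most urgent item in a language they speak."""
        best = None
        for lang in languages:
            heap = self._queues.get(lang)
            if heap and (best is None or heap[0][:2] < best[0][:2]):
                best = (heap[0], lang)
        if best is None:
            return None
        _, lang = best
        return heapq.heappop(self._queues[lang])[2]
```

Production systems would also route on content type, reviewer training, and escalation tier, but the queue-and-match shape is the same.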
The amount of content coming onto the platform and the algorithmic thresholds that trigger a human review determine how much content lands in a review queue. The number of human reviewers, their training and quality, and the effectiveness of the tools they work in determine how quickly that content gets reviewed.
So basically, when you hear about "10,000 human reviewers being added," it can mean (a) MORE content is going to be reviewed [thresholds are being changed to put more content into review queues] and/or (b) review queue content will be reviewed FASTER [same content but more humans to review].
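The arithmetic behind that trade-off is simple; the numbers below are made up purely to show the mechanics:

```python
# Back-of-envelope queue math: inflow is set by volume x thresholds;
# the clearance rate is reviewers x per-reviewer throughput.
# All figures are invented for illustration.
inflow_per_day = 500_000          # items crossing the review threshold daily
items_per_reviewer_per_day = 200  # average reviewer throughput

def backlog_change(reviewers: int) -> int:
    """Daily backlog growth (positive) or shrinkage (negative)."""
    return inflow_per_day - reviewers * items_per_reviewer_per_day
```

With these made-up numbers, 2,000 reviewers leave the queue growing by 100,000 items a day; adding 10,000 more flips it to shrinking, or equivalently lets management lower thresholds and review more content at the same speed.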
So do these companies actually take content moderation seriously?

The honest answer is "Yes but…"
Yes but Content Operations is typically a cost center, not a revenue center, so it gets managed to a cost exposure and can be starved for resources.
Yes but Content Operations can sometimes be thought of as a "beginner" job for product managers, designers and engineers, so it gets younger, less influential staffing that habitually rotates off to a new project after 1-2 years.
Yes but a lack of diversity and misaligned incentives in senior leadership and teams can lead to under-assessing the true cost (to brand, to user experience) of "bad" content on the platform.
Why isn't there more porn on these platforms?

Because there are much better places to share porn than Twitter, Facebook and YouTube. And because algorithms are actually really good at detecting nudity. However, content created for sexual gratification that doesn't expressly involve nudity is much tougher for platforms. Did I ever write about creating YouTube's fetish video policy? That was an interesting discussion…
Hope that helps folks understand these systems a bit more. If you have any questions, reach out to me on Twitter.
Hunter Walk is an investor at Homebrew, a seed-stage venture fund. He previously led consumer product management for YouTube, oversaw product and sales for Google's contextual ad business, and was a founding member of the product team that created virtual world Second Life.