As Facebook continues to grapple with the rise of violent and extremist content posted to its platform, the company is trying a new tactic: empowering outside groups (in effect, outsourcing to them) to act as watchdogs. Its latest program, launched in the U.K., will fund and train local non-governmental groups to monitor and respond to extremist content, even giving them a dedicated channel to communicate directly with the company, Reuters reports.
Facebook—and other large tech companies like Alphabet—has lately been under fire from European governments over its inability to curb this sort of abusive content. The new program works alongside the company’s artificial intelligence efforts, which aim to automatically take down flagged posts, as well as Facebook’s army of outsourced content moderators around the globe.
Yesterday, The Information published a report documenting allegations from six women who accused venture capitalist Justin Caldbeck of making unwanted sexual advances toward them, ranging from groping to inappropriate text messages. Barely 24 hours later, Caldbeck announced that he is taking an “indefinite leave of absence” and will seek professional counseling.
“The power dynamic that exists in venture capital is despicably unfair,” he said in a statement provided to Fast Company. “The gap of influence between male venture capitalists and female entrepreneurs is frightening, and I hate that my behavior played a role in perpetuating a gender-hostile environment. It is outrageous and unethical for any person to leverage a position of power in exchange for sexual gain; it is clear to me now that that is exactly what I’ve done.”
Could this be a harbinger of things to come? As Fast Company’s Ruth Reader posited recently, perhaps this is the year companies will stop giving workplace sexism a pass.