FPCH Admin allheart55 Cindy E Posted May 17, 2018

On Tuesday, Facebook took yet another stab at transparency in these days of users’ and politicians’ outrage. It came in the form of the first release of the company’s Community Standards Enforcement Report, and it was stuffed with the type of detail that Mark Zuckerberg told so many Congresspeople he’d need to get back to them on when he was first lightly sautéed and then flame-grilled in two days of testimony.

For years, Facebook has had Community Standards that explain “what stays up and what comes down.” Last month, for the first time, Facebook published the internal guidelines it follows to enforce those standards. Tuesday’s release of the first ever Community Standards Enforcement Report is a way to hand over the numbers that have resulted from that enforcement.

With that information in hand, Facebook’s thinking goes, we can all judge for ourselves how it’s doing when it comes to getting rid of all those fake accounts and their spammy output… And posts with nudity. Or sexual activity. Or hate speech. Or terrorist propaganda.

Guy Rosen, Facebook’s vice president of product management, said in the post that the company disabled about 583 million fake accounts during the first three months of this year, or between 3% and 4% of monthly active users. It’s taken down nearly 1.3 billion over the past six months.

The majority of fake accounts were blocked within minutes of registration, Facebook said, touting its artificial intelligence (AI) auto-flag, auto-destroy technologies. On a daily basis, it crushes millions of fake accounts before they ever hatch.

Take down the accounts, and you’re on the road to wiping out the spam they spew, 837 million pieces of which Facebook found and flagged in Q1 2018. Nearly 100% of that spam was discovered and flagged before anyone reported it, Facebook says.

Taking down fake accounts is important not just to fight spam. It’s also crucial for battling fake news, misinformation, bad ads and scams. For example, following Facebook’s F8 developer conference, the company said that it’s started to use AI to automatically sniff out accounts linked to financial scams.
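For readers who want to play with the two kinds of numbers the report leans on, here’s a minimal sketch in Python of how a “proactive rate” and a “prevalence per 10,000 views” figure are computed. The function names and the absolute view counts are our own illustration, not Facebook’s methodology; only the 837 million spam figure, the “nearly 100%” proactive claim, and the seven-to-nine-per-10,000 prevalence estimate quoted below come from the report.

```python
# Sketch of the report's two recurring metrics (our naming, not Facebook's).

def proactive_rate(flagged_before_report: int, total_actioned: int) -> float:
    """Share of actioned content found before any user reported it."""
    return flagged_before_report / total_actioned

def prevalence_per_10k(violating_views: int, total_views: int) -> float:
    """Views of violating content per 10,000 total content views."""
    return 10_000 * violating_views / total_views

# ~837 million pieces of spam actioned in Q1 2018, "nearly 100%" of it
# flagged proactively; the 835 million split is a hypothetical stand-in.
print(f"{proactive_rate(835_000_000, 837_000_000):.1%} found proactively")

# Hypothetical view counts chosen so the answer lands in the report's
# quoted 7-9 per 10,000 range for adult nudity.
print(f"{prevalence_per_10k(800_000, 1_000_000_000):.0f} per 10,000 views")
```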
Numbers on other types of violative content:

Facebook took down 21 million pieces of what it considers to be adult nudity and sexual activity in Q1 2018. It found 96% of titillating content before it was reported. Facebook estimates that out of every 10,000 pieces of content viewed on Facebook, just seven to nine views were of content that violated its adult nudity and pornography standards… which, by the way, have a history of head-scratching decisions. A few years ago, Facebook found itself having to clarify just what non grata “nudity” is. TL;DR: it has to do with the nuances of nipples.

For graphic violence, Facebook took down or applied warning labels to about 3.5 million pieces of violent content in Q1 2018, 86% of which was identified by AI before users reported it to Facebook.

Hate speech is a tough one, not just for Facebook but also for Twitter, YouTube and other platforms. Facebook says its technology “still doesn’t work that well and so it needs to be checked by our review teams.” It removed 2.5 million pieces of hate speech in Q1 2018, 38% of which was automatically flagged before it saw the light of day.

Rosen echoed what Zuckerberg said at F8 recently: “we have a lot of work to do to prevent abuse.” For example, spotting hate speech is complex, as Rosen described in a detailed post following F8. From Tuesday’s post:

Artificial intelligence isn’t good enough yet to determine whether someone is pushing hate or describing something that happened to them so they can raise awareness of the issue. AI also needs to be trained with large amounts of data to recognize meaningful patterns of behavior. Facebook doesn’t always have that much training data, particularly in less widely used languages.

All in all, the report is Facebook’s latest bid to pull itself out of the post-Cambridge Analytica mess it got itself into…

…a user data-sharing fiasco that’s already chalked up two more misbehaving apps: besides Cambridge Analytica, we got a second app posing as a research lamb that turned out to be selling our data to the marketing wolves, and then, this week, we got yet another research app that left users’ intimates out on the laundry line, unsecured, for four years.

But Facebook’s got much, much more to dig itself out of besides the app agonies. For example, one imagines that many of the questions that this report tries to answer have to do with the 2016 US presidential election manipulation spree, replete as it was with Russian trollery, fake news and political ads illegally purchased by overseas entities. And that’s just part of a more overarching question: namely, is Facebook now too powerful?
And can it even keep up with what it calls “sophisticated adversaries who continually change tactics to circumvent our controls?”

Source: Sophos

~I know that you believe you understand what you think I said, but I'm not sure you realize that what you heard is not what I meant.~ ~~Robert McCloskey~~
FPCH Admin AWS Posted May 17, 2018

It is nice that they are doing something about the fakes.

Off Topic Forum - Unlike the Rest