September 27, 2021

The FaceBook Files: Wall Street Journal Exposé Proves FaceBook Has Been Lying About Treating Users Equally, and Has Long Known Its Product Instagram Is "Toxic" for Teenaged Girls

This was a three-part series released last week, linked by Real Clear Investigations.

Part 1: FaceBook is lying about treating users equally. In fact, not only do they "blacklist" users they don't like, but they "whitelist" celebrities and politicians they like, ensuring that they can't easily be censored, no matter how many of FaceBook's terms of service they violate.

If you've ever wondered how Joy Ann Reid never gets censored or "limited" no matter how much anti-vaccination conspiracy theorizing she engages in -- I mean, engaged in, prior to Joe Biden becoming, supposedly, "President" -- well, maybe Twitter has a "whitelist" similar to the one FaceBook employs to protect its favorites.

Mark Zuckerberg has publicly said Facebook Inc. allows its more than three billion users to speak on equal footing with the elites of politics, culture and journalism, and that its standards of behavior apply to everyone, no matter their status or fame.

In private, the company has built a system that has exempted high-profile users from some or all of its rules, according to company documents reviewed by The Wall Street Journal.

The program, known as "cross check" or "XCheck," was initially intended as a quality-control measure for actions taken against high-profile accounts, including celebrities, politicians and journalists. Today, it shields millions of VIP users from the company's normal enforcement process, the documents show. Some users are "whitelisted" --rendered immune from enforcement actions--while others are allowed to post rule-violating material pending Facebook employee reviews that often never come.

At times, the documents show, XCheck has protected public figures whose posts contain harassment or incitement to violence, violations that would typically lead to sanctions for regular users. In 2019, it allowed international soccer star Neymar to show nude photos of a woman, who had accused him of rape, to tens of millions of his fans before the content was removed by Facebook.

...


A 2019 internal review of Facebook's whitelisting practices, marked attorney-client privileged, found favoritism to those users to be both widespread and "not publicly defensible."

Of course, this internal review was stamped "attorney-client privilege" so that it could never be subpoenaed by Congress or a court.

But it got leaked.

"We are not actually doing what we say we do publicly," said the confidential review. It called the company's actions "a breach of trust" and added: "Unlike the rest of our community, these people can violate our standards without any consequences."

...

This is the first in a series of articles based on those documents and on interviews with dozens of current and former employees.

At least some of the documents have been turned over to the Securities and Exchange Commission and to Congress by a person seeking federal whistleblower protection, according to people familiar with the matter.

XCheck started out as a program that would immunize VIPs from automated moderation, shunting their posts to human beings who would moderate them.

But, in fact, FaceBook reviews at most 10% of XCheck users' posts. They have decided that it's easier to just not review them at all, meaning that FaceBook's powerful, politically-favored VIPs are effectively "whitelisted" and will never be censored or punished, no matter what they write.


While the program included most government officials, it didn't include all candidates for public office, at times effectively granting incumbents in elections an advantage over challengers. The discrepancy was most prevalent in state and local races, the documents show, and employees worried Facebook could be subject to accusations of favoritism.

...

Facebook recognized years ago that the enforcement exemptions granted by its XCheck system were unacceptable, with protections sometimes granted to what it called abusive accounts and persistent violators of the rules, the documents show. Nevertheless, the program expanded over time, with tens of thousands of accounts added just last year.

In addition, Facebook has asked fact-checking partners to retroactively change their findings on posts from high-profile accounts, waived standard punishments for propagating what it classifies as misinformation and even altered planned changes to its algorithms to avoid political fallout.

"Facebook currently has no firewall to insulate content-related decisions from external pressures," a September 2020 memo by a Facebook senior research scientist states, describing daily interventions in its rule-making and enforcement process by both Facebook’s public-policy team and senior executives.

A December memo from another Facebook data scientist was blunter: "Facebook routinely makes exceptions for powerful actors."

The fake Oversight Board made 19 recommendations to FaceBook. FaceBook said they'd adopt 15 of them.

One of the four they refused to adopt concerned being transparent about its XCheck whitelisting program for VIPs.

Furthermore, FaceBook actually misled the Oversight Board about XCheck, making the Oversight Board's blessing of FaceBook worthless.

The XCheck documents show that Facebook misled the Oversight Board, said Kate Klonick, a law professor at St. John's University. The board was funded with an initial $130 million commitment from Facebook in 2019, and Ms. Klonick was given special access by the company to study the group's formation and its processes.


"Why would they spend so much time and money setting up the Oversight Board, then lie to it?" she said of Facebook after reviewing XCheck documentation at the Journal's request.

I don't know. Why would CBS set up an "independent investigation" of RatherGate which then whitewashed the affair and claimed CBS was blameless and Dan Rather just committed a minor boo-boo?

Why would The Lincoln Project fund an "independent" review by lawyers it was paying, which then claimed there was no evidence the Lincoln Project knew it was harboring a serial sex predator and groomer of underage boys, despite there being publicly available evidence that it did know?

Gee, if I can't trust sham "independent reviews" bought and paid for by crooked corporations to deliver a "not guilty" verdict on demand, who can I trust?

...

In a written statement, a spokesman for the board said it "has expressed on multiple occasions its concern about the lack of transparency in Facebook's content moderation processes, especially relating to the company's inconsistent management of high-profile accounts."

So the "Oversight Board" has repeatedly sent toothless Strongly Worded Letters and FaceBook has told them to pound sand.

Maybe they should change the name from "Oversight Board" to "Unsolicited, Unwanted Suggestions Board."

FaceBook has repeatedly lied about XCheck both to the public and to the supposedly-independent "Oversight Board" that will supposedly police its actions. FaceBook has claimed XCheck affects a "small amount" of users; in fact, at least 5.8 million users have been granted carte blanche to violate FaceBook's supposed "rules" as they will.

Pitch for National Review:

The Conservative Case for Repeatedly Lying to Congress and Federal Agencies


Part 2: FaceBook has long known that its product Instagram is toxic for the mental health of users, particularly teenaged girls.

Instagram's own internal research established that Instagram causes a fair number of girls to want to kill themselves:


"Thirty-two percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse," the researchers said in a March 2020 slide presentation posted to Facebook's internal message board, reviewed by The Wall Street Journal. "Comparisons on Instagram can change how young women view and describe themselves."

For the past three years, Facebook has been conducting studies into how its photo-sharing app affects its millions of young users. Repeatedly, the company's researchers found that Instagram is harmful for a sizable percentage of them, most notably teenage girls.

"We make body image issues worse for one in three teen girls," said one slide from 2019, summarizing research about teen girls who experience the issues.

"Teens blame Instagram for increases in the rate of anxiety and depression," said another slide. "This reaction was unprompted and consistent across all groups."
Among teens who reported suicidal thoughts, 13% of British users and 6% of American users traced the desire to kill themselves to Instagram, one presentation showed.

40% of Instagram's users are 22 and younger, and FaceBook intends to get more and more teens (and tweens) to sign up for Instagram.

...

In public, Facebook has consistently played down the app's negative effects on teens, and hasn't made its research public or available to academics or lawmakers who have asked for it.

I bet you they've stamped that research "attorney-client material," too.

Note that the documents reveal that Zuckerberg had been personally briefed about some of this research.

But look at how Zuckerberg answered the question of whether his company had studied the effect of its products on children:

The research has been reviewed by top Facebook executives, and was cited in a 2020 presentation given to Mr. Zuckerberg, according to the documents. At a congressional hearing this March, Mr. Zuckerberg defended the company against criticism from lawmakers about plans to create a new Instagram product for children under 13. When asked if the company had studied the app's effects on children, he said, "I believe the answer is yes."

He believes -- but only believes, Senator -- that the company has done research on Instagram's effects on children.

I guess he forgot getting personally briefed on this research.

Congress has asked for its research in this area -- guess what Zuckerberg told Congress?

In August, Sens. Richard Blumenthal and Marsha Blackburn in a letter to Mr. Zuckerberg called on him to release Facebook's internal research on the impact of its platforms on youth mental health.

In response, Facebook sent the senators a six-page letter that didn’t include the company’s own studies. Instead, Facebook said there are many challenges with conducting research in this space, saying, "We are not aware of a consensus among studies or experts about how much screen time is 'too much,'" according to a copy of the letter reviewed by the Journal.

Facebook also told the senators that its internal research is proprietary and "kept confidential to promote frank and open dialogue and brainstorming internally."

Yeah I'm sure that's why.

A Facebook spokeswoman said the company welcomed productive collaboration with Congress and would look for opportunities to work with external researchers on credible studies.

Yeah I'm sure of that, too.

Part 3: FaceBook says it tried to make its toxic products "healthier." The result? They made FaceBook and Instagram more toxic, and more profitable, than ever.

This point is about FaceBook -- and Twitter, for that matter -- living and dying according to "engagement." About how to make their sites "sticky." About how to keep their users addicted.

And you almost certainly know what drives "engagement" on social media -- what it is that keeps people refreshing and refreshing.

Anger. Outrage. Controversy.

Arguing with other people. Screaming, digitally, at other people.

This is the big reason I end every day with a Cafe post. I've been thinking of doing one in the middle of the day, too.

Politics is too emotionally draining. Well, emotionally stimulating (in the bad ways), and then emotionally draining.

Especially now, in the last days of America.

But I realized, without having to commission studies about it, that the news was just too upsetting for people's mental and emotional health.

FaceBook did commission studies. And they realized that their unholy "algorithm" was driving "content creators" to feed the algorithm with hate, conflict, and yes, even racism.

But they also realized that that conflict was good for their bottom line.

So they kept the algorithm pumping.

In the fall of 2018, Jonah Peretti, chief executive of online publisher BuzzFeed, emailed a top official at Facebook Inc. The most divisive content that publishers produced was going viral on the platform, he said, creating an incentive to produce more of it.

He pointed to the success of a BuzzFeed post titled "21 Things That Almost All White People are Guilty of Saying," which received 13,000 shares and 16,000 comments on Facebook, many from people criticizing BuzzFeed for writing it, and arguing with each other about race. Other content the company produced, from news videos to articles on self-care and animals, had trouble breaking through, he said.

Mr. Peretti blamed a major overhaul Facebook had given to its News Feed algorithm earlier that year to boost "meaningful social interactions," or MSI, between friends and family, according to internal Facebook documents reviewed by The Wall Street Journal that quote the email.

BuzzFeed built its business on making content that would go viral on Facebook and other social media, so it had a vested interest in any algorithm changes that hurt its distribution. Still, Mr. Peretti's email touched a nerve.

Facebook's chief executive, Mark Zuckerberg, said the aim of the algorithm change was to strengthen bonds between users and to improve their well-being. Facebook would encourage people to interact more with friends and family and spend less time passively consuming professionally produced content, which research suggested was harmful to their mental health.

Within the company, though, staffers warned the change was having the opposite effect, the documents show. It was making Facebook's platform an angrier place.

Company researchers discovered that publishers and political parties were reorienting their posts toward outrage and sensationalism. That tactic produced high levels of comments and reactions that translated into success on Facebook.


"Our approach has had unhealthy side effects on important slices of public content, such as politics and news," wrote a team of data scientists, flagging Mr. Peretti's complaints, in a memo reviewed by the Journal. "This is an increasing liability," one of them wrote in a later memo.

They concluded that the new algorithm's heavy weighting of reshared material in its News Feed made the angry voices louder. "Misinformation, toxicity, and violent content are inordinately prevalent among reshares," researchers noted in internal memos.

...

Facebook employees also discussed the company's other, less publicized motive for making the change: Users had begun to interact less with the platform, a worrisome trend, the documents show.

Users were turning off FaceBook -- it had become a social media site for grandparents to keep up with their grandchildren. That's nice and all, but that wasn't good for FaceBook's profits.

So they changed the algorithm to reward the sharing of rage-stoking information.

Data scientists on that integrity team--whose job is to improve the quality and trustworthiness of content on the platform--worked on a number of potential changes to curb the tendency of the overhauled algorithm to reward outrage and lies. Mr. Zuckerberg resisted some of the proposed fixes, the documents show, because he was worried they might hurt the company's other objective--making users engage more with Facebook.

By the way, I begin to see why FaceBook is so aggressively censorious of conservatives: They want the sharing of outrage political clickbait. They want the "engagement" that produces.

But they also want to be seen as "trying to fix the problem."

And how do they do that? By cracking down on one and only one side of the political aisle. Leftists can continue sharing outrage porn with each other, and bombing their conservative friends and family with it.

But when a conservative argues back, or shares his own articles to refute the leftist ones -- shadowbanned. "Limited." Banned from running ads.

FaceBook changed the algorithm to reward supposedly "Meaningful Social Interaction" (spoiler: there is very little meaningful social interaction on FaceBook) because the former staple of FaceBook "interaction" -- original posts by people sharing wedding pictures or the like -- was declining precipitously.

Thus, the algorithm was reworked so that posts that racked up reposts or responses (often angry) would literally get points for every like or repost or reaction. And the more points a post "earned," the more widely it would be spread.

You know what can really rack up a lot of "Meaningful Social Interaction" points in a hurry?

Articles explaining why white people are terrible and have fundamentally evil DNA. Those'll get lots of points!


And it worked: Changing FaceBook from a "post pictures of your dog site" into a "bait your relatives with divisive extremist political messaging site" did increase the all-important "MSI" metrics.

From a business perspective, it worked. As Facebook predicted, time spent on the platform declined, but the effort to maximize interactions between users slowed the free fall in comments, and mostly improved the all-important metric of "daily active people" using Facebook, according to tests run in August 2018, internal memos show.

...

Brad Parscale, who was the digital strategy leader for Donald Trump’s 2016 presidential campaign, and boasted that Facebook is where Mr. Trump won the election, said he began to notice changes to the algorithm as early as mid-2017, when the performance of political videos began to decline.

"'Healthy' is a cover word for 'We need to reduce conservative and Trump video spread,'" he said in an interview for this article.

That's what I said earlier -- FaceBook kept the divisive messaging from the left -- that really drives up those MSI scores -- but, to placate critics (especially those in the Democrat Party, which is always threatening to take action against this Tolerated Illegal Monopoly), they clamped down on anything they considered "divisive" from the right.


posted by Ace at 06:30 PM

| Access Comments




Recent Comments
Anonosaurus Wrecks, Eaten By Cannibals[/s] [/b] [/i]: "Interesting Biden /Harris ad. https://youtu.be/ ..."

whig: "If you try to time stuff, treat it as fun money, o ..."

It's me donna: "395 Late wife tried to cover both outcomes; loaded ..."

mrp: "It is written, the early bird terrorizes the worm. ..."

Sponge - F*ck Joe Biden: "[i]--- Silly deplorable, we'll just take out mo ..."

ShainS -- Blood-Bath-and-Beyond angel investor [/b][/i][/s][/u]: "Too bad Biden's uncle was Naval aviator with rank ..."

Sponge - F*ck Joe Biden: "Strike...... ..."

Jeff Spicoli: "[i] If you try to time stuff, treat it as fun mon ..."

whig: "393 I feel a sense of obligation toward Bill Krist ..."

ShainS -- Blood-Bath-and-Beyond angel investor [/b][/i][/s][/u]: "Will we even be able to service a $54 trillion dol ..."

Deplorable Ian Galt: "Some of my assets allocation is in SPY, and a few ..."

San Franpsycho: "It falls to us to rebuild a sense of obligation to ..."

Recent Entries
Search


Polls! Polls! Polls!
Frequently Asked Questions
The (Almost) Complete Paul Anka Integrity Kick
Top Top Tens
Greatest Hitjobs

The Ace of Spades HQ Sex-for-Money Skankathon
A D&D Guide to the Democratic Candidates
Margaret Cho: Just Not Funny
More Margaret Cho Abuse
Margaret Cho: Still Not Funny
Iraqi Prisoner Claims He Was Raped... By Woman
Wonkette Announces "Morning Zoo" Format
John Kerry's "Plan" Causes Surrender of Moqtada al-Sadr's Militia
World Muslim Leaders Apologize for Nick Berg's Beheading
Michael Moore Goes on Lunchtime Manhattan Death-Spree
Milestone: Oliver Willis Posts 400th "Fake News Article" Referencing Britney Spears
Liberal Economists Rue a "New Decade of Greed"
Artificial Insouciance: Maureen Dowd's Word Processor Revolts Against Her Numbing Imbecility
Intelligence Officials Eye Blogs for Tips
They Done Found Us Out, Cletus: Intrepid Internet Detective Figures Out Our Master Plan
Shock: Josh Marshall Almost Mentions Sarin Discovery in Iraq
Leather-Clad Biker Freaks Terrorize Australian Town
When Clinton Was President, Torture Was Cool
What Wonkette Means When She Explains What Tina Brown Means
Wonkette's Stand-Up Act
Wankette HQ Gay-Rumors Du Jour
Here's What's Bugging Me: Goose and Slider
My Own Micah Wright Style Confession of Dishonesty
Outraged "Conservatives" React to the FMA
An On-Line Impression of Dennis Miller Having Sex with a Kodiak Bear
The Story the Rightwing Media Refuses to Report!
Our Lunch with David "Glengarry Glen Ross" Mamet
The House of Love: Paul Krugman
A Michael Moore Mystery (TM)
The Dowd-O-Matic!
Liberal Consistency and Other Myths
Kepler's Laws of Liberal Media Bias
John Kerry-- The Splunge! Candidate
"Divisive" Politics & "Attacks on Patriotism" (very long)
The Donkey ("The Raven" parody)
Powered by
Movable Type 2.64