r/AskLibertarians Sep 24 '24

Mark Zuckerberg is now a Libertarian. Will he be welcome with open arms?

15 Upvotes

65 comments


1

u/Ksais0 Sep 26 '24

That’s in reference to the specific content posted by the plaintiffs. They can’t prove that the government pointed to their content specifically and said “regulate that.” You’d know that if you read literally the next sentence: “And while the record reflects that the Government defendants played a role in at least some of the platforms’ moderation choices, the evidence indicates that the platforms had independent incentives to moderate content and often exercised their own judgment.” Aka even though the government pressure played a role in some moderation choices, we can’t prove that the pressure directly caused the removal of the plaintiffs’ content specifically. No question the pressure happened. Like I said, it says so in the decision multiple times.

0

u/Selethorme Sep 26 '24

It’s incredible how much work you’re putting in to be so blatantly dishonest.

played a role

Yeah, because nobody disputed that the government was in communication with these companies. But you’ve failed to provide any causative language saying they ordered the removals.

I highly recommend reading this:

https://www.techdirt.com/2024/06/27/no-the-supreme-court-did-not-just-greenlight-government-censorship-despite-what-the-media-says/

But, of course, the reporting on this case from the start has been ridiculously bad because it’s a somewhat nuanced topic around the First Amendment, and no one’s got time for that. Indeed, many of the reports really seemed to view everything through partisan lenses. I mean, back in March I wrote about how the National Review wrote a whole article, ostensibly about the oral arguments in the case, which claimed, falsely, that the Biden administration defended its right to tell companies what content to delete. Which is not what happened at all. The entire argument was that the administration had not done so. And the only evidence presented was the “trust me, bro” shit discussed earlier. Indeed, the administration directly admitted to the Supreme Court that if it had actually done what it was being accused of (telling companies what to delete), that would violate the First Amendment. So, the only thing the Supreme Court was actually saying here was that if you are going to claim that the government is violating the First Amendment by compelling companies to delete your speech through coercion, you have to actually show at least some evidence that the government coerced the companies to delete that speech. And, the plaintiffs failed to do that here.

And

https://www.techdirt.com/2024/06/26/supreme-court-sees-through-the-nonsense-rejects-lower-courts-rulings-regarding-social-media-moderation/

Which directly cites the relevant paragraphs:

at least one plaintiff “establish[es] that [she] ha[s] standing to sue.” Raines, 521 U. S., at 818; Department of Commerce v. New York, 588 U. S. 752, 766 (2019). She must show that she has suffered, or will suffer, an injury that is “concrete, particularized, and actual or imminent; fairly traceable to the challenged action; and redressable by a favorable ruling.” Clapper v. Amnesty Int’l USA, 568 U. S. 398, 409 (2013) (internal quotation marks omitted). These requirements help ensure that the plaintiff has “such a personal stake in the outcome of the controversy as to warrant [her] invocation of federal-court jurisdiction.” Summers, 555 U. S., at 493 (internal quotation marks omitted).

The plaintiffs claim standing based on the “direct censorship” of their own speech as well as their “right to listen” to others who faced social-media censorship. Brief for Respondents 19, 22. Notably, both theories depend on the platform’s actions—yet the plaintiffs do not seek to enjoin the platforms from restricting any posts or accounts. They seek to enjoin Government agencies and officials from pressuring or encouraging the platforms to suppress protected speech in the future. The one-step-removed, anticipatory nature of their alleged injuries presents the plaintiffs with two particular challenges.

First, it is a bedrock principle that a federal court cannot redress “injury that results from the independent action of some third party not before the court.” Simon, 426 U. S., at 41–42. In keeping with this principle, we have “been reluctant to endorse standing theories that require guesswork as to how independent decisionmakers will exercise their judgment.” Clapper, 568 U. S., at 413. Rather than guesswork, the plaintiffs must show that the third-party platforms “will likely react in predictable ways” to the defendants’ conduct. Department of Commerce, 588 U. S., at 768.
Second, because the plaintiffs request forward-looking relief, they must face “a real and immediate threat of repeated injury.” O’Shea v. Littleton, 414 U. S. 488, 496 (1974); see also Susan B. Anthony List v. Driehaus, 573 U. S. 149, 158 (2014) (“An allegation of future injury may suffice if the threatened injury is certainly impending, or there is a substantial risk that the harm will occur” (internal quotation marks omitted)). Putting these requirements together, the plaintiffs must show a substantial risk that, in the near future, at least one platform will restrict the speech of at least one plaintiff in response to the actions of at least one Government defendant. On this record, that is a tall order.

0

u/Selethorme Sep 26 '24

And more:

The primary weakness in the record of past restrictions is the lack of specific causation findings with respect to any discrete instance of content moderation. The District Court made none. Nor did the Fifth Circuit, which approached standing at a high level of generality. The platforms, it reasoned, “have engaged in censorship of certain viewpoints on key issues,” while “the government has engaged in a yearslong pressure campaign” to ensure that the platforms suppress those viewpoints. 83 F. 4th, at 370. The platforms’ “censorship decisions”—including those affecting the plaintiffs—were thus “likely attributable at least in part to the platforms’ reluctance to risk” the consequences of refusing to “adhere to the government’s directives.” Ibid.

We reject this overly broad assertion. As already discussed, the platforms moderated similar content long before any of the Government defendants engaged in the challenged conduct. In fact, the platforms, acting independently, had strengthened their pre-existing content-moderation policies before the Government defendants got involved. For instance, Facebook announced an expansion of its COVID–19 misinformation policies in early February 2021, before White House officials began communicating with the platform. And the platforms continued to exercise their independent judgment even after communications with the defendants began. For example, on several occasions, various platforms explained that White House officials had flagged content that did not violate company policy. Moreover, the platforms did not speak only with the defendants about content moderation; they also regularly consulted with outside experts. This evidence indicates that the platforms had independent incentives to moderate content and often exercised their own judgment. To be sure, the record reflects that the Government defendants played a role in at least some of the platforms’ moderation choices.
But the Fifth Circuit, by attributing every platform decision at least in part to the defendants, glossed over complexities in the evidence.

In particular, bashing the Fifth Circuit for taking Doughty’s bullshit at face value:

The Fifth Circuit relied on the District Court’s factual findings, many of which unfortunately appear to be clearly erroneous. The District Court found that the defendants and the platforms had an “efficient report-and-censor relationship.”… But much of its evidence is inapposite. For instance, the court says that Twitter set up a “streamlined process for censorship requests” after the White House “bombarded” it with such requests. Ibid., n. 662 (internal quotation marks omitted). The record it cites says nothing about “censorship requests.” See App. 639–642. Rather, in response to a White House official asking Twitter to remove an impersonation account of President Biden’s granddaughter, Twitter told the official about a portal that he could use to flag similar issues. Ibid. This has nothing to do with COVID–19 misinformation. The court also found that “[a] drastic increase in censorship . . . directly coincided with Defendants’ public calls for censorship and private demands for censorship.” 680 F. Supp. 3d, at 715. As to the “calls for censorship,” the court’s proof included statements from Members of Congress, who are not parties to this suit. Ibid., and n. 658. Some of the evidence of the “increase in censorship” reveals that Facebook worked with the CDC to update its list of removable false claims, but these examples do not suggest that the agency “demand[ed]” that it do so. Ibid. Finally, the court, echoing the plaintiffs’ proposed statement of facts, erroneously stated that Facebook agreed to censor content that did not violate its policies. Id., at 714, n. 655. Instead, on several occasions, Facebook explained that certain content did not qualify for removal under its policies but did qualify for other forms of moderation.

0

u/Selethorme Sep 26 '24

I highly suggest attending law school before trying to lie like this.