Strolling Down Trolling Lane

The Progress and Freedom Foundation has a good post on how we already have adequate laws in place to deal with internet trolling. See Under-Appreciated Existing Legal Remedies for Trolling, Defamation and Other “Malwebolent” Invasions of Privacy.

The article summarizes some academic ideas as to how we can combat the “scourge” of trolling.

Frank Pasquale has argued that we ought to require Internet search engines to provide a “right of reply”–allowing someone to post a “reply” that would appear on a search engine next to content concerning them that they consider inaccurate or defamatory (essentially the “fairness doctrine” applied online). Others (one example) have argued for replacing Section 230 with something akin to the notice-and-takedown regime of copyright so that publishers’ immunity would be contingent on compliance with takedown notices. But Mark Lemley, an internet law guru who is representing the plaintiffs in the Autoadmit case, has argued that Section 230 should instead be “rationalized” along with other Internet safe harbors under a unified safe harbor drawn from current trademark law: “innocent infringers” would have immunity and would not be required to take down allegedly defamatory content, but plaintiffs could get courts to issue injunctions requiring intermediaries to take down content. What unites advocates of all these proposals is that, like Schwartz, they downplay or ignore the effectiveness of existing tort remedies and third-party subpoenas.

I have an idea that I’ve been floating, and I invite comments on it.

I agree that we already have enough existing laws. However, I also realize that employing those laws can often be an arduous and expensive process. Let’s face it, if someone defames you and you can’t track the person down without filing suit and issuing subpoenas, you very well may just grin and bear it.

Unfortunately, there are many fear-mongers out there. From those trying to sell their “reputation defense” services to those who simply have an anti-First Amendment agenda, there are a lot of little dogs nipping at the heels of Section 230. These fear-mongers want us to believe that, like terrorists, the bird flu, and erotoxins, defamers lurk around every corner, just waiting to ruin our lives. Enough of them are oozing their misinformation onto daytime TV that Section 230 must eventually crack under the pressure.

Accordingly, it is important to entertain modifications to Section 230. However, they should be crafted with the lightest hand while holding the sharpest scalpel. How can we offer satisfaction to those who might be the victims of a cyber-smear campaign without creating a serious chilling effect that may shut down large corners of the marketplace of ideas?

Here is an idea, lifted from the ACPA’s in rem provision:

(A) A party who claims that (s)he has been defamed on the internet by an anonymous party on a website or other interactive computer service that enjoys the protections of 47 U.S.C. s 230, may file an in rem civil action against the allegedly defamatory content if:

1. the plaintiff alleges that the content is defamatory or invasive of the plaintiff’s right to privacy; and,

2. the court finds that the plaintiff-

(I) is not able to obtain in personam jurisdiction over a person who would have been a defendant in a civil action pertaining to the content; or

(II) through due diligence was not able to find a person who would have been a defendant in a civil action pertaining to the content by

(aa) sending a notice of the alleged violation and intent to proceed under this paragraph to the author of the offending content via email, postal mail, or by publishing notice of the action on the same forum where the content is found; or
(bb) in the alternative, publishing notice of the action as the court may direct promptly after filing the action.

(B) The actions under subparagraph (A)(2)(II) shall constitute service of process.

(C) In an in rem action under this paragraph, content shall be deemed to have its situs in the judicial district in which

(i) the website is subject to personal jurisdiction; or,
(ii) if the website specifically consents, the judicial district where the plaintiff resides.

(D) If the author of the content has deemed it necessary to remain anonymous, the author may appear in the action anonymously in defense of his or her content after providing the court with a document, to remain under seal, establishing the author’s true identity. Such identity shall not be revealed except upon a finding that the content is indeed tortious and actionable.

(E)
(i) The remedies in an in rem action under this paragraph shall be limited to a declaratory judgment that the content is deemed tortious and actionable, to the extent that, at trial, the plaintiff would have a very strong likelihood of success on the merits of the stated cause of action.

(ii) Upon the court making such a finding, the website shall have 30 days in which to remove the content. A failure to adhere to this judgment shall result in the website losing its immunity under 47 U.S.C. s 230 for that particular content, and the plaintiff shall be given the right to amend the claim to file suit against the website as if it were the author of the tortious content.

(iii) No findings of law or fact in the in rem proceeding shall be given any greater than persuasive authority in any subsequent suit against the website.

That way, speech is not suppressed without a judicial determination that it is capable of a defamatory meaning, that it is a false statement of fact, and that it would at least likely be proven false in a full-blown defamation trial. The plaintiff doesn’t need to hunt down the defendant if (s)he doesn’t want to endure the expense or hassle of doing so, and the plaintiff can get the defamatory material taken down, (begin sarcasm) which is all defamation plaintiffs really want, right? (/end sarcasm)

This is obviously a brand new idea, so any critique or tweaks would be most welcome.

4 Responses to Strolling Down Trolling Lane

  1. How would an in rem action work against a message board comment? At least on BigLawBoard, “the thing” can be altered as many times as the anonymous poster likes. There are daily backups that go back a few days, but these would not catch edits that occur between backups. Likewise, search engine caches are updated on a periodic basis.

    Say I post something nasty about someone, then think better about it an hour later and go back and edit it out. Assuming Google didn’t cache my comment before I edited it, how does a plaintiff prove what it said? Is a screen shot and an affidavit sufficient?

    Consider a community account where two people post under the same pseudonym. Say that poster A writes something benign, only to have that comment turned into something actionable by poster B. Even if you can track down the identities of all the posters sharing the community account, determining who posted what when is awfully difficult.

    Of course, you can clear a lot of this up by mandating an audit log of every post and every edit. While that’s technically possible, I can say that it would create a very significant burden on the development and maintenance of even an otherwise simple web forum.
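    For what it’s worth, a minimal sketch of what such an audit log might look like (a hypothetical schema, not BigLawBoard’s actual one): an append-only table where every post and every edit is a new row, so the board could reconstruct what a comment said at any moment in time.

    ```python
    import sqlite3

    # Hypothetical append-only audit log: edits never overwrite a row,
    # they just add a newer one for the same post_id.
    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE post_log (
            entry_id  INTEGER PRIMARY KEY,
            post_id   INTEGER NOT NULL,
            author    TEXT    NOT NULL,
            body      TEXT    NOT NULL,
            logged_at TEXT    NOT NULL   -- ISO 8601 timestamp
        )
    """)

    def record(post_id, author, body, logged_at):
        """Log a new post or an edit as a fresh row."""
        conn.execute(
            "INSERT INTO post_log (post_id, author, body, logged_at) "
            "VALUES (?, ?, ?, ?)",
            (post_id, author, body, logged_at),
        )

    def body_as_of(post_id, logged_at):
        """Return the text of the post as it stood at the given time."""
        row = conn.execute(
            "SELECT body FROM post_log WHERE post_id = ? AND logged_at <= ? "
            "ORDER BY logged_at DESC LIMIT 1",
            (post_id, logged_at),
        ).fetchone()
        return row[0] if row else None

    # A comment is posted, then edited an hour later.
    record(1, "anon", "original nasty comment", "2008-06-01T12:00")
    record(1, "anon", "edited, benign comment", "2008-06-01T13:00")

    print(body_as_of(1, "2008-06-01T12:30"))  # prints "original nasty comment"
    ```

    Even this toy version hints at the burden: every edit multiplies storage, and the schema above ignores deletions, shared accounts, and clock skew, all of which a real board would have to handle.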

  2. A good point. I was considering content as static.

    However, I think that if it *does* change, then that is ostensibly what the plaintiff would be seeking in the first place.

    I don’t think that mandating audit logs is a good idea. That would crush internet discussion boards, if not other sites.

  3. Mr. Hands says:

    I think, though, that edits create a problem. What if, every time the plaintiff commenced an action, the poster just changed or deleted the comment, but then posted a slightly altered version? The plaintiff would then be back to square one.

    Also, how would it work if the plaintiff won, the website removed the comment, and the anonymous poster then reposted a slightly altered version? Would the plaintiff have to go back to court? Without actual damages against the site, it could be a never-ending cycle.

  4. 12XU says:

    I like the idea. It’s an interesting place to start, but in my opinion, it lacks the teeth necessary to really fix CDA 230. An in rem suit for a declaratory judgment will almost certainly be too financially burdensome for most people to pursue. Maybe add that if the plaintiff prevails, the web admins would be liable for court costs and attorney fees. The way it is now, admins have no incentive to respond to plaintiffs’ requests. They can just sit back and wait for the plaintiff to file suit for a declaratory judgment. If they knew they would have to pay the plaintiff’s litigation expenses, they might be inclined to weigh the merits of the case on their own before litigation ensues.

    In my opinion, the problem with CDA 230 is that it allows web administrators to turn a blind eye to defamation and leaves those defamed no recourse short of going to court. If you provide a haven for trolls to trash and harass non-public figures, and you fail to remove offensive and false content when the subject of the harassment asks you to, you deserve to be sued. Perhaps if web admins knew they could be sued for monetary damages before the plaintiff had to obtain a declaratory judgment, they would have an incentive to respond more diligently to complaints.