“To the Feds, I'll keep this short, because I do respect what you do for our country. To save you a lengthy investigation, I state plainly that I wasn't working with anyone. This was fairly trivial: some elementary social engineering, basic CAD, a lot of patience. The spiral notebook, if present, has some straggling notes and To Do lists that illuminate the gist of it. My tech is pretty locked down because I work in engineering so probably not much info there. I do apologize for any strife of traumas but it had to be done. Frankly, these parasites simply had it coming. A reminder: the US has the #1 most expensive healthcare system in the world, yet we rank roughly #42 in life expectancy. United is the [indecipherable] largest company in the US by market cap, behind only Apple, Google, Walmart. It has grown and grown, but as our life expectancy? No the reality is, these [indecipherable] have simply gotten too powerful, and they continue to abuse our country for immense profit because the American public has allwed them to get away with it. Obviously the problem is more complex, but I do not have space, and frankly I do not pretend to be the most qualified person to lay out the full argument. But many have illuminated the corruption and greed (e.g.: Rosenthal, Moore), decades ago and the problems simply remain. It is not an issue of awareness at this point, but clearly power games at play. Evidently I am the first to face it with such brutal honesty.”
Post got removed in .world for not being a "news source," even though Klippenstein is a well-established independent journalist, so I'm trying again here I guess.
Oh, the Russians do that all the time and get rumbled for it, because news sites have an accountability level that blog sites do not.
See:
https://www.npr.org/2024/06/06/g-s1-2965/russia-propaganda-deepfakes-sham-websites-social-media-ukraine
https://www.cybercom.mil/Media/News/Article/3895345/russian-disinformation-campaign-doppelgnger-unmasked-a-web-of-deception/
Exactly, so if one of those articles was posted, how would you tell it was disinformation? You'd look at the article, note the name of the outlet/website, and Google it. It would either turn up results saying it's a Russian disinformation campaign, or no results at all if the site was just created and hasn't been reported on yet.
Now imagine the same scenario, but it's a link to a substack-based article. In order to check whether it was disinformation, you'd look up the name of the outlet it claims to be, and it would either turn up results about it being misinformation or have no results about it online.
In either case, the effort to check whether it's disinfo is basically identical.
If instead of straight-up disinfo you're worried about too many blogs being posted that aren't news, then all you'd need to do to check whether something is news is read a bit of the linked article, the same as if you wanted to check whether a random NYT article, for example, was an opinion piece.
So again, my real question is what about substack specifically makes the actual process of moderation more difficult?
If a substack article is posted, it's not too hard to verify whether it's legit, and you can even be stricter about what constitutes a valid substack link compared to what constitutes a valid "regular" news link, which I think makes sense to do. The number of substack articles posted doesn't really seem like an issue either, since, like I said, barely any seem to be posted and removed each week. And either way, if a substack blog is posted, you either recognize the URL, in which case you should also know whether it belongs to a blog or to actual reporting that just happens to use substack, or you don't recognize it and need to open the link to check anyway, so why not spend maybe an extra minute seeing if it's legit first?
In most cases, it's easy enough to spot the disinfo with a simple google search or a domain registry check.
We had one in World that was an African news site, and my initial reaction was "Oh, cool, we don't have enough African representation!"
But then looking at it, it was a TOTAL cipher. No history, nobody linking to it, nothing.
But the weird part was, the news DID check out. It was legit, verifiable reporting.
It was only when I searched exact phrases that I saw they were just copy/pasting from other news sites with zero attribution.
But again, that's my point. The amount of effort you had to put into determining whether the news source was valid was fairly high in the case of the African news site. But if it had been published on substack instead, the amount of effort would be exactly the same: you'd still need to look up the site and see that it had no history, and you'd still need to search the exact phrases and see that they were copy-pasted from other articles. Nothing about that site would have been any different in terms of moderation if it were substack-based.
And like you said, in most cases it's easy enough to spot disinfo with a google search or two, or by checking the domain. But that would be true with substack too: you could do the exact same checks you do for those sites. Something like kenklippenstein.com is a unique domain and should check out if you look it up in the domain registry. And if you google his name, his Wikipedia article will show up and confirm he is a reputable independent journalist who posts on his substack page.
So if you're willing to expend that effort on moderating other sites, blocking substack specifically is nonsensical imo. You've already shown that the amount of work you're willing to put into verifying news sites that were previously unknown to you is fairly high, which is good; I respect that you want to thoroughly investigate a site before declaring it unreliable. But if that's already the acceptable threshold of effort, why is substack different?
Whether an article is on substack or not, the process of checking it is the same. You can do a domain registry check, you can google the author and the name of the publication, you can paste segments of the article into google to see if they're stolen. Nothing about the article being published on substack changes the moderation workload compared to any other site.
Like I said, my core question is: what about substack specifically makes the actual process of moderation more difficult? That's the part of your reasoning behind the ban that I don't understand. All the examples of moderation you've given me so far just seem to reinforce my argument that banning substack is illogical, and that allowing it would not have a noticeable effect on moderation while letting a wider variety of sources and independent journalists be shared.