Yeah, could be. I'd like to see funding for the CBC maintained, and I think it's so valuable to have a national broadcaster. At the same time, can I take a moment to bellyache about CBC Radio? Some of their programming makes me a bit sad. Like having a show that simply repeats podcasts produced by other people, occasionally about American news stories. There's no reason for that to be on the airwaves in 2023. Programs like The Sunday Edition and Writers & Company are gone, and it seems like a lot of the spoken-word programs are in the weeds on fringe issues. It's just not the first preset on my dial anymore. And it's too bad, because I used to really look forward to tuning in. Maybe radio is dying, I don't know.
voluble
they always seem a bit too reluctant to lead
I know it's a cliche, but I wonder if that's a 'Canadian' thing.
Good point & thanks for the post.
Won't different media companies have different red lines for what gets blacklisted and what doesn't? Wouldn't that be at best confusing and at worst a political quagmire? Let's say (after the fediverse gains some momentum) an influential politician uses a self-hosted instance to exclusively communicate their policies and as a home for their political base, but deliberately leaves the server moderated well below an acceptable level. Are media companies obligated to defederate it? Will they? Seems like there is a whole new world of trade-offs and grey areas here.
Even if we assume troll instances are easily and effectively defederated, and can't be spun up faster than they can be collectively blocked, what stops an ocean of trolls from flooding better-known, federated instances, other than volunteer moderation?
Just want to make clear - I'm 100% an advocate for the fediverse, I'm here because I think it's awesome right now. I just worry about the chances for it to get drowned in troll/malicious/corporate material as it grows in popularity, and I'm trying to think if there are any ways to stem that tide. Seems reasonable to expect that it will start coming.
Yep, that is huge.
I didn't know about the BBC thing; that's a pretty big deal for the fediverse. A question though: in the linked BBC article, it seems like they're relying heavily on moderation coming from the home instance of anyone who replies to a BBC post. If a self-hosted troll server decides to start aggressively spamming these media accounts, or posts illegal material as replies to their posts, what could a media organization do to stop it? Is there any protection against, say, a wide network of troll servers working together?
Traditional social media at least theoretically has a better ability to shut this sort of activity down because they can see the whole picture of user activity and use algorithms to discover and ban bots. I worry that decentralization itself will become an attack vector for malicious activity.
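For what it's worth, the defederation mechanism being discussed here basically amounts to each server applying its own denylist to inbound activity, which is also why every instance can draw its red lines differently. A minimal sketch of the idea (all names and fields are hypothetical for illustration, not any real ActivityPub server's API):

```python
from urllib.parse import urlparse

# Hypothetical per-server denylist; each instance admin maintains their own.
BLOCKED_INSTANCES = {"spam.example", "trolls.example"}

def instance_of(actor_url: str) -> str:
    """Extract the host from an actor URL, e.g. 'https://spam.example/users/bot' -> 'spam.example'."""
    return urlparse(actor_url).hostname or ""

def accept_activity(activity: dict) -> bool:
    """Drop inbound activities whose actor lives on a defederated instance."""
    return instance_of(activity.get("actor", "")) not in BLOCKED_INSTANCES

# A reply from a blocked server never reaches local moderators or timelines:
assert not accept_activity({"actor": "https://spam.example/users/bot", "type": "Create"})
assert accept_activity({"actor": "https://mastodon.social/users/alice", "type": "Create"})
```

Which also illustrates the worry above: the filter only sees domains it already knows about, so freshly spun-up troll servers pass until someone adds them to the list.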
Would be curious to know your thoughts about fragmentation. I'm not an NFL guy, but I follow hockey and MMA, and I think there is a shared issue where a handful of communities crop up across multiple instances, all essentially following the same subject matter and material. Seems to me this will make for smaller, less active conversations spread around, instead of one canonical hub where fans can go for a robust community with lots of people chiming in. What do you think? Is this actually an issue? If it is, is it just a side effect of decentralization that fans will have to deal with?
Cheers, thanks for modding. I've never done it but I imagine it's hard work.
All sweeping generalizations are bad!
I'm not 'standing up' for anything in particular and I don't mean to express anything here as an outcome that I want, I'm just thinking out loud and wondering where this all goes.
I understand that you really dislike AI, and feel that what AI makes and what humans make will always and forever be categorically different in some important way. I can see where you're coming from, and I think a fruitful debate could be had there. I'm less sure than you are that AI can be tamed or bottled or destroyed. I think it's here to stay and will continue to develop whether we like the outcomes or not. As open-source AI improves and gets into the hands of the average person, I don't see how it's possible to put effective limits on this technology. Geriatric politicians won't do it; this is painfully obvious. Complaining (or advocating, which you could note I have not done here) in a small corner of an obscure comment thread on an obscure platform won't make a difference either.
I get the sense that you believe there is a moral responsibility for everybody commenting in an online forum to call for the complete destruction of AI, and anything short of that is somehow morally wrong. I don't understand that view at all. We're musing into the void here and it has absolutely no effect on what will actually occur in the AI space. I'm open to changing my mind if you have a case to make about there being some moral responsibility to wave the flag that you want to wave, on an online forum, and that wondering aloud is somehow impermissible.
Sure, and I think the kinds of things that you mention might come to pass. But for the record I didn't say that I thought it was good. It's just a direction I think these things could go. There's no putting this genie back in the bottle. The view that AI will remain in the background, or merely solve problems that we already have solutions for, or cannot possibly bear on the character and influence of human creativity, I think underestimates the possibilities for change that this still very young technology could bring. That's all I'm saying, sorry if that wasn't clear.
"Thousands of customers were told there is nothing wrong with their car" by advisers who had never run diagnostics, Reuters quoted a source as saying. Advisers offered tips to customers on how to increase their mileage by changing driving habits
Holy shit, can you imagine being told this by your vehicle manufacturer when requesting a service appointment?
I kind of agree with your premise about education vs training. But, I know someone who just finished a PhD thesis on an aspect of social media. These people are out there & could probably establish rapport without much friction. And at the end of the day, I'd say it's better if these projects are spearheaded by people who have some wisdom about the waters that we're going out into. Though, yeah, I share your pessimism more generally.