rook

joined 1 year ago
[–] [email protected] 9 points 2 months ago

We already have ghostwriters. It should be straightforward to conjure up an AI-generated face that looks nicer than regular human authors and never says anything awkward, then have a textual genAI system where you feed a few cheap ghostwritten works into one end and it regurgitates a franchise of arbitrary length, mimicking the style of someone who will have difficulty challenging the publisher in court. Bring in a new human ghostwriter every now and then to freshen up the training data if need be. You might still need to employ some editors, but rebrand them as “prompt refiners” and give them shittier contracts.

Honestly, stuff like the MCU could be run like this already for all I know, and if it isn’t, I wonder how long it would take for someone to notice if they switched to this model?

[–] [email protected] 17 points 2 months ago

You would choose your nationality like you choose your broadband provider. You would become a citizen of the franchised cyber statelet of your choice.

Ahh, I can’t wait.

Notification of planned maintenance

Rule of law will be suspended between midnight and 6am Pacific time to upgrade the constitution. We apologise for any inconvenience or loss of life.
[–] [email protected] 11 points 3 months ago (3 children)

I am horrified and somewhat embarrassed that I understood what all that stuff was.

[–] [email protected] 12 points 3 months ago

It has been suggested, either on this site or by people who pop up here a lot, that the idiosyncratic (e.g. Fucking Weird) design of Hoon and Nock was a deliberate attempt to build something akin to cult mysteries, where not just anyone could grasp it and the initiates had powers that the ignorant outsiders would not, etc etc.

Unfortunately, whilst he’s clearly not stupid, Yarvin isn’t nearly as clever as he thinks he is, and has ended up producing a load of unwieldy cryptic nonsense that no one can work with. I expect this applies to other things he does, too.

[–] [email protected] 7 points 3 months ago (6 children)

Valsorda was on Mastodon for a bit (in ‘22 maybe?) and was quite keen on it, but left after a bunch of people got really pissy at him over one of his projects. I can’t actually recall what it even was, but his argument was that people posted stuff publicly on Mastodon, so he should be able to do what he liked with those posts even if they asked him not to. I can see why he might not have a problem with LLMs.

Anyone remember what he was actually doing? Text search or network tracing or something else?

[–] [email protected] 16 points 3 months ago

One to keep an eye on… you might all know this already, but apparently Mozilla has an “add AI chatbot to sidebar” option in Firefox Labs (https://blog.nightly.mozilla.org/2024/06/24/experimenting-with-ai-services-in-nightly/ and available in at least v130). You can currently choose from a selection of public LLM providers, similar to the search provider choice.

Clearly, Mozilla has its share of AI boosters, given that they forced “AI help” onto MDN against a significant amount of protest (see https://github.com/mdn/yari/issues/9230 from last July, for example), so I expect this stuff to proceed apace.

This is fine, because Mozilla clearly has time and money to spare with nothing else useful they could be doing, alternative browsers are readily available, and there has never been any anti-AI backlash to adding this sort of stuff to any other project.

[–] [email protected] 7 points 3 months ago (1 children)

Looking at both cohost and tumblr, I don’t think the funder has an asset that’s worth very much.

[–] [email protected] 10 points 3 months ago (9 children)

Cohost is going read-only at the end of this month, and shutting down at the end of the year: https://cohost.org/staff/post/7611443-cohost-to-shut-down

Their radical idea of building a social network that required neither VC funding nor large amounts of volunteer labour has come to a disappointing, if not entirely surprising, end. Going in without a great idea of how to monetise the thing was probably not the best strategy, as it turns out.

[–] [email protected] 7 points 3 months ago

One or more of the following:

  • they don’t bother with ai at all, but pretending they do helps with sales and marketing to the gullible
  • they have ai but it is totally shit, and they have to mechanical turk everything to have a functioning system at all
  • they have shit ai, but they’re trying to make it better and the humans are there to generate test and training data annotations
[–] [email protected] 3 points 3 months ago

> When we hit AGI, if we can continue to keep open source models, it will truly take the power of the rich and put it in the hands of the common person.

Setting aside the “and then a miracle occurs” bit, this basically seems to be “rich people get to have servants and slaves… what if we democratised that?”. Maybe AGI will invent a new kind of ethics for us.

> But the rich can multiply that effort by however many people they can afford.

If the hardware to train and run what currently passes for AI was cheap and trivially replicable, Jensen Huang wouldn’t be out there signing boobs.

[–] [email protected] 12 points 3 months ago (4 children)

Sounds like he’s been huffing too much of whatever the neoreactionaries offgas. Seems to be the inevitable end result of a certain kind of techbro refusing to learn from history, and imagining themselves to be some sort of future grand vizier in the new regime…

[–] [email protected] 17 points 3 months ago (2 children)

Interview with the president of the Signal Foundation: https://www.wired.com/story/meredith-whittaker-signal/

There’s a bunch of interesting stuff in there: the observation that LLMs and the broader “AI” “industry” were made possible thanks to surveillance capitalism, but also the link between advertising and the algorithmic determination of human targets for military action, which seems obvious in retrospect but I hadn’t spotted before.

> But in 2017, I found out about the DOD contract to build AI-based drone targeting and surveillance for the US military, in the context of a war that had pioneered the signature strike.
>
> What’s a signature strike?
>
> A signature strike is effectively ad targeting but for death. So I don’t actually know who you are as a human being. All I know is that there’s a data profile that has been identified by my system that matches whatever the example data profile we could sort and compile, that we assume to be Taliban related or it’s terrorist related.
