VoterFrog

joined 1 year ago
[–] VoterFrog 1 points 23 hours ago* (last edited 23 hours ago) (1 children)

Brother, if you can't even get a sizable chunk of people to join you now, you sure as fuck aren't coming out of an armed revolution on top. There's no shortcut to going where you want to go. You gotta put in the work to convince people at the ground level.

[–] VoterFrog 7 points 1 day ago (11 children)

It's like the dumbest version of the trolley problem where the tracks are reversed. You could do nothing and people will die. Or you could pull the lever (convince a bunch of people not to vote for Harris) and a lot more people will die but, hey, at least you can say you did something.

[–] VoterFrog 15 points 1 day ago* (last edited 1 day ago) (1 children)

... any more so than society could – or should – force them to serve as a human tissue bank or to give up a kidney for the benefit of another.

This fact is why abortion restrictions are unethical, period. In no other situation do we allow the government to force a person to give up parts of their body to keep someone else alive, not even for their own child. But most people aren't ready to hear that.

[–] VoterFrog 5 points 2 weeks ago

He doesn't need a plan. Half the voters don't care if he has a plan. Plans are for Democrats.

[–] VoterFrog 4 points 2 weeks ago* (last edited 2 weeks ago)

Imagine thinking that’s a great way to convince people you’re the right person for the job…

Worse, imagine how stupid you'd have to be to actually be convinced that he's the right person for the job. And then despair, because half the voters are that fucking stupid.

[–] VoterFrog 1 points 3 weeks ago

No mention of Gemini in their blog post on SGE. And their AI principles doc says

We acknowledge that large language models (LLMs) like those that power generative AI in Search have the potential to generate responses that seem to reflect opinions or emotions, since they have been trained on language that people use to reflect the human experience. We intentionally trained the models that power SGE to refrain from reflecting a persona. It is not designed to respond in the first person, for example, and we fine-tuned the model to provide objective, neutral responses that are corroborated with web results.

So a custom model.

[–] VoterFrog 2 points 3 weeks ago* (last edited 3 weeks ago)

When you use (read, view, listen to…) copyrighted material you’re subject to the licensing rules, no matter if it’s free (as in beer) or not.

You've got that backwards. Copyright protects the owner's right to distribution. Reading, viewing, listening to a work is never copyright infringement. Which is to say that making it publicly available is the owner exercising their rights.

This means that quoting more than what’s considered fair use is a violation of the license, for instance. In practice a human would not be able to quote exactly a 1000 words document just on the first read but “AI” can, thus infringing one of the licensing clauses.

Only in very specific circumstances, with some particular coaxing, can you get an AI to do this, and only with certain works that are widely quoted throughout its training data. There may be some very small scale copyright violations that occur here, but it's largely a technical hurdle that will be overcome before long (i.e. wholesale regurgitation isn't an actual goal of AI technology).

Some licensing on copyrighted material is also explicitly forbidding to use the full content by automated systems (once they were web crawlers for search engines)

Again, copyright doesn't govern how you're allowed to view a work. robots.txt is not a legally enforceable license. At best, the website owner may be able to restrict access via computer access abuse laws, but not copyright. And it would be completely irrelevant to the question of whether or not AI can train on non-internet data sets like books, movies, etc.
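For what it's worth, robots.txt is nothing more than a plain-text request that crawlers can honor or ignore; a typical file looks like this (the bot name and path here are made up for illustration):

```text
# robots.txt -- purely advisory; nothing in it is a license term
User-agent: GPTBot
Disallow: /

User-agent: *
Disallow: /private/
```

Nothing in that format grants or withholds a copyright license. It's a courtesy convention, not a contract.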

[–] VoterFrog 0 points 3 weeks ago (2 children)

It wasn't Gemini, but the AI-generated suggestions added to the top of Google search. But that AI was specifically trained to regurgitate and reference content directly from websites, in an effort to minimize the amount of hallucinated answers.

[–] VoterFrog 3 points 3 weeks ago (2 children)

Point is that accessing a website with an adblocker has never been considered a copyright violation.

[–] VoterFrog 2 points 3 weeks ago* (last edited 3 weeks ago)

a much stronger one would be to simply note all of the works with a Creative Commons “No Derivatives” license in the training data, since it is hard to argue that the model checkpoint isn’t derived from the training data.

Not really. First of all, Creative Commons strictly loosens the copyright restrictions on a work. The strongest license is actually no explicit license, i.e. "All Rights Reserved." No derivatives is already included under full, default copyright.

Second, "derivative" has a pretty strict legal definition. It's not enough to say that the derived work was created using a protected work, or even that the derived work couldn't exist without the protected work. Some examples: create a word cloud of your favorite book, analyze the tone of a news article to help you trade stocks, produce an image containing the most prominent color in every frame of a movie, or create a search index of the words found on all websites on the internet. All of that is absolutely allowed under even the strictest of copyright protections.
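That word cloud example boils down to simple counting. A toy sketch (sample text made up, standing in for a book's full contents):

```python
from collections import Counter
import re

def word_frequencies(text, top_n=5):
    """Count word occurrences -- plain statistical analysis.

    The result contains aggregate facts about the text,
    none of its copyrightable expression.
    """
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(top_n)

# Hypothetical sample in place of an actual book.
sample = "the cat sat on the mat and the cat slept"
print(word_frequencies(sample, top_n=2))  # [('the', 3), ('cat', 2)]
```

The output is information *about* the work, not the work itself, which is exactly why nobody treats it as a derivative.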

Statistical analysis of copyrighted materials, as in training AI, easily clears that same bar.

[–] VoterFrog 1 points 3 weeks ago

Unbeelievable

[–] VoterFrog 2 points 3 weeks ago

Don't forget to include article clippings praising Trump too.

Lest anybody think this is a joke: it's not. Trump's staffers literally had to shorten his briefs and fill them with pictures and positive article clippings telling him how awesome he is.
