jeremyparker

joined 10 months ago
[–] [email protected] 7 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

Serious talk now, everyone. Jokers step aside for a moment.

If you're broke and don't mind fleecing sovcits, you can make yourself a website with made-up credentials, degrees, and publications. Then you respond to posts like that with extremely confident affirmations.

Yes, absolutely you can. The thing is, because the original Ancient Tablet law structure was never explicitly repealed, it has been used successfully in court in the USA and Australia to have suspended licenses purged from the database (any fines you may have remaining will be eliminated as well). See federal cases US vs Leewater, US vs Washington Johnson, and US vs Phil, Gill, and Jill.

If you don't have the forms and such I'd be happy to help out, email me at [email protected]

[–] [email protected] 4 points 3 weeks ago

My journey was Windows -> Ubuntu -> Mint -> Fedora -> Arch.

(Infuriatingly, i still use Windows for gaming, but nothing else.)

Did i mention that i use arch?

More importantly:

fucked up all my data with no backup.

One time i messed up a script and accidentally copied 40,000 mp3s to the same filename. 20 years of music collecting, literally going back to Napster, all gone.

Well, not completely gone. I've got everything uploaded to iBroadcast, and I'm pretty sure i can download my library. But I'm not sure i deserve to.

[–] [email protected] 9 points 3 weeks ago* (last edited 3 weeks ago)

Surely it's because they want to increase the amount they pay the musicians.

[–] [email protected] 1 points 3 weeks ago (1 children)

If you like to upload your own music (like Google music), iBroadcast is the tippy tops. You can still use bandcamp (with or without yt-dlp) for discovery, and then upload what you like to iBroadcast.

[–] [email protected] 4 points 3 weeks ago

iBroadcast is what i use. That plus rutracker and you can sail the high seas like it's 1699.

[–] [email protected] 3 points 3 weeks ago (1 children)

HTML is pretty straightforward, so just understanding the very basic stuff is probably all you need. CSS is where HTML gets any challenge it might have.

CSS is weird because it's very "easy", so "real developers" kind of object to learning it -- but the truth is, if you gave any of them a layout design, they probably couldn't build it. There are tools like Tailwind to help, but IMO Tailwind just lets you avoid learning CSS's vocabulary by replacing it with Tailwind's vocabulary.
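To make the vocabulary point concrete, here's a minimal sketch (the class name and layout are just illustrative; the utility names are from Tailwind's documented defaults):

```css
/* Plain CSS vocabulary: properties and values */
.card {
  display: flex;
  align-items: center;
  gap: 1rem;
}

/* Tailwind vocabulary: the same layout, expressed as utility
   classes directly in the markup:

   <div class="flex items-center gap-4">...</div>

   Either way, there's a vocabulary to memorize. */
```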

JavaScript, on the other hand, is a "real" programming language, though decidedly quick-n-dirtier than other languages. It lets you be a lot more sloppy. (Tbh, it's a lot more forgiving than CSS!) As a result, it lacks the elegance and control that "real developers" like -- and, as most people's first language, it lets newcomers get into bad habits. For these reasons, JavaScript is a bit derided -- but, unlike CSS, most developers can't avoid it.

There are a few key ideas in JavaScript that, once you understand them, things make a lot more sense. (I won't get into them now, since it doesn't sound like you're at the point where that kind of clarity would help, but, when you are, come on back here and make a post!)

TLDR: HTML is definitely something you can just pick up along the way. JavaScript is a real language that will take a little while to feel comfortable with, and it will take a career to master. CSS will never be easy, so don't let it hold you back.

[–] [email protected] 9 points 1 month ago (2 children)

Hi everyone, JP here. This person is making a reference to the Weird Al biopic, and if you haven't seen it, you should.

Weird Al is an incredible person and has been through so much. I had no idea what a roller coaster his life has been! I always knew he was talented but i definitely didn't know how strong he is.

His autobiography will go down in history as one of the most powerful and compelling and honest stories ever told. If you haven't seen it, you really, really should.

ITT NO SPOILERS PLS

[–] [email protected] 25 points 1 month ago (1 children)

I dated a girl named Password for a while. She was a lot older than me, she was born in the year 1234.

Anyway, @op the exact same thing happened to me. I gotta get smarter about opsec.

[–] [email protected] 3 points 1 month ago

This isn't true. AI can generate tan people if you show it the color tan and a pale person -- or green people, or purple people. That's all AI does, whether it's image or text generation -- it can create things it hasn't seen by smooshing together things it has seen.

And this is borne out in reality: AI CAN generate csam, even though it's trained on that huge image database, which is constantly scanned for illegal content.

[–] [email protected] 1 points 1 month ago (1 children)

I guess my question is, why would anyone continue to "consume" -- or create -- real csam? If fake and real are both illegal, but one involves minimal risk and 0 children, the only reason to create real csam is for the cruelty -- and while I'm sure there's a market for that, it's got to be a much smaller market. My guess is the vast majority of "consumers" of this content would opt for the fake stuff if it took some of the risk off the table.

I can't imagine a world where we didn't ban ai generated csam, like, imagine being a politician and explaining that policy to your constituents. It's just not happening. And i get the core point of that kind of legislation -- the whole concept of csam needs the aura of prosecution to keep it from being normalized -- and normalization would embolden worse crimes. But imagine if ai made real csam too much trouble to produce.

AI generated csam could put real csam out of business. If possession of fake csam had a lesser penalty than the real thing, the real stuff would be much harder to share, much less monetize. I don't think we have the data to confirm this but my guess is that most pedophiles aren't sociopaths and recognize their desires are wrong, and if you gave them a way to deal with it that didn't actually hurt chicken, that would be huge. And you could seriously throw the book at anyone still going after the real thing when ai content exists.

Obviously that was supposed to be children not chicken but my phone preferred chicken and I'm leaving it.

[–] [email protected] 3 points 1 month ago (4 children)

Follow up question -- I'm not OP but I'm another not-really-new developer (5 years professional xp) that has 0 experience working with others:

I have trouble understanding where to land on the spectrum between "light touch" and "doing a really good job". TL;DR: how should a contributor gauge whether to make big changes to "do it right", or to do it a little hacky just to get the job done?

For example, I wanted to make a dark mode for a site i use, so i pulled the site's repo down and got into it.

The CSS was a mess. I've done dark modes for a bunch of my own projects, and I basically just assign variables (--foreground-color, --background-color), and then swap their assignments based on the presence or absence of a ".dark-mode" class on the body tag.
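The variable-swap approach above, as a minimal sketch (the property names and palette values here are just placeholders):

```css
/* Default (light) palette lives in custom properties */
:root {
  --foreground-color: #222;
  --background-color: #fff;
}

/* Flipped when .dark-mode is present on <body> */
body.dark-mode {
  --foreground-color: #eee;
  --background-color: #111;
}

/* Everything else only ever references the variables,
   so toggling the class restyles the whole page */
body {
  color: var(--foreground-color);
  background-color: var(--background-color);
}
```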

But the site had like 30 shades of every color, like, imperceptibly different shades of red or green. My guess was the person used a color picker and just eyeballed it.

If the site was mine, I would normalize them all, but there was such a range -- some shades being more than 10-15% different from each other -- that i tried to strike a balance in my normalization. I felt unsure whether the palette came from someone who just doesn't give a crap about color/CSS, or whether it was carefully considered color selection.

My PR wasn't accepted (though the devs had said in discord that i could/should submit a PR for it). I don't mind that it wasn't accepted, but i just don't know why. I don't want to accidentally step on toes or to violate dev culture norms.

[–] [email protected] 31 points 1 month ago* (last edited 1 month ago)

Me: Oh, I get it, this "Lemmy" website -- it's like The Onion but for nerds?

My fellow lemmings: No, they're serious. run0 is real.

Me: Hah. The Onion, but for nerds! I love it.

 

Let's talk about HTML's srcset attribute and CSS's image-set() function. In both, you can define which image the browser loads using 1x, 2x, 3x, etc. These refer to pixel density. (In the case of srcset, you can use width descriptors too, which sidesteps the issue I'm going to talk about -- but it still occurs in image-set, and srcset is still weird to me even if you can sidestep it.)

So, assuming, say, a 20" monitor with 1080p resolution is 1x, then a 10" screen with 1080p would be, technically, 2x - though, in the real world, it's more like a 6" screen has a 1000x2500 resolution - so, I don't care about math, that's somewhere between 2x and 3x.

Let's imagine a set of images presented like this:

srcset="image_1000x666.webp 1x,
        image_1500x1000.webp 2x,
        image_3000x2000.webp 3x"

then an iphone 14 max (a 6"-ish screen with a 1000x2500-ish resolution, for a 2-3x pixel density), would load the 3000x2000 image, but my 27", 1440p monitor would load the 1000x666px image.

It seems intuitively backwards -- but I've confirmed it -- according to MDN, 1x = smaller image, 3x = larger image.
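For completeness, the same set written with CSS's image-set() (the selector and use as a background image are just illustrative):

```css
.hero {
  background-image: image-set(
    url("image_1000x666.webp") 1x,
    url("image_1500x1000.webp") 2x,
    url("image_3000x2000.webp") 3x
  );
}
```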

But as I understand it, an iphone 14 acts as if it's a 300x800 screen -- using the concept of "points" instead of pixels -- which, in the context of "1x" image size, makes a lot of sense. But the browser isn't reading that; all it seems to care about is how many pixels are in an inch.
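A rough sketch of that density-based selection in JavaScript -- this is my mental model, not the spec's algorithm (browsers are allowed to pick any candidate; a common strategy is the smallest density that still covers the device pixel ratio, falling back to the densest available):

```javascript
// Hypothetical model of density-descriptor selection.
// candidates: [{ url, density }], dpr: window.devicePixelRatio
function pickCandidate(candidates, dpr) {
  const sorted = [...candidates].sort((a, b) => a.density - b.density);
  // smallest density that still covers the screen's pixel density
  const match = sorted.find((c) => c.density >= dpr);
  // nothing covers it? use the densest we have
  return (match ?? sorted[sorted.length - 1]).url;
}

const set = [
  { url: "image_1000x666.webp", density: 1 },
  { url: "image_1500x1000.webp", density: 2 },
  { url: "image_3000x2000.webp", density: 3 },
];

pickCandidate(set, 1);   // 27" 1440p monitor -> "image_1000x666.webp"
pickCandidate(set, 2.6); // iphone-class screen -> "image_3000x2000.webp"
```

Which is exactly the "backwards" behavior: the physically bigger desktop monitor gets the smaller file.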

I made a little page to demonstrate the issue, tho I acknowledge it's not hugely helpful, since, other than using your actual eyeballs, it's hard to tell which image is loaded in the srcset example -- but take a look if you want.

https://germyparker.github.io/image-srcset-example/
