theherk

joined 1 year ago
[–] theherk 2 points 5 days ago

Wait, so you’re saying the trip to the national anthropology museum was for work? Maybe.

[–] theherk 18 points 5 days ago (1 children)

You won’t catch me defending Apple’s repairability, but your claim that they need frequent repairs is completely untrue in my experience and that of the wider user base. In fact, their durability and longevity are very good. There are people still running around with 2014 models that have never been repaired.

I mean fuck their anti right to repair bullshit, but the machines are good.

[–] theherk 1 points 5 days ago (2 children)

Would she not be considered a tourist for her trip to Mexico City?

[–] theherk 17 points 5 days ago (3 children)

Yeah those MacBooks are really known for their poor build quality and terrible efficiency.

[–] theherk 71 points 1 week ago (2 children)

It isn’t like I’m not willing to pay. My NAS setup wasn’t exactly cheap. But the user experience is just incredible. I had Netflix for ten years, and several others for some time. The experience is just better. Watching whatever I want synchronized with my wife across devices of any type is superb. Who else offers that?

[–] theherk 2 points 2 weeks ago (1 children)

For what it’s worth, in spite of my poor choice of words and general ignorance on many topics, I agree with everything you said here, and find these fascinating topics. Especially that of our microbiome, which I think by mass is larger than our brains; so who’s really doing the thinking around here?

[–] theherk 3 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

I'm slightly confused. Which part needs an academic paper? I've made three admittedly reductive claims.

  • Human brains are neural networks.
  • Their outputs are based on training data built from reinforcement.
  • We have a much more massive model than current artificial networks.

First, I'm not trying to make some really clever statement. I'm just saying there is a perspective from which the human brain can be described in similar terms. Nevertheless, let's look at the only three assertions I make here. Given that the term neural network takes its name from the neurons that make up brains, I assume you don't take issue with the first. On the second point, I don't know if linking to scholarly research is helpful. Is it not well established that animals learn using reward circuitry, like the role of dopamine in neuromodulation? We also have... education, where we are fed information so that we retain it and can recount it down the road.

I guess maybe it is worth exploring the third, even though I really wasn't intending to make a scholarly statement. Here is an article in Scientific American that puts the number of neural connections around 100 trillion. How that equates directly to model parameters is absolutely unclear, but even if you take glial cells, where the count can be as low as 40-130 billion according to "The search for true numbers of neurons and glial cells in the human brain: A review of 150 years of cell counting," that number is in the same order of magnitude as current models' parameter counts. So if your issue is that AI models are actually larger than the human brain, maybe there is something cogent there. But given that there is likely at least a 1000:1 ratio of neural connections to neurons, I just don't think that holds up.
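To make the order-of-magnitude comparison concrete, here's a rough back-of-envelope sketch in Python. The model parameter count is an assumed round figure for illustration, not a measurement:

```python
import math

# Back-of-envelope check of the figures cited above. All counts are rough,
# and the model parameter count is an assumed round number for illustration.
connections = 100e12   # ~100 trillion neural connections (Scientific American figure)
glia_low = 40e9        # low end of the glial-cell range from the cited review
model_params = 100e9   # hypothetical parameter count for a large current model

# Glial count and model parameters sit within one order of magnitude of each other...
assert 0 < math.log10(model_params / glia_low) < 1

# ...but neural connections outnumber such a model's parameters by about 1000:1.
print(connections / model_params)  # 1000.0
```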

[–] theherk 1 points 2 weeks ago (1 children)

Fair enough, but it does somewhat undercut your message that every model I’ve tested, including quite old ones, answers this question correctly on the first try. This image is ChatGPT-4o.

[–] theherk 0 points 2 weeks ago (1 children)

Humans do the same thing. Have you heard of religion?

[–] theherk -2 points 2 weeks ago (15 children)

A human could be described in very similar terms. People think we’re magic or something, but we too are just a weighted neural network assembling outputs based strictly on training data built from reinforcement. We are just, for the moment, much, much better, with massive models. Of course that is reductive, but many seem to forget that brains suffer similarly when outside their training data.

[–] theherk 0 points 2 weeks ago (3 children)

Can you show the question you asked that led to this, and which model was used? I just tested several models, even slightly older ones, and they all answered precisely. Of course, if you follow up and tell it the right answer is wrong you can make it say stuff like this, but not one got it wrong out of the gate.

[–] theherk 3 points 2 weeks ago
 

When you copy a URL for sharing in YouTube, it now adds a query parameter, so=blah, for tracking the source. This removes that. It could of course be smarter and stop at either the end of the string or the next parameter, but since I haven’t seen any extras, I just remove everything after it.
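A smarter variant that drops only the tracking parameter and keeps any others could be sketched with Python's urllib (the parameter name `so` is taken from the comment above; the function name is hypothetical):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def strip_tracking(url: str, param: str = "so") -> str:
    """Remove a single tracking query parameter, preserving all others."""
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k != param]
    return urlunsplit(parts._replace(query=urlencode(query)))

print(strip_tracking("https://www.youtube.com/watch?v=abc&so=xyz"))
# https://www.youtube.com/watch?v=abc
```

This way a video ID or timestamp parameter survives even if YouTube ever appends extras after the tracking one.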

 

I am especially interested in the initial migrations into the Americas 15,000+ years ago, but our community is small and my interests large, so... any great documentaries are welcome.

 

Please, sincerely, from the bottom of my heart, allow us to disable this chapter-skipping feature (the one where tapping left or right brings up the scrubber, then double-tapping the other direction skips some unit of time, 10 seconds by default, because apparently 100% of people want that). This ends up feeling random and is just vexing.

It is the worst feature added to any software, maybe ever in the history of computing. How many hours are wasted trying to figure out where one was in a video? How much power and network bandwidth is consumed fighting a feature that I’ve not seen a single comment online from anybody claiming to benefit from, ever?

This feature adds to human suffering by wasting energy and damaging people psychologically. Please, go look online, and consider castigating the creator of this feature in the public square. And then take a good hard look at yourself for not stopping this evil from being added in the first place.

Yours aye, Sane People

 

I’m curious if they have made any public statements on the topic. Now that the deprecation of MV2 is back on a schedule, a lot of Chromium forks will be affected by the change.

I’m a huge FLOSS and Firefox fan, but Arc’s UX is unparalleled in my view, and I’ve switched for the time being.

I can’t find anything on their website, YouTube, or Discord that makes a firm statement on the topic, but it would be very reassuring if they would or have.

 

There are currently several applications available for iOS to access Lemmy instances, each with its own benefits and drawbacks. I love Voyager (or wefwef, as I still like to call it), but even the installed app is, I believe, just a repackaged PWA. So I’ve been looking at alternatives ranging from PWAs to native Swift implementations. The ones I’ve checked out so far are:

  • Avelon
  • Bean
  • Mlem
  • Memmy
  • Voyager / vger.app

I know Lemma is forthcoming, also.

I’m wondering what others’ current preferences are, including values like price, license, governance, and features.

It feels to me like the days before Apollo arose, when there were many great Reddit apps but none that stood head and shoulders above the rest. Does anybody feel an app shines to that degree yet, as Apollo did once it hit the scene?

114
Midden heap (lemmy.world)
submitted 11 months ago by theherk to c/reddit
 
261
Nyhavn (self.pics)
submitted 11 months ago by theherk to c/pics