Hopps

joined 2 years ago
[–] Hopps 17 points 2 years ago (3 children)

That's a bit much, I'm not looking to get into a relationship here. /s

[–] Hopps 14 points 2 years ago* (last edited 2 years ago)

Can we just take a moment to acknowledge how far away this horse actually was? During the GameCube era, video game studios weren't afraid to say they would put a horse far away in a game and actually deliver on it.

These days it's all half-promises, with pre-order trailers hinting at faraway horses. But when it's time to deliver, all we get is 'our servers were overwhelmed by the huge number of players logging in, so we have to temporarily put the horse right in front of your face - we promise we will fix it soon!'

The fix never comes, the horse is never far away.

I miss the old days; it was a different time back then.

[–] Hopps 3 points 2 years ago

Based on my research, there is an emerging interest in the field of meta-learning, or "learning to learn." Some researchers are exploring the concept of allowing neural networks to learn their own hyperparameters, which could include parameters of activation functions. However, it's my understanding that this approach could lead to more complex training processes and risks such as unstable gradients, and it might not always result in significantly better performance.

While activation functions with learnable parameters aren't commonly used, there is ongoing research that explores them. One such example is the Parametric ReLU (PReLU) function - a variant of the ReLU activation function that allows the negative slope to be learned during training, as opposed to being a predetermined hyperparameter.

In my opinion, if you're new to this field, it's essential to grasp the basics of neural networks, including understanding how common activation functions like ReLU, sigmoid, tanh, etc., operate. These advanced concepts are undoubtedly fascinating and might offer incremental improvements, but even most of today's state-of-the-art models primarily use these "standard" activation functions. So, starting with a solid foundation is key.
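
For anyone curious what a learnable activation actually looks like in code, here's a minimal sketch (assuming PyTorch; the layer sizes are placeholders I made up):

```python
import torch
import torch.nn as nn

# nn.PReLU learns the negative-side slope a in f(x) = max(0, x) + a * min(0, x)
# alongside the network's weights, instead of fixing it as a hyperparameter.
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.PReLU(),          # one learnable slope, initialized to 0.25 by default
    nn.Linear(32, 1),
)

out = model(torch.randn(8, 16))

# The slope is an ordinary parameter, so the optimizer updates it during training
# just like any weight.
print(model[1].weight)  # roughly tensor([0.2500], ...) before any training
```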

[–] Hopps 5 points 2 years ago (1 children)

It's okay, I thought they were mini-lightsabers

[–] Hopps 1 points 2 years ago

That's great news!

[–] Hopps 2 points 2 years ago (1 children)

Interesting point on the human parallel.

[–] Hopps 4 points 2 years ago* (last edited 2 years ago)

The plugin you're mentioning is based on dislikes, and yes, it's very inaccurate. The one I mentioned works off the ratio of likes to view count, so the accuracy holds up; it's just a different way of going about it.

I agree that YouTube needs to bring the dislike count back. It's a pain having to find these alternative ways to tell whether a video is good when the data is right there. It's so greedy of them, outright harming the user experience for profit.

[–] Hopps 2 points 2 years ago (1 children)

I second that, it's been very useful for coding/debugging for me too. And the cool part is that it's only going to get better.

[–] Hopps 3 points 2 years ago (1 children)

I have been using GPT-4 as a Google replacement and it's been working out fairly well.

[–] Hopps 10 points 2 years ago (2 children)

Ever since dislikes were removed, I've been using a plugin that shows the ratio of likes to views to decide whether a video is worth watching.

Most of the time, if the likes-to-views ratio is >= 2%, it's an okay vid. For illustration, the check is literally just this (see the snippet below).
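
```python
likes, views = 2_400, 100_000    # made-up example numbers
ratio = likes / views            # 0.024
print(f"{ratio:.1%}")            # 2.4% -> above the 2% bar, probably an okay vid
```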

[–] Hopps 8 points 2 years ago (1 children)

Jerboa needs this added

[–] Hopps 1 points 2 years ago

I love this, thank you!
