RandomCanuck

joined 1 year ago
 

A parliamentary petition has been started to try to encourage the Canadian Government to join the Fediverse. If you want to support this petition, follow the link to Chris Alameny’s post where you’ll find links to both French and English versions of the petition. You have to be a Canadian citizen to sign the petition. We only need a few more signatures to get over the 500 needed. We’d love to see every Province and Territory represented.

[–] [email protected] 1 points 1 year ago

Nothing. Waterloo is a fine place. I lived there for quite a while. But, I’ve been in Kitchener for 30 years, so when I found the “official” Kitchener page, I followed it. Just thought I’d wave to see if anyone else was here.

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago) (1 children)

My deepest apologies for the typo. Your policing is deeply appreciated.

 

Great essay by Scrimshaw on the state of play in the OLP. It’s likely they won’t survive another election if they continue with their internal delusions.

 

I always enjoy Brittlestar’s stuff, but this week’s essay seems particularly apropos.

[–] [email protected] 2 points 1 year ago (2 children)

Ya just gotta wonder how the cetacean one got on the books in the first place. I mean, was somebody running around randomly impregnating whales with stolen sperm at some point?

[–] [email protected] 3 points 1 year ago

Interesting paper that points out the likelihood that extensive use of generative AI on the web will eventually cause these models to collapse. The likely outcome is even wilder hallucinations and, eventually, gibberish.

 

In this paper, we show that Generative Adversarial Networks (GANs) suffer from catastrophic forgetting even when they are trained to approximate a single target distribution. We show that GAN training is a continual learning problem in which the sequence of changing model distributions is the sequence of tasks to the discriminator. The level of mismatch between tasks in the sequence determines the level of forgetting. Catastrophic forgetting is interrelated to mode collapse and can make the training of GANs non-convergent. We investigate the landscape of the discriminator's output in different variants of GANs and find that when a GAN converges to a good equilibrium, real training datapoints are wide local maxima of the discriminator. We empirically show the relationship between the sharpness of local maxima and mode collapse and generalization in GANs. We show how catastrophic forgetting prevents the discriminator from making real datapoints local maxima, and thus causes non-convergence. Finally, we study methods for preventing catastrophic forgetting in GANs.
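The abstract’s framing — the discriminator faces a *sequence* of changing generator distributions, and forgets earlier ones — can be illustrated with a toy. The sketch below is my own minimal illustration, not the paper’s code: a logistic-regression “discriminator” in 1D is trained to reject fakes from one distribution, then retrained against a shifted distribution, and its accuracy on the old fakes collapses. All names and the shifted-Gaussian setup are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_discriminator(w, b, real, fake, lr=0.1, steps=500):
    # Logistic-regression "discriminator": D(x) = sigmoid(w*x + b),
    # trained to output 1 on real samples and 0 on fake ones.
    for _ in range(steps):
        for x, y in ((real, 1.0), (fake, 0.0)):
            p = sigmoid(w * x + b)
            grad = p - y                      # dBCE/dlogit
            w -= lr * np.mean(grad * x)
            b -= lr * np.mean(grad)
    return w, b

def accuracy_on_fakes(w, b, fake):
    # Fraction of fake samples the discriminator correctly rejects.
    return float(np.mean(sigmoid(w * fake + b) < 0.5))

real = rng.normal(0.0, 1.0, 200)       # fixed "real" data distribution
task0 = rng.normal(-4.0, 1.0, 200)     # generator output early in training
task1 = rng.normal(+4.0, 1.0, 200)     # generator output later: a new "task"

w, b = 0.0, 0.0
w, b = train_discriminator(w, b, real, task0)
acc_before = accuracy_on_fakes(w, b, task0)    # near 1.0: old fakes rejected

w, b = train_discriminator(w, b, real, task1)  # distribution has shifted
acc_after = accuracy_on_fakes(w, b, task0)     # old fakes now "forgotten"

print(acc_before, acc_after)
```

The mismatch between the two tasks (means at -4 vs +4) is deliberately large, matching the paper’s point that the level of mismatch between tasks in the sequence determines the level of forgetting.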

 

Here’s the state of the art in AI, according to Stanford.

 

Generally, it seems AI experts are divided about how close we are to developing an AGI, and about whether any of this might take us toward an extinction-level event. On the whole, they seem less inclined to think that AI will kill us all. Maybe.

[–] [email protected] 1 points 1 year ago

What about the Regions, like York Region, Waterloo Region, etc.? Would it make sense to have Lemmy communities for them, too?

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

I'm reading about applications for AI in safety-related control systems for machinery. Finding clear guidance on risk assessment for AI systems has been quite difficult. I’d like to talk to anyone who has experience in this area.

 

Hi all, new to Lemmy but a KW resident for more than 30 years. Looking forward to seeing some stuff going on here.