this post was submitted on 12 Jun 2023

Machine Learning


This is an effort to get some discussion going.

I remember starting grad school and coming across Reddit posts with themes like, "What research area will be hot in the next 10 years?" In retrospect, the comments there were not very well informed (lots of talk of graphical models and Bayesian nonparametrics). But the heart of those posts was people talking about research areas they found exciting.

So, tell us what research area is currently exciting to you. Are you starting a new job, project, or graduate program to work on it?

top 11 comments
[–] radical_action 3 points 1 year ago (2 children)

I am excited about continual reinforcement learning (RL). When I first learned about RL, I thought it was too general for its own good. And yet, continual learning lies outside the scope of current fundamental RL research. It's an exciting time because very little is understood about how reinforcement learning methods work with neural networks, even on simple problems. Yet many interesting problems require not just RL but continual learning, either because the environment is changing in some unknown way or because it involves interaction with a human. We are still at the very early stages, but I expect there to be synergy with current developments like LLMs.
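To make "continual" concrete, here's a minimal, hypothetical sketch (plain NumPy, everything made up): an agent tracking a bandit whose reward means drift over time. The constant step size is what lets it keep adapting instead of converging and freezing.

```python
# Toy continual RL setting: a bandit whose reward means drift,
# so the agent must keep adapting forever. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_arms = 5
true_means = rng.uniform(0, 1, n_arms)   # hidden, drifting reward means
q = np.zeros(n_arms)                     # agent's value estimates
alpha = 0.1                              # constant step size: old data decays,
                                         # which is what lets the agent track change

for t in range(10_000):
    # Non-stationarity: the environment drifts in a way unknown to the agent.
    true_means += rng.normal(0, 0.01, n_arms)

    # Epsilon-greedy action selection.
    a = rng.integers(n_arms) if rng.random() < 0.1 else int(np.argmax(q))
    reward = rng.normal(true_means[a], 0.1)

    # Constant-alpha update (a 1/N average would eventually stop adapting).
    q[a] += alpha * (reward - q[a])
```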

[–] jaded79 2 points 1 year ago

Yeah, the sci-fi dream of robots you can talk to is why I'm most excited about LLMs and RL.

[–] [email protected] 1 points 1 year ago

Do you happen to have a good source to read up on continual RL that you can recommend? I am not familiar with this use case for RL.

[–] [email protected] 2 points 1 year ago

I'm looking forward to seeing the evolution of energy-based models, and I would like to see how semantic knowledge (in the form of graph embeddings or some other tool) might interact with Transformer models to inject higher-order information into text.
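For illustration, here's a toy sketch of one way that interaction could look; all shapes, IDs, and the entity-linking step are assumptions, not any particular paper's method. The idea: add a knowledge-graph entity embedding to the token embedding wherever a token links to an entity, then feed the enriched sequence into a standard encoder.

```python
# Hypothetical sketch: injecting graph-derived semantic knowledge into a
# Transformer by summing entity embeddings into the token embeddings.
import torch
import torch.nn as nn

vocab_size, d_model, n_entities = 30_000, 256, 10_000
tok_emb = nn.Embedding(vocab_size, d_model)
ent_emb = nn.Embedding(n_entities + 1, d_model, padding_idx=0)  # 0 = "no entity"

token_ids  = torch.tensor([[17, 942, 5, 68]])   # a toy sentence
entity_ids = torch.tensor([[0, 312, 0, 0]])     # token 1 links to KG entity 312

# Fuse the two sources; the enriched sequence feeds a standard encoder.
x = tok_emb(token_ids) + ent_emb(entity_ids)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True),
    num_layers=2,
)
out = encoder(x)  # shape (1, 4, 256)
```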

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago) (1 children)

I had the pleasure of conducting research into self-supervised learning (SSL) for computer vision.

What stood out to me was the simplicity of the SSL algorithms combined with the astonishing performance of the self-supervised pre-trained models after supervised fine-tuning.

Also the fact that SSL works across tasks and domains, e.g., text generation, image generation, semantic segmentation...
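To show what I mean by simplicity, here's a stripped-down, SimCLR-style contrastive loss. This is a simplified InfoNCE (the real NT-Xent also contrasts pairs within each view); z1 and z2 would be projection-head outputs for two augmentations of the same image batch, random placeholders here.

```python
# Simplified SimCLR-style contrastive (InfoNCE) loss in a few lines.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature     # (N, N) cosine similarities
    targets = torch.arange(z1.size(0))     # positives sit on the diagonal
    # Pull matching views together, push everything else apart.
    return F.cross_entropy(logits, targets)

z1, z2 = torch.randn(32, 128), torch.randn(32, 128)  # stand-in embeddings
loss = info_nce(z1, z2)
```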

[–] MachinaDoctrina 1 points 1 year ago

I too believe that SSL (and to some extent unsupervised learning) is by far the best way to frame learning problems in DL. It has been shown to avoid pesky mode collapse and to improve out-of-distribution inference performance.

[–] MachinaDoctrina 2 points 1 year ago (1 children)

Graph Neural Networks are by far the coolest advance in DL architectures, and it is also quite interesting that Transformers are simply Graph Attention Networks operating on a fully connected graph.
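A rough sketch of the correspondence (dot-product attention, single head; GATs proper use a different scoring function, so this shows the idea rather than an exact equivalence): self-attention is message passing where the adjacency matrix is all ones.

```python
# Self-attention written as message passing over a graph: with a
# complete graph (all-ones adjacency) it reduces to standard attention.
import torch
import torch.nn.functional as F

def attention_as_message_passing(x, Wq, Wk, Wv, adj):
    # x: (n_nodes, d); adj: (n_nodes, n_nodes), 1 where an edge exists.
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = (q @ k.t()) / (k.size(1) ** 0.5)
    scores = scores.masked_fill(adj == 0, float('-inf'))  # attend only along edges
    return F.softmax(scores, dim=-1) @ v                  # aggregate messages

n, d = 6, 16
x = torch.randn(n, d)
Wq, Wk, Wv = (torch.randn(d, d) for _ in range(3))
full = torch.ones(n, n)   # complete graph -> ordinary self-attention
out = attention_as_message_passing(x, Wq, Wk, Wv, full)
```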

[–] kraegar 1 points 1 year ago (1 children)

I am really excited about GNNs too. I just submitted my PhD thesis on time-series forecasting and anomaly detection for LTE/5G networks, and I really think the next big jumps in that space are going to be graph-based.

My post-grad employment is in industry, and I have a sneaking suspicion that the company will have me looking at them. I am pumped.

[–] MachinaDoctrina 1 points 1 year ago (1 children)

Would be interested in reading your thesis if you're willing to link it, or, if you don't want to dox yourself, can you DM me? (No, seriously, I'm new to Lemmy; is that a thing?)

[–] kraegar 1 points 1 year ago

I would love to share it, but I am waiting for two papers based on the work to finish peer review.

Poke me in August!

[–] [email protected] 1 points 1 year ago

More of an off-and-on hobbyist when I'm not busy trying to survive, so I've still got a lot to study. For the most part, I've been enthralled by the progress we are making in the general understanding of neural functions. I feel like the more we learn in machine learning, the more we can deconstruct the mountain of data we've gathered about the brain these past couple of decades. And the more we understand that, the more we can intentionally apply or avoid in the development of neural nets.

From deconstructing the complex algorithms that allow brains and bodies to develop from a couple of cells, to understanding the absurd organic mess of processes we use to comprehend our own consciousness. A lot of people seem very excited about this problem from very different angles; I just hope they can cooperate more and argue less about how their method is the only viable one and everyone else is just wrong.

Take certain people's dismissal of probabilistic models. I think it's silly to argue that our brains definitely do not use this kind of autoregressive functioning as one piece of the puzzle. I just think our brain has many different systems working in tandem.

Sometimes we just let the brain push out words without much thought, and we might have to backtrack and correct ourselves if the wrong words come out. We often pre-load information or calculations into short-term memory before choosing to speak, and sometimes we stop mid-sentence to apply these processes. I still believe there is an element of stochastic word selection.
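As a toy illustration of that stochastic selection (made-up vocabulary and scores, not any real model): sample the next word from a temperature-scaled softmax, where the temperature trades off between near-greedy and more random choices.

```python
# Toy "stochastic word selection": sample the next token from a
# softmax over scores, with temperature controlling randomness.
import numpy as np

rng = np.random.default_rng()
vocab = ["the", "cat", "sat", "on", "a", "mat"]
scores = np.array([2.0, 1.5, 0.3, 0.1, 1.2, 0.2])  # pretend next-token logits

def sample_next(scores, temperature=0.8):
    p = np.exp(scores / temperature)
    p /= p.sum()
    # low temperature -> near-greedy; high temperature -> more random
    return rng.choice(len(scores), p=p)

print(vocab[sample_next(scores)])
```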

Yann LeCun has a really good model for developing an autonomous system, but I think he's too eager to disregard how autoregressive models could be applied within a more complex system.

Regardless, everything everyone is working on right now is exciting on every level. I can't wait to see what comes next. Any advancement, from any angle, could have profound effects on our lives at this point. Our economy has to adapt without sacrificing all of the poor, and we already have to deal with people who can't comprehend that basic LLMs aren't sentient, emotional beings.

Hopefully I can keep learning more about machine learning and neurology. I hope the Lemmy community can grow and we can see as much activity as there was on Reddit. I don't know if anyone can set up a bot like "AI lover" on /r/machinelearningnews; it was a nice feed of new and interesting papers. I just got tired of Reddit becoming progressively worse over time.
