kraegar

joined 1 year ago
[–] [email protected] 4 points 1 year ago

I feel this has been the case for longer than people think. AI/ML has been its own subspecialty of SWE for years. There is some low-hanging fruit that sklearn or copy-pasting from Stack Overflow will let you handle, but for the most part the advanced work requires professional specialization.

One thing that bothers me is that subject matter expertise is often ignored. General AI researchers can be helpful, but oftentimes having SME context AND an AI skillset will be way more valuable. For LLMs it may be fine, since they produce a generalized solution to a general problem, but application-specific tasks require relevant knowledge and an understanding of the pros/cons within the use case.

It feels like a hot take, but I think undergraduate degrees should establish a base of knowledge in a domain, with AI introduced at the graduate level. Even if you are not using the undergraduate domain knowledge directly, it should be transferable to other domains and help you understand how to solve problems with AI within the context of a professional domain.

[–] [email protected] 1 points 1 year ago (1 children)

Depending on what config data you need, environment variables might be a good idea. If all you need are server locations and credentials, they are likely your best bet.

If you need fancier structured config like JSON, loading it into a global variable is nice.
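A minimal sketch of that split, with made-up variable names (`MYAPP_*` is not a real convention from anywhere, just an example):

```python
import json
import os

# Simple settings: read straight from environment variables, with defaults.
db_host = os.environ.get("MYAPP_DB_HOST", "localhost")
db_user = os.environ.get("MYAPP_DB_USER", "app")

# Fancier structured config: a JSON blob can ride in a single variable
# and get loaded into a module-level global at startup.
CONFIG = json.loads(os.environ.get("MYAPP_CONFIG", "{}"))
```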

[–] [email protected] 1 points 1 year ago

I think this is the beauty of federation. Everything is open and free to all rather than a company being able to lock in your personally created content.

For example, I wanted to learn about NLP and am working on building a bot to monitor sentiment and check for hate speech in lemmy content. I am still at the brainstorming/research phase, but the accessibility of lemmy makes it really nice.

Pythorhead was made for this exact purpose.
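To give a flavor of the brainstorming, here is a toy version of the kind of sentiment pass such a bot might start from. The word lists and function are purely illustrative; a real bot would use a trained NLP model (and something like Pythorhead to fetch the content):

```python
# Toy lexicon-based sentiment scoring -- illustration only.
POSITIVE = {"great", "love", "nice", "helpful"}
NEGATIVE = {"hate", "awful", "terrible", "spam"}

def sentiment(text: str) -> int:
    """Crude score: count of positive words minus count of negative words."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
```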

[–] [email protected] 2 points 1 year ago

I normally try to do "fun" work. This largely depends on how autonomous your job is. I was a PhD student doing research for a company, and I received very little oversight for 3 years.

The supervision I did receive was great though. They understood the need to take a break and slow down. At those points I would generally read papers, watch PyData talks (highly recommend them, like inspirational TED talks for data people), or contribute to open source to learn about new tools or design paradigms.

[–] [email protected] 1 points 1 year ago

Welp. This is me. I spent a few hours debugging a failing test that was caused by a package update. If only I had checked the changelogs...

[–] [email protected] 5 points 1 year ago

Open source contribution can be really great. I started contributing to a Python project that I have used extensively and it 100% improved my coding. It also lets you interact with more experienced devs (depending on the project) and get feedback.

[–] [email protected] 4 points 1 year ago

This has been my experience too. A junior dev at my last company kept trying to use ChatGPT to generate docker compose files and wondered why they generally didn't work.

My research has been on time series forecasting, which is tangentially related to NLP. People are shocked when I point out to them that all these models do is predict the next token. Weather forecasting has been a good analogy for why long AI-generated texts are extra bad: weather forecasts get worse as the horizon increases, and so do autoregressive predictions.
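The compounding can be sketched with a toy autoregressive rollout (all names here are made up for illustration): each prediction feeds back in as the next input, so small per-step errors accumulate over the horizon.

```python
import random

random.seed(0)

def rollout(true_next, horizon, noise=0.05):
    """Autoregressively predict `horizon` steps ahead.

    `true_next` advances the true state; the "model" is the same function
    plus a little Gaussian noise. Because each noisy prediction is fed back
    in as input, the per-step noise compounds like a random walk.
    """
    x = 0.0      # true state
    pred = 0.0   # model's running estimate
    errors = []
    for _ in range(horizon):
        x = true_next(x)
        pred = true_next(pred) + random.gauss(0, noise)
        errors.append(abs(pred - x))
    return errors

# Errors at long horizons tend to dwarf the one-step error.
errors = rollout(lambda v: v + 1.0, horizon=50)
```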

Despite all my gripes about LLMs, I must say that Copilot has saved me from writing TONS of boilerplate code and unit tests.

[–] [email protected] 8 points 1 year ago (3 children)

I think there is definitely some echo chambering, since the average person isn't generally aware of AI. At the same time, mainstream media has been picking up the hype train a lot recently.

People hear my grad school studies involve AI/ML and I instantly get bombarded with questions about ChatGPT.

[–] [email protected] 2 points 1 year ago

My research area has been in time series forecasting and unsupervised anomaly detection, but it is SOMEWHAT related to NLP.

Papers with code had a few potential implementations: https://paperswithcode.com/paper/hyena-hierarchy-towards-larger-convolutional

I am always skeptical of papers. They could have good results, but how much did they adjust their experiment to look good on paper?

[–] [email protected] 1 points 1 year ago

I interviewed for a position that I was comfortably qualified for. As soon as they mentioned a 3-hour in-person whiteboard interview, I politely left the Zoom call.

On the flip side, one company gave me the best interview process of all time. They told you how many people were remaining in each round. The programming task was to implement a Hugging Face model as a FastAPI service. There was also a short video interview that took 5 minutes if you had basic ML knowledge. The whole thing likely took 1-2 hours tops, and it was actually fun.

[–] [email protected] 1 points 1 year ago (1 children)

I started using Poetry on a research project and was blown away by how good it is. Next week I start a new job, and I am hoping it is the standard there.
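For anyone curious, the heart of a Poetry setup is the `pyproject.toml` it generates; a minimal one looks roughly like this (project name, author, and version pins are made-up examples):

```toml
[tool.poetry]
name = "my-research-project"
version = "0.1.0"
description = ""
authors = ["Your Name <you@example.com>"]

[tool.poetry.dependencies]
python = "^3.10"
pandas = "^2.0"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
```

`poetry add <pkg>` updates this file plus the lockfile, and `poetry install` reproduces the environment elsewhere.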
