this post was submitted on 04 Sep 2024
915 points (98.2% liked)

Does AI actually help students learn? A recent experiment in a high school provides a cautionary tale. 

Researchers at the University of Pennsylvania found that Turkish high school students who had access to ChatGPT while doing practice math problems did worse on a math test compared with students who didn’t have access to ChatGPT. Those with ChatGPT solved 48 percent more of the practice problems correctly, but they ultimately scored 17 percent worse on a test of the topic that the students were learning.

A third group of students had access to a revised version of ChatGPT that functioned more like a tutor. This chatbot was programmed to provide hints without directly divulging the answer. The students who used it did spectacularly better on the practice problems, solving 127 percent more of them correctly compared with students who did their practice work without any high-tech aids. But on a test afterwards, these AI-tutored students did no better. Students who just did their practice problems the old-fashioned way — on their own — matched their test scores.

[–] [email protected] 16 points 3 months ago (7 children)

Why are you so confident that the things you are learning from AI are correct? Are you just using it to gather other sources to review by hand or are you trying to have conversations with the AI?

We've all seen AI get the correct answer while the "show your work" part is nonsense, or vice versa. How do you verify what AI outputs to you?

[–] GaMEChld 8 points 3 months ago (1 children)

You check its work. I used it to calculate efficiency in a factory game, then went through and corrected the inconsistencies I spotted. Always check its work.

[–] [email protected] 3 points 3 months ago

Exactly. It's a helpful tool but it needs to be used responsibly. Writing it off completely is as bad a take as blindly accepting everything it spits out.

[–] rainerloeten 5 points 3 months ago

I use it for explaining stuff when studying for uni and I do it like this: If I don't understand e.g. a definition, I ask an LLM to explain it, read the original definition again and see if it makes sense.

This is an informal approach, but if the definition is sufficiently complex, false answers are unlikely to lead to an understanding. Not impossible ofc, so always be wary.

For context: I'm studying computer science, so lots of math and theoretical computer science.

[–] [email protected] 3 points 3 months ago (1 children)

I'm not at all confident in the answers directly. I've gotten plenty of wrong answers from AI, and I've gotten plenty of correct ones. If anything, it's just more practice for critical thinking skills: separating what is true from what isn't.

When it comes to math, though, it's pretty straightforward. I'm just looking for context on some steps in the problems, maybe reminders of things I learned years ago and have forgotten, that sort of thing. As I said, I'm interested in actually understanding the stuff I'm learning because I'm using it for the things I'm working on, so I'm mainly reading through textbooks and using AI, as well as other online sources, to round out my understanding of the concepts. If I'm getting the right answers and the things I'm doing are working, that's a good indicator I'm on the right path.

It's not like I'm doing cutting-edge physics or medical research where mistakes could cost lives.

[–] [email protected] 1 points 3 months ago (1 children)

It's sort of like saying poppy production is pretty negative overall, but if a smart, critical person uses them sparingly and cautiously, opiates could be of great benefit to that person.

That's all well and good, but AI is not being developed to help critical thinkers research slightly more easily; it's being created to reduce the amount of money companies spend on humans.

Until regulations are in place to guide the development of the technology in useful ways, I don't know that any of it should be permitted. What's the rush, anyway?

[–] [email protected] 2 points 3 months ago (1 children)

Well, I'm definitely not pushing for more AI, and I try to stay nuanced on the topic. Like I mentioned in my first comment, I've found it to be a very helpful tool, but used in other ways it could do more harm than good. I'm not involved in making or pushing AI, but as long as it's an available tool, I'm going to use it in the most responsible way I can and talk about how I use it. I can't control what other people do, but maybe I can help people who are only using it for answer hints, like in the article, to find more useful ways of using it.

When it comes to regulation, yeah, I'm all for that. It's a sad reality that regulation always lags behind and generally doesn't get implemented until there's some sort of problem that scares the people in power, who are mostly too old to understand what's happening anyway.

And as to what's the rush: I'd say a combination of curiosity and good intentions mixed with the worst of capitalism, the carrot of financial gain for success and the stick of financial ruin for failure. I don't have a clue what percent of the pie each part makes up. I'm not saying it's a good situation, but it's the way things go, and I don't think anyone alive could stop it. Once something is out of the bag, there's no putting it back.

Basically, I'm with you that it will be used for things that make life worse for people, and that sucks. It would be great if that weren't the case, but it doesn't change the fact that I can't do anything about it. Meanwhile, it can still be a useful tool, so I'm going to use it as best I can regardless of how others use it, because there's really nothing to do except keep pushing forward, just like anyone else.

[–] [email protected] 1 points 3 months ago

It might just be a difference in perspective. I agree with your assessment of how things are, but not of how they will be in the future. There are countries that are more responsible in their research, so I know it's possible. It's all politics, and I don't believe in giving up on social change just yet.

[–] NikkiDimes 2 points 3 months ago* (last edited 3 months ago) (1 children)

I personally use its answers as a jumping-off point for my own research, or I ask it directly for sources about things and check those out. I frequently use LLMs for learning about topics, but I definitely don't take anything they say at face value.

For a personal example, I use ChatGPT as my personal Japanese tutor. I use it to discuss and break down the nuances of various words and sayings, the names of certain conjugation forms, etc. It is absolutely not 100% correct, but I can now take the names of things it gives me in native Japanese, which I never would have known otherwise, and look them up using other resources. Either it's correct and I find confirming information, or it's wrong and I can research further independently or ask it follow-up questions. It's certainly not as good as a human native speaker, but for $20 a month, and as someone who enjoys doing their own research, I fucking love it.

[–] [email protected] 2 points 3 months ago (1 children)

Hey, that's a cool thing to do! I'll try it. Learning a new language through LLMs sounds cool.

[–] NikkiDimes 1 points 3 months ago

It is! Just be aware that it won't always be right. It's good to verify things with additional sources (as with anything, really).

[–] [email protected] 1 points 3 months ago* (last edited 3 months ago) (1 children)

I, like the OP, was also studying math from a textbook and using GPT4 to help clear things up. GPT4 caught an error in the textbook.

The LLM doesn't have a theory of mind; it won't start over and try to explain a concept from a completely new angle, it mostly just repeats the same stuff over and over. Still, once I've figured something out, I can ask the LLM if my ideas are correct, and it sometimes makes small corrections.

Overall, most of my learning came from the textbook, and talking with the LLM about the concepts I had learned helped cement them in my brain. I didn't learn a whole lot from the LLM directly, but it was good enough to confirm what I learned from the textbook and sometimes correct mistakes.

[–] [email protected] 1 points 3 months ago

If you didn't have AI, what would you have done instead?

[–] [email protected] 1 points 3 months ago

He's cross-checking.

[–] [email protected] -1 points 3 months ago (1 children)

I mean, why are you confident the work in textbooks is correct? Both have been proven unreliable, though I will admit LLMs are much more so.

The way you verify in this instance is by actually going through the work yourself after you've been shown sources. They are explicitly not saying they take 1+1=3 as law, but instead asking how that result was reached and working through the explanation to see if it makes sense and to learn more.

Math is likely the best for this too. You have undeniable truths in math, it’s true, or it’s false. There are no (meaningful) opinions on how addition works other than the correct one.

[–] [email protected] 3 points 3 months ago

The problem with this style of verification is that there is no authoritative source. Neither the AI nor you can verify for accuracy, and the AI carries no expectation of being accurate or of being revised.

I don't see how this is any better than running Google searches against Reddit or other message boards, looking for relevant discussions and basing your knowledge on those.

If AI were enabling something new, that might be worth it, but letting someone find slightly less (or more) shitty message-board posts 10% more efficiently isn't worth what's happening. There are countries that are capable of regulating a field as it fills out; why can't America? We banned TikTok in under a month, didn't we?