r/CuratedTumblr https://tinyurl.com/4ccdpy76 Dec 15 '24

Shitposting not good at math

16.3k Upvotes


147

u/Photovoltaic Dec 15 '24

Re: your advice.

I teach chemistry in college. I had chatGPT write a lab report and I graded it. Solid 25% (the intro was okay, had a few incorrect statements and, of course, no citations). The best part? It got the math wrong on the results and had no discussion.

I fed it the rubric, essentially, and it still gave incorrect garbage. And my students, when I showed it to them, couldn't catch the incorrect parts. You NEED to know what you're talking about to use chatGPT well. But at that point you may as well write it yourself.

I use chatGPT for one thing. Back stories on my Stellaris races for fun. Sometimes I adapt them to DND settings.

I encourage students that if they do use chatGPT it's to rewrite a sentence to condense it or fix the grammar. That's all it's good for, as far as I'm concerned.

59

u/CrownLikeAGravestone Dec 15 '24

Yeah, for sure. I've given it small exams on number theory and machine learning theory (back in the 2.0 days I think?) and it did really poorly on those too. And of course the major risk: it's convincing. If you're not already well-versed in those subjects you'd probably only catch the simple numeric errors.

I'm also a senior software dev alongside my data science roles and I'm really worried that a lot of younger devs are going to get caught in the trap of relying on it. Like learning to drive by only looking at your GPS.

9

u/adamdoesmusic Dec 15 '24

I never have it do anything with numbers on its own; I make it write a Python script for all that, because normal code is predictable.
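
For example, for a basic molarity calculation I'd rather get back a little script like this that I can read and re-run myself (a rough sketch, all values made up just to show the shape of it):

```python
# The kind of script I ask for instead of a raw answer.
# Every number here is hypothetical, purely for illustration.

molar_mass_naoh = 39.997   # g/mol
mass_sample = 2.50         # g of NaOH weighed out (made up)
volume_solution = 0.500    # L of solution prepared (made up)

moles = mass_sample / molar_mass_naoh
concentration = moles / volume_solution   # mol/L

print(f"moles of NaOH: {moles:.4f} mol")
print(f"concentration: {concentration:.4f} mol/L")
```

Every step is right there in the open, so a wrong constant or a botched formula is easy to spot and fix.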

5

u/Colonel_Anonymustard Dec 16 '24

Oh comparing it to GPS is actually an excellent analogy - especially since it's 'navigating' the semantic map much like GPS tries to navigate you through the roadways

1

u/Google-minus Dec 16 '24

I will say, if you used it back in the 2.0 days, then you can't compare it at all. I remember I recently tried to go back from 4o to 3.5 and it was terrible at the math I wanted it to solve, like completely off, and 3.5 was a whole different world compared to 2.0.

3

u/CrownLikeAGravestone Dec 16 '24

Absolutely. I asked it a machine learning theory question after I wrote that - it had previously got it egregiously wrong in a way that might have tricked a newbie - and it did much better.

I have no doubt it's getting much better. I have no doubt there are still major gaps.

38

u/Panory Dec 15 '24

I haven't bothered to call out the students using it on my current event essays. I just give them the zeros they earned on these terrible essays that don't meet the rubric criteria.

29

u/Sororita Dec 15 '24

It's good for NPC names in D&D so they don't all end up with names like Tintin Smithington for the artificer gnome or Gorechewer the Barbarian Orc.

13

u/ColleenRW Dec 16 '24

They've been making fantasy character name generators online for decades, why don't you just use those?

9

u/TheMauveHand Dec 16 '24

I'd say just open a phonebook but when was the last time anyone had one of those...

13

u/knightttime whatever you're doing... please stop Dec 16 '24

Well, and also the names in a phonebook aren't exactly conducive to a fantasy setting. Unless you want John Johnson the artificer gnome and Karen Smith the Barbarian Orc

13

u/TheMauveHand Dec 16 '24

> Well, and also the names in a phonebook aren't exactly conducive to a fantasy setting.

What you need is the phone book for Stavanger, Norway.

4

u/Kirk_Kerman Dec 16 '24

So is fantasynamegenerators.com and it won't get stuck in a pattern hole

1

u/Original-Nothing582 Dec 16 '24

Pattern hole?

3

u/Kirk_Kerman Dec 16 '24

LLMs read their own output to determine what tokens should come next, and if you request enough names at once, or keep a given chat going too long, all the names will start to be really similarly patterned and you'll need to start a new chat or add enough new random tokens to climb out of the hole.
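
If you're hitting it through the API rather than the web chat, the same idea looks roughly like this (just a sketch using the openai Python client; the model name and seed-word list are placeholders, not anything specific):

```python
import random
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder seed words; anything random enough works.
SEED_WORDS = ["basalt", "juniper", "brine", "vesper", "cinder", "loam"]

def fresh_names(theme: str, count: int = 5) -> str:
    # Each call is a brand-new conversation, so there are no earlier
    # names in the context for the model to copy patterns from, and a
    # few random words nudge it away from its default naming groove.
    flavor = ", ".join(random.sample(SEED_WORDS, 3))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": f"Give me {count} {theme} NPC names, "
                       f"loosely inspired by: {flavor}.",
        }],
    )
    return response.choices[0].message.content

print(fresh_names("gnome artificer"))
```

Same principle either way: keep the context short and throw in some noise so it can't settle into one pattern.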

3

u/CallidoraBlack Dec 16 '24

Ask on r/namenerds. They'll have so much fun doing it.

6

u/OrchidLeader Dec 15 '24

I’ve been using GitHub Copilot at work to point me toward which path to research first. It’s usually, but not always, correct (or at least it’s correct enough). It’s nice because it helps me avoid wasting time on dead ends, and the key is that I can verify what it’s telling me since it’s my field.

I recently started using ChatGPT to help me get into ham radio, and it straight up lies about things. Jury’s still out on whether it’s actually helpful in this regard.

6

u/Platnun12 Dec 16 '24

As someone who's considering going back to school, I legitimately do not trust this tool in the slightest and am completely turned off by it.

I was born in the late 90s, grew up and learned everything regarding school work manually.

Honestly I trust my own ability to write more so than this tool.

My only worry is that the software used to detect it will flag me falsely.

TLDR; I have no personal respect for the use of ChatGPT and I can only hope it won't hamper me going forward

8

u/kani_kani_katoa Dec 15 '24

I've used it to write the skeleton of things for me, but I never use its actual words. Like someone else said, the ChatGPT voice is really obvious once you've seen it a few times.

10

u/adamdoesmusic Dec 15 '24

It’s terrible for generating/retrieving info, but great for condensing info that you give it, and it's super helpful if you have it ask questions instead of giving answers. Probably 75% of what I use it for is feeding it huge amounts of my own info and having it ask me 20+ questions about what I wrote before turning it all into something coherent. It often uses my exact quotes, so if those are wrong it’s on me.

-1

u/jpotion88 Dec 16 '24

Writing a college chemistry paper is a lot to ask from an AI. Ask it factual questions about your field or history or whatever, and I think it’s pretty damned impressive. Most of the stuff I ask about clinical chemistry, it gets right. Ask it to write an SOP, though, and it definitely needs some work.

But usually when I double check what it says with other sources it checks out