this post was submitted on 16 Sep 2024
92 points (100.0% liked)

TechTakes

1435 readers
145 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 1 year ago
all 30 comments
[–] [email protected] 31 points 2 months ago (2 children)

have they tried writing better prompts? my lived experience says that because it works for me, it should work as long as you write good prompts. prompts prompts prompts. I am very smart. /s

[–] [email protected] 15 points 2 months ago (1 children)

@swlabr @jaschop

I fixed the quote from the article: "programmers are not known for being great at writing prompts because many of us find the whole idea offensive and stupid"

[–] [email protected] 17 points 2 months ago

I'm reminded of the guy in a previous thread who claimed LLMs helped him as a rubber duck partner. You know - the troubleshooting technique named for its efficacy when working with a bath toy.

[–] regrub 8 points 2 months ago

Prompt engineering is the same as software engineering, right?

[–] [email protected] 21 points 2 months ago (2 children)

Someone will have the "brilliant" idea to fix this by having chatbots review code in 5... 4... 3...

[–] [email protected] 13 points 2 months ago

Welcome to my new startup where we train LLMs on compiled binaries. Now you can just prompt and get a complete executable, no coding knowledge needed. We value our company at $5b, product launch date indeterminate

[–] [email protected] 8 points 2 months ago (1 children)

I could swear I’ve seen a shartup with this pitch

will try to check tomorrow, rn I’m enjoying the sounds of the first thunderstorm of the season

[–] [email protected] 9 points 2 months ago* (last edited 2 months ago) (4 children)

Thanks, now you've sent me down the rabbit hole. I searched for this and clicked on the first ad: coderabbit.ai

One of the code reviews they feature on their homepage involves poor CodeRabbit misspelling a variable name, and then suggesting the exact opposite code of what would be correct for a "null check" (Suggesting if (object.field) return; when it should have suggested if (!object.field) return; or something like that).

You'd think AI companies would have wised up by this point and gone through all their pre-recorded demos with a fine-tooth comb so that ~~marks~~ users at least make it past the homepage, but I guess not.

Aside: It's not really accurate to describe if (object.field) as a null check in JS since other things like empty strings will fail the check, but maybe CodeRabbit is just an adorable baby JS reviewer!
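To illustrate the aside above, here is a minimal sketch (with made-up sample values, not CodeRabbit's actual code) of why `if (object.field)` tests truthiness rather than null-ness in JS:

```javascript
// Falsy values that are NOT null/undefined also fail an `if (object.field)` test.
const samples = [null, undefined, "", 0, false, "ok"];

for (const field of samples) {
  const truthy = Boolean(field); // what `if (object.field)` actually checks
  const isMissing = field == null; // true only for null and undefined
  console.log(field, { truthy, isMissing });
}

// An actual null check uses loose `!= null` (catches both null and undefined):
function hasField(object) {
  return object.field != null;
}
```

So a field holding `0` or `""` would wrongly trip the "null check" CodeRabbit suggested.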

Aside: the example was in a .jsx file. Does that stand for JavaScript XML? because oh lord that sounds cursed

[–] [email protected] 6 points 2 months ago (2 children)

JSX is JavaScript, but you can also put HTML in it (with bonus syntax for embedding more JS expressions inside). It gets transpiled into function calls, which produce an object structure representing the HTML you wrote. It's used so that you can write a component as a function that returns HTML with its properties already computed in, and with any special properties, like event listeners, passed as function references inside the structure.
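A rough sketch of the transpilation described above (the toy `createElement` here is illustrative, not React's actual implementation): the JSX `<button onClick={handler}>Save</button>` becomes an ordinary call like `createElement("button", { onClick: handler }, "Save")`.

```javascript
// Toy version of the function a JSX transpiler targets: it just builds
// the plain object structure representing the markup.
function createElement(type, props, ...children) {
  return { type, props: props || {}, children };
}

const handler = () => console.log("saved");

// What <button onClick={handler}>Save</button> transpiles into, roughly:
const tree = createElement("button", { onClick: handler }, "Save");
// tree: { type: "button", props: { onClick: [Function] }, children: ["Save"] }
```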

[–] [email protected] 6 points 2 months ago

that's a hell of a lot of words for "is a giant pile of mistakes"

[–] [email protected] 6 points 2 months ago

You’d think AI companies would have wised up by this point and gone through all their pre-recorded demos with a fine-tooth comb so that ~~marks~~ users at least make it past the homepage, but I guess not.

The target group for their pitch probably isn't people who have a solid grasp of coding, I'd bet quite the opposite.

[–] [email protected] 5 points 2 months ago

sorry, the reality is worse

[–] [email protected] 3 points 2 months ago (1 children)

why do so many awful tech companies have rabbit in their names

[–] [email protected] 4 points 2 months ago

Because rabbits are cute and fluffy and good and it is the solemn mission of all terrible tech companies to take the things you love and make you associate them with useless AI products.

[–] [email protected] 20 points 2 months ago

“When asked about buggy AI [code], a common refrain is ‘it is not my code,’ meaning they feel less accountable because they didn’t write it.”

Strong “they cut all my deadlines in half and gave me an OpenAI API key, so fuck it” energy.

He stressed that this is not from want of care on the developer’s part but rather a lack of interest in “copy-editing code” on top of quality control processes being unprepared for the speed of AI adoption.

You don't say.

[–] [email protected] 11 points 2 months ago

For some reason when I read this I am reminded of our "highly efficient rail" which often derails

[–] [email protected] 8 points 2 months ago

LLMs will save us from having to work on features now that we nearly ironed out all the issues introduced by Kubernetes.

[–] pyre 7 points 2 months ago

"i don't know what happened, the truck was cruising just fine when we put the toddler at the wheel"

[–] [email protected] 7 points 2 months ago (1 children)

Buahahahaha, lazy fucks, just do the work

[–] [email protected] 8 points 2 months ago

as I have said here some time ago, these chucklefucks are a goldmine waiting to happen. just not the kind of gold they think.

[–] FinalRemix 2 points 2 months ago

Oh no! Spicy autocomplete can't proofread or understand what it's spitting out?!