this post was submitted on 26 Jul 2023
37 points (95.1% liked)

[–] [email protected] 19 points 1 year ago (1 children)

When you use AI features, the IDE needs to send your requests and code to the LLM provider. In addition to the prompts you type, the IDE may send additional details, such as pieces of your code, file types, frameworks used, and any other information that may be necessary for providing context to the LLM.

Doesn't sound like it gives you much transparency or control over the data it sends, no matter which feature you use. Sadly not usable at my job, then.
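
For a concrete picture of what that quoted passage means, here is a rough sketch of the kind of payload an AI-assisted IDE might send to a hosted LLM provider. The struct and field names are hypothetical, purely for illustration, and not JetBrains' actual wire format:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// completionRequest sketches the kind of context an AI-assisted IDE
// might bundle into a request. Field names are hypothetical.
type completionRequest struct {
	Prompt      string   `json:"prompt"`       // what the user typed
	CodeContext string   `json:"code_context"` // pieces of surrounding source
	FileType    string   `json:"file_type"`    // e.g. "go"
	Frameworks  []string `json:"frameworks"`   // frameworks detected in the project
}

func main() {
	req := completionRequest{
		Prompt:      "explain this function",
		CodeContext: "func add(a, b int) int { return a + b }",
		FileType:    "go",
		Frameworks:  []string{"net/http"},
	}
	payload, _ := json.MarshalIndent(req, "", "  ")
	fmt.Println(string(payload)) // every field here leaves your machine
}
```

Every field in that payload leaves your machine, which is the sticking point in the replies below.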

[–] [email protected] 8 points 1 year ago (2 children)

That's a bummer. We're strictly regulated, and stuff like this needs to be self-hosted or we can't use it.

[–] [email protected] 5 points 1 year ago

I've worked for regulated companies too, and I suspect most of them would forbid such tools even when they aren't a build dependency, especially if source code is sent off to random third parties. IMHO, every tool should be stored locally (e.g. on a proxy server) to make sure the whole project can be recreated in a few minutes should something bad happen (or from scratch in CI). As long as these AI tools rely on private companies on the internet, I wouldn't use them.

[–] [email protected] 2 points 1 year ago

They do say this, though: "We also plan to support local and on-premises models. For local models, the supported feature set will most likely be limited."

This is currently a no-go at my place (I asked), but the AI and security folks were interested, since it would allow on-prem/private-cloud usage as well as the possibility of using targeted models instead of a generic one.

For example, in the comments on their announcement they confirm they are looking at Azure AI support.
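
If on-premises support lands, pointing a tool at an internal model could be as simple as swapping the endpoint. A minimal sketch, assuming a self-hosted server exposing an OpenAI-compatible chat API on the internal network; the URL, port, and model name below are placeholders, not anything JetBrains has announced:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Placeholder: a self-hosted, OpenAI-compatible endpoint on the
	// internal network. No code or prompts leave the private network.
	const endpoint = "http://llm.internal:8080/v1/chat/completions"

	body, _ := json.Marshal(map[string]any{
		"model": "some-local-model", // placeholder model name
		"messages": []map[string]string{
			{"role": "user", "content": "Suggest a name for this function."},
		},
	})

	resp, err := http.Post(endpoint, "application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	answer, _ := io.ReadAll(resp.Body)
	fmt.Println(string(answer))
}
```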

[–] [email protected] 3 points 1 year ago (1 children)

Give me unit test generation and I'm sold.

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago) (1 children)

I mean, the inverse is probably more productive. Specify the observable behaviors you want and let the “AI” build the software.
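
As a sketch of that inverse workflow in Go (the Add function and its cases are made up for illustration): you write the table-driven test that pins down the observable behavior, and the implementation, whether human- or AI-written, has to pass it.

```go
package calc

import "testing"

// Add is the function under specification; imagine its body being
// produced by the assistant rather than the tests being generated
// from an existing implementation.
func Add(a, b int) int { return a + b }

// TestAdd pins down the observable behavior up front. The cases are
// hypothetical; the point is that the spec is the human-written artifact.
func TestAdd(t *testing.T) {
	cases := []struct {
		name string
		a, b int
		want int
	}{
		{"zero", 0, 0, 0},
		{"positive", 2, 3, 5},
		{"mixed signs", -2, 3, 1},
	}
	for _, c := range cases {
		if got := Add(c.a, c.b); got != c.want {
			t.Errorf("%s: Add(%d, %d) = %d, want %d", c.name, c.a, c.b, got, c.want)
		}
	}
}
```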

[–] [email protected] 1 point 1 year ago (1 children)
[–] [email protected] 2 points 1 year ago
[–] [email protected] 2 points 1 year ago (1 children)

That would cover like 80% of my work-related usage of ChatGPT, plus stuff I hadn't thought of outsourcing to AI. I'll probably unsubscribe from Copilot too.

Can't wait for it to be in GoLand.

[–] [email protected] 3 points 1 year ago

If you're willing to use the EAP version, it's already there.