this post was submitted on 24 Jun 2023

Actually Useful AI


Welcome! 🤖

Our community focuses on programming-oriented, hype-free discussion of Artificial Intelligence (AI) topics. We aim to curate content that truly contributes to the understanding and practical application of AI, making it, as the name suggests, "actually useful" for developers and enthusiasts alike.

Be an active member! 🔔

We highly value participation in our community. Whether it's asking questions, sharing insights, or sparking new discussions, your engagement helps us all grow.

What can I post? 📝

In general, anything related to AI is acceptable. However, we encourage you to strive for high-quality content.

What is not allowed? 🚫

General Rules 📜

Members are expected to engage in on-topic discussion and to exhibit mature, respectful behavior. Those who fail to uphold these standards may have their posts or comments removed, and repeat offenders may face a permanent ban.

While we appreciate focus, a little humor and off-topic banter, when tasteful and relevant, can also add flavor to our discussions.

Related Communities 🌍

General

Chat

Image

Open Source

Please message @[email protected] if you would like us to add a community to this list.

Icon base by Lord Berandas under CC BY 3.0 with modifications to add a gradient


TL;DR (by GPT-4 🤖):

The article "It's infuriatingly hard to understand how closed models train on their input" discusses the lack of transparency around the training data used by large language models such as GPT-3, GPT-4, Google's PaLM, and Anthropic's Claude. The author is frustrated that, because the vendors disclose so little, it is impossible to state definitively that private data passed to these models isn't being used to train future versions. OpenAI's policy says that data submitted by API users is not used to train its models or improve its services, but that policy is relatively new: data submitted before March 2023 may have been used for training unless the customer had opted out. The article also raises the security risk of AI vendors logging inputs, which could be exposed in a data breach. The author suggests that openly licensed models that can be run on personal hardware may be a way around these concerns.
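
For context on that last point, here's a minimal sketch of what running an openly licensed model locally looks like with the Hugging Face transformers library. The model id is an illustrative pick, not something the article recommends; any locally runnable open model works the same way. The point is that inference happens entirely on your own machine, so prompts never end up in a vendor's logs:

```python
# Minimal local-inference sketch. The model weights are downloaded once from
# the Hugging Face Hub, but your prompts are processed entirely on-device.
# "EleutherAI/pythia-1.4b" is just an example of an openly licensed model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/pythia-1.4b"  # illustrative openly licensed model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Summarize the following confidential notes: ..."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)

# The completion is produced locally: no API call, no vendor-side logging.
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```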
