dartos

joined 1 year ago
[–] [email protected] 45 points 9 months ago (2 children)

I’m dead 💀

[–] [email protected] 2 points 9 months ago

Indexing tools like llamaindex use LLM-generated embeddings to “intelligently” find documents similar to a search query.

Those documents are usually fed into an LLM as part of the prompt (e.g. as context).
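The retrieval step is roughly this (a minimal sketch assuming you already have embeddings from some model; `retrieve` and its arguments are illustrative, not llamaindex’s actual API):

```python
import numpy as np

def cosine_similarity(a, b):
    # Near 1.0 means the vectors point the same way; near 0 means unrelated.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def retrieve(query_emb, doc_embs, docs, k=3):
    # Score every stored document against the query and keep the top k,
    # which then get pasted into the LLM prompt as context.
    scores = [cosine_similarity(query_emb, d) for d in doc_embs]
    best = sorted(range(len(docs)), key=lambda i: scores[i], reverse=True)[:k]
    return [docs[i] for i in best]
```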

[–] [email protected] 3 points 9 months ago

Hey, y’know, that’s a good point.

[–] [email protected] 5 points 9 months ago* (last edited 9 months ago) (2 children)

Yes, you can craft your prompt such that, if the LLM doesn’t know about a referenced legal document, it will ask for it. You can then paste the relevant section of that document into the prompt to provide it with that information.

I’d encourage you to look up some info on prompting LLMs and LLM context.

They’re powerful tools, so it’s good to really learn how to use them, especially for important applications like legalese translators and rent negotiators.
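For example, a hypothetical prompt skeleton (the wording and the `REQUEST_DOCUMENT` convention are mine, not from any library):

```python
SYSTEM_PROMPT = (
    "You are a legal assistant. Answer only from the documents given below. "
    "If the user cites a statute, clause, or contract you have not been given, "
    "do not guess; instead reply with: REQUEST_DOCUMENT: <document name>."
)

def build_prompt(context_docs, question):
    # Paste whatever documents we already have into the context window.
    context = "\n\n".join(context_docs)
    return f"{SYSTEM_PROMPT}\n\nContext:\n{context}\n\nQuestion: {question}"
```

When the model answers with a REQUEST_DOCUMENT line, you paste in that section and ask again.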

[–] [email protected] 9 points 9 months ago* (last edited 9 months ago) (5 children)

Generally, training an LLM is a bad way to provide it with information. “In-context learning” is probably what you’re looking for: basically, just pasting relevant info and documents into your prompt.

You might try fine-tuning an existing model on a large dataset of legalese, but then it’ll be more likely to generate responses that sound like legalese, which defeats the purpose.

TL;DR: Use in-context learning to provide information to an LLM. Use training and fine-tuning to change how the language the LLM generates sounds.
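In code, the difference looks roughly like this (a sketch; the fine-tuning record format varies by provider, and the legalese strings are made up):

```python
# In-context learning: the facts travel with the request, every request.
clause_text = "The lessee shall not sublet without written consent..."  # made-up example
question = "Can I sublet my apartment?"
prompt = f"Here is the relevant lease clause:\n{clause_text}\n\nQuestion: {question}"

# Fine-tuning: the weights are nudged by many pairs like this one, which
# mostly changes how the model *sounds*, not what it knows.
finetune_record = {
    "prompt": "Translate to plain English: The lessee shall indemnify the lessor...",
    "completion": "In plain terms: the renter agrees to cover the landlord's losses...",
}
```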

[–] [email protected] 3 points 9 months ago* (last edited 9 months ago) (1 children)

You didn’t present any ideas or solutions to argue against. There’s no argument happening here.

Nor are there strawmen because there’s no argument being made.

You said that there’s generally a lack of imagination with regard to this stuff, and I was just sharing my opinions as to why.

[–] [email protected] 4 points 9 months ago* (last edited 9 months ago) (3 children)

I think most people (correctly, imo) don’t see how a large enough company can operate without some hierarchy, which seems to run up against the idea of being entirely, equally employee-owned.

There are always going to be leaders (a manager, or just someone others listen to). That person necessarily has more responsibility and control than their peers and is justly compensated more (otherwise nobody would put in the extra work to, say, train as an engineer or doctor).

That person has their own interests, which don’t always line up with the company’s, and may use their influence to guide the company in a way that benefits them.

Suddenly you have a worker class and a bourgeois-esque class.

Most people (incorrectly imo) think that the “unbiased” checks and balances in government counteract that.

If there’s another option that accounts for hierarchies in large, employee-owned-and-operated companies, let me know… please.

EDIT: large as in number of employees

[–] [email protected] 9 points 9 months ago (2 children)

It def adds some flavor to the social media political scene

[–] [email protected] 10 points 9 months ago* (last edited 9 months ago)

Looks like they got that number from this quote from another Ars Technica article: “…OpenAI admitted that its AI Classifier was not ‘fully reliable,’ correctly identifying only 26 percent of AI-written text as ‘likely AI-written’ and incorrectly labeling human-written works 9 percent of the time.”

Seems like it mostly wasn’t confident enough to make a judgment: it correctly flagged AI text 26% of the time and incorrectly flagged human text as AI 9% of the time. It doesn’t tell us how often it labeled AI text as human, or how often it was just unsure.
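For a rough sense of what those two numbers imply together (the 50/50 test corpus here is my assumption, not from the article):

```python
# Assume 100 AI-written and 100 human-written samples (assumption).
ai_samples, human_samples = 100, 100
flagged_ai = 0.26 * ai_samples        # 26% of AI text correctly flagged
flagged_human = 0.09 * human_samples  # 9% of human text wrongly flagged
precision = flagged_ai / (flagged_ai + flagged_human)
print(f"precision when it flags something: {precision:.0%}")  # ~74%
# ...while the other 74% of AI text goes unflagged or unjudged.
```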

EDIT: this article https://arstechnica.com/information-technology/2023/07/openai-discontinues-its-ai-writing-detector-due-to-low-rate-of-accuracy/

[–] [email protected] 1 points 9 months ago

Complicated issues are complicated. Neither Reddit, Lemmy, Twitter (X?), nor any other social media platform is particularly well suited to discussing complex, divisive topics.

[–] [email protected] 7 points 9 months ago

Probably money. Given enough money, I’m sure TikTok will ban any search term.

 

I get that Meta is evil, but aren’t we just blocking our users from accessing the wider fediverse?
