this post was submitted on 11 Sep 2023
Autism
Generally, training an LLM is a bad way to provide it with information. "In-context learning" is probably what you're looking for: basically, pasting relevant info and documents directly into your prompt.
You might try fine-tuning an existing model on a large dataset of legalese, but then it'll be more likely to generate responses that sound like legalese, which defeats the purpose.
TL;DR: Use in-context learning to provide information to an LLM; use training and fine-tuning to change how the language the LLM generates sounds.
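To make "pasting documents into your prompt" concrete, here's a minimal sketch of in-context learning. The lease clause, question, and prompt wording are all hypothetical examples, not anything from a real system:

```python
def build_prompt(question: str, documents: list[str]) -> str:
    """Assemble an in-context-learning prompt: the relevant documents
    are pasted directly into the prompt, ahead of the question."""
    context = "\n\n".join(
        f"--- Document {i + 1} ---\n{doc}" for i, doc in enumerate(documents)
    )
    return (
        "Use only the documents below to answer the question.\n\n"
        f"{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Hypothetical clause and question, purely for illustration.
lease_clause = "Section 4.2: Rent increases are capped at 5% per year."
prompt = build_prompt("Can my landlord raise my rent by 10%?", [lease_clause])
print(prompt)
```

The string returned here would then be sent to whatever model you're using; no training happens at any point — the model only ever sees the documents at inference time.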
I know nothing about "in-context learning" or legal stuff, but intuitively, don't legal documents tend to reference each other, especially the more complicated ones? If so, how would you apply in-context learning if you're not aware of which ones may be relevant?
Yes. You can craft your prompt so that if the LLM doesn't know about a referenced legal document, it will ask for it; you can then paste the relevant section of that document into the prompt to provide it with that information.
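One way to sketch that "ask for missing documents" pattern is a loop that re-prompts whenever the model requests a document. Everything here is hypothetical: `call_llm` is a stub standing in for a real model API, and the `NEED: <name>` convention is just one prompt design you could instruct a model to follow:

```python
import re

def call_llm(prompt: str) -> str:
    """Stub standing in for a real LLM call. It 'answers' only once the
    full text of the referenced statute is present in the prompt."""
    if "rent caps apply to all leases" in prompt:
        return "Answer: yes, Clause 3 is enforceable under Statute 12."
    return "NEED: Statute 12"

# Documents you could paste in on request (hypothetical contents).
library = {"Statute 12": "Statute 12: rent caps apply to all leases."}

prompt = (
    "If the documents reference another document you were not given, "
    "reply only with NEED: <document name>.\n\n"
    "Document: Clause 3 (see Statute 12) limits rent increases.\n"
    "Question: Is Clause 3 enforceable?"
)

reply = call_llm(prompt)
while (m := re.match(r"NEED:\s*(.+)", reply)):
    requested = m.group(1).strip()
    prompt += f"\n\n{library[requested]}"  # paste the requested section in
    reply = call_llm(prompt)

print(reply)
```

With a real model, the loop would look the same; only `call_llm` changes, and you'd want a fallback for documents you don't actually have on hand.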
I’d encourage you to look up some info on prompting LLMs and LLM context.
They’re powerful tools, so it’s good to really learn how to use them, especially for important applications like legalese translators and rent negotiators.
Thanks for your answer! Is this the same as or different from indexing to provide context? I saw some people ingesting a large corpus of documents/structured data, e.g. with LlamaIndex. Is that an alternative way to provide context, or similar?
Indexing tools like LlamaIndex use LLM-generated embeddings to "intelligently" search for documents similar to a search query.
Those documents are then usually fed into an LLM as part of the prompt (i.e., as context).
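The retrieval step can be sketched without any model at all. This toy uses word-count vectors and cosine similarity in place of real LLM embeddings (which is what LlamaIndex would supply); the documents and query are made up for illustration:

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words count vector. A real index would
    use model-generated embeddings instead."""
    return Counter(text.lower().replace(".", "").split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

documents = [
    "Rent increases are capped at five percent per year.",
    "Security deposits must be returned within thirty days.",
]
index = [(doc, embed(doc)) for doc in documents]  # the "index"

query = "how much can my rent go up each year"
best_doc = max(index, key=lambda pair: cosine(embed(query), pair[1]))[0]
print(best_doc)
```

The best-matching document is what then gets pasted into the LLM prompt as context — so indexing isn't an alternative to in-context learning, it's a way of automatically choosing what goes into the context.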