this post was submitted on 12 Oct 2024

Perchance - Create a Random Text Generator


⚄︎ Perchance

This is a Lemmy Community for perchance.org, a platform for sharing and creating random text generators.

Feel free to ask for help, share your generators, and start friendly discussions at your leisure :)

This community is mainly for discussions between those who are building generators. For discussions about using generators, especially the popular AI ones, the community-led Casual Perchance forum is likely a more appropriate venue.

See this post for the Complete Guide to Posting Here on the Community!

Rules

1. Please follow the Lemmy.World instance rules.

2. Be kind and friendly.

  • Please be kind to others in this community (and in general), and remember that for many people Perchance is their first experience with coding. We have members for whom English is not their first language, so please take that into account too :)

3. Be thankful to those who try to help you.

  • If you ask a question and someone has made an effort to help you out, please remember to be thankful! Even if they don't manage to help you solve your problem, remember that they're spending time out of their day to try to help a stranger :)

4. Only post about stuff related to perchance.

  • Please only post about Perchance-related things, such as generators on the site, bugs, and the site itself.

5. Refrain from requesting Prompts for the AI Tools.

  • We would like to ask you to refrain from posting here when you need help specifically with prompting/achieving certain results with the AI plugins (text-to-image-plugin and ai-text-plugin), e.g. "What is a good prompt for X?" or "How do I achieve X with Y generator?"
  • See Perchance AI FAQ for FAQ about the AI tools.
  • You can ask for help with prompting at the 'sister' community Casual Perchance, which is for more casual discussions.
  • We will still be helping/answering questions about the plugins as long as it is related to building generators with them.

6. Search through the Community Before Posting.

  • Please search through the community posts here (and on Reddit) before posting, to check whether a similar post already exists.

founded 1 year ago
MODERATORS
 

Hello, all!

I've been using perchance's ai-character-chat and text-to-image services for a few months now, and while I highly enjoy them, I worry that heavy use could potentially harm the service, due to the usage-to-ad ratio and so on. Beyond that, I am a very private person, and while I enjoy the service, I can't help but worry that my messages and network requests could be viewed by my ISP or similar. While none of it is illegal or anything of the sort, I still don't enjoy the thought of my ISP or other people potentially viewing content they don't need to see. Though, that may just be me and my paranoia talking. 😅

My resolution for this was to build my own local service, running entirely offline (for personal use only), but I've run into a problem. Of all the AI chat services I've seen, Perchance's is one of the best in terms of comprehension and response quality, in my opinion at least. While trying to replicate that, I can't seem to get the responses to be nearly as good, regardless of parameters.

While this is an unusual request, and of course completely up to your discretion, could you please share with me the details of the model used in your generation and the parameter values/ranges, if you're comfortable doing so? If not, that's completely okay; it's just a request to aid me in my personal project. So far, I've settled on a temperature range of 0.7-0.8 with Llama2-13B-GPTQ from TheBloke on HF.
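For anyone unfamiliar with what the temperature parameter actually does: it divides the model's raw logits before the softmax, so values below 1.0 sharpen the distribution toward the top token and values above 1.0 flatten it. A minimal, self-contained sketch (plain Python, no model required; the logits here are made-up example values):

```python
import math
import random

def temperature_sample(logits, temperature=0.7, rng=None):
    """Sample a token index from raw logits after temperature scaling.

    Lower temperature -> sharper distribution (more deterministic output);
    higher temperature -> flatter distribution (more varied output).
    """
    rng = rng or random.Random(0)
    scaled = [l / temperature for l in logits]
    # Softmax with max-subtraction for numerical stability.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to the resulting probabilities.
    idx = rng.choices(range(len(probs)), weights=probs, k=1)[0]
    return idx, probs

# The top logit dominates more at temperature 0.7 than at 1.5.
_, probs_low = temperature_sample([2.0, 1.0, 0.5], temperature=0.7)
_, probs_high = temperature_sample([2.0, 1.0, 0.5], temperature=1.5)
assert probs_low[0] > probs_high[0]
```

So the 0.7-0.8 range above biases generation toward the model's most likely continuations while keeping some variety.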

top 3 comments
[โ€“] perchance 2 points 1 month ago* (last edited 1 month ago) (1 children)

Unfortunately a 13B model probably isn't going to cut it. Perchance uses a popular open source 70B Llama-based model (you'll come across its name almost immediately if you look at top model lists, but any of the top models will work fine - and you should use the recommended parameters in the HuggingFace repo). If you can't run a 70B model, then I'd recommend these two places to find a 30B/20B/13B model to suit your specific use case, depending on your GPU size:

This community is not well-suited to helping you get it set up, but the above two communities have lots of info.
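As an aside on "the recommended parameters in the HuggingFace repo": for most model repos these ship as a `generation_config.json` file alongside the weights. The values below are an illustrative, Llama-2-style example only, not the actual config of the model Perchance uses; always check the specific repo:

```json
{
  "do_sample": true,
  "temperature": 0.6,
  "top_p": 0.9,
  "max_length": 4096
}
```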

[โ€“] sereth 3 points 1 month ago

Thank you for your reply! Yeah, I did have a feeling that I'd need to run a 70B Llama-based model, but I eventually ended up using a combination of 13B and 7B parameter models that switch dynamically, which, oddly enough, actually seems to work pretty well. Your response was very helpful, and I appreciate you taking the time to respond. <3
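The dynamic-switching idea above can be sketched as a simple router that sends heavier requests to the bigger model. Everything here is hypothetical: the commenter didn't describe their heuristic, and the function/model names (`pick_model`, `"13b"`, `"7b"`) are placeholders for whatever backend actually loads each model:

```python
def pick_model(prompt: str, history_turns: int, token_budget: int = 512) -> str:
    """Return which model tier ("13b" or "7b") to use for a request.

    Rough heuristic: long prompts or long-running conversations go to the
    larger model; short, quick exchanges go to the smaller, faster one.
    """
    # Crude word-count -> token-count estimate (roughly 4 tokens per 3 words).
    approx_tokens = len(prompt.split()) * 4 // 3
    if approx_tokens > token_budget // 2 or history_turns > 10:
        return "13b"
    return "7b"

assert pick_model("hi there", history_turns=1) == "7b"
assert pick_model("word " * 400, history_turns=1) == "13b"
assert pick_model("hi there", history_turns=20) == "13b"
```

In practice the trade-off is latency and VRAM versus quality: the 7B model handles the bulk of short turns cheaply, and the 13B model is only paid for when the conversation actually needs it.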

[โ€“] wthit56 1 points 1 month ago