magn418

joined 1 year ago
[–] [email protected] 6 points 1 month ago* (last edited 1 month ago) (1 children)

I'd say for once, don't push yourselves. You don't have to do every sex technique just because other people do it. If neither of you likes it, just let it go and focus on things you do like. And if you want to do it, maybe take it slow. Let the person who is overwhelmed set the pace. Agree on some signals and cues. Don't be disappointed; just stop and switch to something different. It's alright if it only lasts for a short moment. Maybe you can work your way up. But don't push. Sex is about enjoying it, not about doing something specific.

And if you like to play games:

https://bettymartin.org/videos/

That's about learning to give and receive, about setting boundaries and learning each other's level of comfort. Maybe it helps. She has a free game (PDF) further down on that page.

[–] [email protected] 2 points 1 month ago

Sure, public enemy number one, inflatable edition.

[–] [email protected] 2 points 5 months ago* (last edited 5 months ago) (1 children)

https://lemmynsfw.com/post/4048137

I'd say try MythoMax-L2 first. I think it's a pretty solid all-rounder. It does NSFW but also other things. Nothing special and not the newest anymore, but easy to get going without fiddling with the settings too much.

If you can't run models with 13B parameters, I'd have to think about which of the 7B models is currently the thing. I think 7B is the size most people play around with and produce new finetunes, merges and whatnot for. But I also can't keep up with what the community does; every bit of information is kind of outdated after 2-4 weeks 😆
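If you'd rather poke at it from a script than through a UI, here's a minimal sketch using llama-cpp-python with a GGUF quant of MythoMax. The file name and the Alpaca-style prompt template are assumptions on my part, so check the model card for the exact format:

```python
# Minimal sketch: load a quantized MythoMax GGUF and generate one reply.
# The model file name is just an example; grab whatever quant fits your RAM/VRAM.
from llama_cpp import Llama

llm = Llama(
    model_path="mythomax-l2-13b.Q4_K_M.gguf",  # assumed file name
    n_ctx=4096,        # context window
    n_gpu_layers=0,    # 0 = pure CPU; raise this if you have VRAM to spare
)

# MythoMax generally follows an Alpaca-style template (assumption, check the model card).
prompt = (
    "### Instruction:\n"
    "You are Mira, a witty synthetic maid. Stay in character and reply in first person.\n\n"
    "### Input:\n"
    "Good evening, Mira. How was your day?\n\n"
    "### Response:\n"
)

out = llm(prompt, max_tokens=200, temperature=0.8, stop=["### Instruction:"])
print(out["choices"][0]["text"].strip())
```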

[–] [email protected] 4 points 6 months ago* (last edited 6 months ago) (3 children)

I assume (from your user handle) that you know about the allure of roleplaying and diving into fantasy scenarios. AI can do it to some degree. And -of course- people also do erotic roleplay. I think this has always taken place: people met online to do this kind of roleplay in text chats. And nowadays you can do it with AI. You just tell it to be your synthetic maid or office affair or waifu and it'll pick up that role. People use it for companionship, it'll listen to you, ask you questions, reassure you... Whatever you like. People also explore taboo scenarios... It's certainly not for everyone. You need a good amount of imagination, everything is just text chat. And the AI isn't super smart. The intelligence of these models isn't quite on the same level as the big commercial services like ChatGPT. Those can't be used anyway, as they have all banned erotic roleplay and also refuse to write smutty stories.

I agree with j4k3. It's one of the use-cases for AI I keep coming back to. I like fantasy and imagination in connection with erotics. And it's something that doesn't require AI to be factually correct, or as intelligent as it'd need to be to write computer programs. People have raised concerns that it's addictive and/or makes people yet more lonely to live with just an AI companion... To me it's more like a game. You need to pay attention not to get stuck in your fantasy worlds and sit in front of your computer all day. But I'm fine with that. And I'm less reliant on AI that way than people who use it to sum up the news and believe the facts ChatGPT came up with...

[–] [email protected] 2 points 6 months ago* (last edited 6 months ago) (1 children)

Hehe. It's fun. And a different experience every time 😆

I don't know which models you got connected to. Some are a bit more intelligent, but they all have their limits. I also sometimes get that: I roleplay something happening in the kitchen and suddenly we're in the living room instead. Or lying in bed.

And they definitely sometimes have the urge to mess with the pacing. For example, deciding that now is the time to wrap everything up in two sentences. It really depends on the exact model; some of them have a tendency to do so. It's a bit annoying if it happens regularly. The ones trained more on stories and extensive smut scenes do better.

The comment you saw is definitely also something AI does. It has seen text with comments or summaries underneath, or forum-style conversations. Some of the amateur literature contains lines like 'end of story' or 'end of part 1' and then some commentary. But nice move that it decided to mock you 😂

Thanks for providing a comparison to human NSFW chats. I always wondered how that works (or turns out / feels). Are there dedicated platforms for that? Or do you look for people on Reddit, for example?

[–] [email protected] 2 points 6 months ago* (last edited 6 months ago) (1 children)

The LLMs use a lot of memory. So if you're doing inference on a GPU, you're going to want one with enough VRAM, like 16GB or 24GB. I heard lots of people like the NVidia 3090 Ti because that graphics card could(/can?) be bought used for a good price for something that has 24GB of VRAM. The 4060 Ti has 16GB of VRAM and (I think) is the newest generation. And AFAIK the 4090 is the newest consumer / gaming GPU with 24GB of VRAM. The gaming performance of those cards isn't really the deciding factor; any of the somewhat newer models will do. It's mostly the amount of VRAM on them that is important for AI. (And pay attention, an NVidia card with the same model name can have variants with different amounts of VRAM.)

I think the 7B / 13B parameter models run fine on a 16GB GPU. But at around 30B parameters, 16GB isn't enough anymore. The software will start "offloading" layers to the CPU and it'll get slow. With a 24GB card you can still load quantized models with that parameter count.

(And the professional equipment dedicated to AI includes cards with 40GB, 48GB or 80GB of VRAM. But those aren't sold for gaming and are also really expensive.)
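For a very rough feel of the numbers, here's my own back-of-the-envelope sketch. It counts the weights only plus a flat overhead guess and ignores the KV cache and context length, so treat it as a lower bound:

```python
def rough_vram_gb(params_billion: float, bits_per_weight: float = 4.0, overhead_gb: float = 1.5) -> float:
    """Weights only, plus a flat overhead guess; ignores KV cache and context length."""
    weight_gb = params_billion * bits_per_weight / 8  # billions of params * bytes per weight
    return weight_gb + overhead_gb

for size in (7, 13, 30, 70):
    print(f"{size}B at 4-bit: roughly {rough_vram_gb(size):.1f} GB")
# 7B/13B land comfortably under 16 GB, ~30B already overflows a 16 GB card,
# and a 4-bit 70B wants well over 24 GB -- matching the experience above.
```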

Here is a VRAM calculator:

You can also buy an AMD graphics card in that range. But most of the machine learning stuff is designed around NVidia and their CUDA toolkit. So with AMD's ROCm you'll have to do some extra work and it's probably not that smooth to get everything running. And there are fewer tutorials and people around with that setup. But NVidia sometimes is a pain on Linux. If that's of concern, have a look at ROCm and AMD before blindly buying NVidia.

With some video cards you can also put more than one into a computer, combine them and thus have more VRAM to run larger models.

The CPU doesn't really matter too much in those scenarios, since the computation is done on the graphics card. But if you also want to do gaming on the machine, you should consider getting a proper CPU for that. And you want at least as much RAM as you have VRAM, so probably 32GB. But RAM is cheap anyway.

The Apple M2 and M3 are also liked by the llama.cpp community for their excellent speed. You could also get a MacBook or iMac. But buy one with enough RAM, 32GB or more.

It all depends on what you want to do with it, what size of models you want to run, how much you're willing to quantize them. And your budget.

If you're new to the hobby, I'd recommend trying it first. For example, kobold.cpp and text-generation-webui with the llama.cpp backend (and a few others) can do inference on the CPU (or on the CPU plus some of it on the GPU). You can load a model on your current PC with that and see if you like it, and get a feeling for what kind of models you prefer and their size. It won't be very fast, but it'll do. Lots of people try chatbots and don't really like them. Or it's too complicated for them to set it up. Or you're like me and figure out you don't mind waiting a bit for the response and your current PC is still somewhat fine.
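In case it helps, here's a minimal sketch of what that "CPU plus some of it on GPU" offloading looks like with llama-cpp-python (the same "GPU layers" idea those UIs expose; the model file name is again just a placeholder):

```python
from llama_cpp import Llama

# Pure CPU inference: slow, but runs on any machine with enough RAM.
cpu_llm = Llama(model_path="mythomax-l2-13b.Q4_K_M.gguf", n_gpu_layers=0)

# Partial offload: push as many layers as fit into VRAM, keep the rest on the CPU.
# -1 offloads all layers; with little VRAM, start low and raise it until you run out.
hybrid_llm = Llama(model_path="mythomax-l2-13b.Q4_K_M.gguf", n_gpu_layers=24)
```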

[–] [email protected] 10 points 6 months ago* (last edited 6 months ago)

Can you tell us a bit more about it?

  • What kind of LLM do you use?
  • How do you do the prompting? One-shot? Would you like to share that?
  • Do you keep the generated stories private?
[–] [email protected] 2 points 6 months ago* (last edited 6 months ago)

Thanks, yeah, this is definitely very useful to me. Lots of stuff regarding this isn't really obvious. And I've made every mistake that degrades the output: giving conflicting instructions, inadvertently steering things in a direction I didn't want so it got shallow and predictable, or not setting enough direction.

Briggs Myers

I agree, things can prove useful for a task despite not being 'true' (for lack of a better word). I can tell by the way you write that you're somewhat different(?) from the usual demographic here. Mainly because your comments are longer and focused on detail. And it seems to me you're not bothered with giving "easy answers", in contrast to the average person who is just interested in getting an easy answer to a complex problem. I can see how that can prove to be incompatible at times. In real life I've always done well by listening to people and then going with my gut feeling concerning their personality. I don't like judging people or putting them into categories, since that doesn't help me in real life and narrows my perspective. Whether I like someone or want to listen to them, for example for their perspective or expertise, is determined by other (specific) factors, and I make that decision on a case-by-case basis. Some personality traits often go together, but that's not always the case and it's really more complex than that.

Regarding story-writing it's obviously the other way around: I need to guide the LLM into a direction and lay down the personality in a way the model can comprehend. I'll try to incorporate some of your suggestions. In my experience the LLMs usually get the well-known concepts, including some of the information the psychology textbooks have available. So, I haven't tried it yet, but I'd also conclude that it's probably better to have it deduce things from a Briggs Myers personality type than to describe it with many adjectives. (That's what I've done up to this point.)

In my experience the complexity starts to pile up if you do more than the obvious or simple role-play. I want characters with depth, ambivalence... And conflict is what drives the story. Back when I started tinkering with AI, I did a submissive maid character. I think lots of people have started out with something like that, and even the more stupid models can easily pull it off. But you can't then go on and say the character is submissive and defiant at the same time; it just confuses the LLM and doesn't provide good results... I'm picking a simple example here, but that was the first situation where I realized I was doing it wrong. My assessment is that we need some sort of workaround to get it into a form that the LLM can understand and do something with. I'm currently busy with a few other things, but I'll try introducing psychology and see whether the other workarounds you've described, like shadow-characters, prove useful to me.

If you pay very close attention to each model, you will likely notice how they remind themselves [...]

Yes, I've observed that. It comes as no surprise to me that LLMs do it, as human-written stories also do that: repeat important stuff, or build a picture that can later be recalled by a short mention of the keywords. That's in the training data, so the LLMs pick up on it.

With the editing it's a balance. It picks up on my style and I can control the level of detail this way, start a specific scene with a first sentence. But sometimes it seems I'm also degrading the output, that is correct.

the best way to roleplay within Oobabooga itself is to use the Notepad tab

I've also been doing that for some time now.

drop boundaries, tell it you know it can [...]

Nice idea. I've done things like that. Telling it it is a best-selling writer of erotic fiction already makes a good amount of difference. But there's a limit to that: if you tell it to write intense underground literature, it also picks up on the lower quality, language and quirks of amateur writing. I've also tried an approach like few-shot prompting, giving it a few darker examples to shift the boundaries and atmosphere. I think the reason why all of that works is the same: the LLM needs to be guided where to orient itself, what kind of story type it's trying to reproduce, because they all have certain stereotypes, tropes and boundaries built in. Without specific instructions it seems to prefer the common way: remaining within socially acceptable boundaries, or just using something as an example of something that is wrong, immediately contrasting ethical dilemmas and pushing towards a resolution. Or not delving into conflict too much.

And I've never found useful what other people do: overly telling it what to do and what not to do. Especially phrasing it negatively, "Don't repeat yourself", "Don't write for other characters", "Don't talk about this and that"... has never worked for me. It's more the opposite, it makes everything worse. And I see a lot of people doing this. In my experience the LLM can understand negatively worded instructions, but it can't "not think of an elephant". Positively worded things work better. And better yet is to set the tone correctly, have what you want emerge from simple concepts and a concrete setting that answers the "why" instead of just telling it what to do.
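To make the contrast concrete, here's a small toy illustration of my own (the character and wording are made up, not from any particular guide):

```python
# The style I keep seeing and that never worked for me: a pile of don'ts.
negative_style = (
    "You are Mira. Don't repeat yourself. Don't write for other characters. "
    "Don't break character. Don't mention you are an AI."
)

# What has worked better for me: positive, concrete scene-setting that answers the 'why'.
positive_style = (
    "You are Mira, the estate's long-serving maid. You speak only for yourself, in first "
    "person, because the guests do their own talking. You keep the conversation moving "
    "with fresh observations about the evening, the house and its secrets."
)
```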

I've also introduced further complexity, since I don't like spoon-feeding things to the reader. I like to confront them with some scenario, raise questions, but have the reader make up their mind, contemplate and come up with the answers themselves. The LLMs I've recently tried know that this is the way stories are supposed to be written, and why we have open-ended stories. But they can't really do it. The LLMs have a built-in urge to answer the questions and include some kind of resolution or wrap-up. Or they analyze the dilemmas they've just made up and focus on the negative consequences to showcase something. And this is related to the point you made about repeating information in the stories: if I just rip it out by editing, it sometimes leads to everything getting off-track.

I'll try to come up with some sort of meta-level story for the LLM. Something that answers why the ambivalence is there, why to explore the realm beyond boundaries, why we only raise questions and then don't answer them. I think I need something striking, easy and concrete. Giving the real reason (I'm writing a story to explore things and this is how stories work) doesn't seem to be clear enough to yield reliable results.

[–] [email protected] 2 points 6 months ago* (last edited 6 months ago) (2 children)

So... I've tested both models and I think I can see what you like about them. And -wow- I didn't get to try 70B models before and it's really a step up. With smaller models it's more mixed, sometimes they get a complex concept, sometimes they don't. And it seems the 70B model is able to pick up on a good deal more complexity; it has the intelligence to understand more things and is then able to go in some proper direction. At least more often.

I'm not entirely sure if I can make good use of my new information... Writing erotic literature really isn't that easy. I've been tinkering around with AI-assisted storywriting for some time now, and I never got good results that I'd like to share. I mean, it can write simple smut... And regarding that: a quick thanks to you. I read your other comment over at !asklemmynsfw and I think I agree with your opinion on erotic stories. I've now included a specific instruction in my prompt to balance the story more, like you said there. Focus on a good story, make it tingle, but the porn has to be the icing on the cake. For now I've also instructed it to contrast both things, have a story that raises questions and is intellectual and provide a stark contrast with immersive acts and graphic description... "The skillful combination of both aspects is what makes this story excel." Let's see what the LLM can do with that instruction...

But storywriting really isn't easy. Even the 70B model is far from perfect. And up to this point I didn't find a single model that can do everything. Some of them are intelligent but not necessarily good for stories. Some of them seem to have been trained on stories and get the language right for such a thing; some overdo it. And not every model can write lewd stories. It's really obvious whether a model has seen some erotic literature or simple smut, or whether it has seen no such stories and just writes one or two abstract sentences summarizing it, because it's never seen more detailed descriptions. And there is the pacing... I think local LLMs are still far away from being able to write stories on their own. Some consistently write like 10 paragraphs and call it a novel. Almost all of them brush over things that would be interesting to explore; instead they focus on some other scene that's kind of boring. They write meaningless dialogue that would be alright if I was casually talking to a chatbot and role-playing my every-day life, but it's not very interesting here. They miss important stuff and make up random details later on. I mean, half the models don't have a clue what is interesting to write and what can be skipped or summarized.

Another issue is trying to wrap things up (early) or pushing towards the end. Or doing super obvious plot twists. Sometimes this makes me laugh. But they're also very creative, I like that. In between the (sometimes) bad writing there are often some interesting ideas or crazy creativity, things I wouldn't have thought of. Or other gems, single sentences that really get something on point.

I'm still exploring. I've tried different approaches: laying down a rough concept of a setting and then letting it do its thing. I've also tried being more methodical and giving it to them more like a homework assignment. Come up with ideas to explore... then with several plot ideas... then critique their own ideas, pick one and revise it... Come up with the characters... Then the main story arc, subplots, twists and important scenes, write down the table of contents and chapter names to get a structure for the novel... And then start the actual writing with all of that information laid out.
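A rough sketch of what that "homework assignment" pipeline looks like in code, assuming you have some generate() wrapper around whatever backend you use (the prompts and the premise are just placeholders for the idea):

```python
# Hypothetical staged pipeline: each step feeds the previous output back into the model.
def generate(prompt: str) -> str:
    """Placeholder for your backend call (llama-cpp-python, text-generation-webui API, ...)."""
    raise NotImplementedError

premise = "An archivist discovers the library's restricted wing rewrites itself at night."

ideas    = generate(f"Brainstorm five themes worth exploring in this premise:\n{premise}")
plots    = generate(f"Premise: {premise}\nThemes:\n{ideas}\nSketch three possible plots.")
critique = generate(f"Critique these plots, pick the strongest one and revise it:\n{plots}")
cast     = generate(f"Chosen plot:\n{critique}\nCreate the main characters with flaws and goals.")
outline  = generate(f"Plot:\n{critique}\nCharacters:\n{cast}\n"
                    "Lay out the main arc, subplots, twists and a chapter-by-chapter table of contents.")
chapter1 = generate(f"Outline:\n{outline}\nWrite chapter one in full, following the outline.")
```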

I think that's yielded the best results so far. I'm positive I'll get to a point where I like the results enough to upload them. And write a guide on how exactly I did that. Currently it's more or less me writing 80%, pausing the LLM after every second sentence, revising that and constantly pushing the story towards a better direction and fighting over the level of detail the LLM deemed appropriate. I think I will get better. Turns out I've been using the wrong models anyway and relied too much on Psyfighter and such, which might be great for role-play dialogue. But with my recent test it turns out I don't really like their output when it comes to storywriting.

Edit: Yeah, and one more thing: it came up with a nice plot which I liked and explored further. And at some point the AI cited the 2018 science fiction movie it got all the ideas from 😆 That really made me laugh. Seems some of my ideas aren't that original. But getting some recommendations is nice, I'll just skip writing the story myself and watch the movie instead.
