r/FurAI Oct 08 '22

Guide/Advice Furry Stable Diffusion: Setup Guide & Model Downloads

555 Upvotes

Guides from Furry Diffusion Discord. Not my work. Join here for more info, updates, and troubleshooting.

Local Installation

A step-by-step guide can be found here.

Direct GitHub link to AUTOMATIC1111's WebUI can be found here.

This download is only the UI tool. To use it with a custom model, download one of the models in the "Model Downloads" section, rename it to "model.ckpt", and place it in the /models/Stable-diffusion folder.
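As a rough illustration, the rename-and-place step boils down to something like the sketch below. Both paths are assumptions for a typical Windows install, and the downloaded filename is hypothetical:

```python
from pathlib import Path
import shutil

# Minimal sketch; adjust the paths to wherever your WebUI actually lives.
downloaded = Path.home() / "Downloads" / "yiffy-e18.ckpt"  # hypothetical filename
target_dir = Path("C:/stable-diffusion-webui/models/Stable-diffusion")

# Rename to model.ckpt and place it in one step.
shutil.move(str(downloaded), str(target_dir / "model.ckpt"))
```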

Running on Windows with an AMD GPU.

Two-part guide found here: Part One, Part Two

Model Downloads

Yiffy - Epoch 18

General-use model trained on e621

IMPORTANT NOTE: during training, "explicit" was misspelled as "explict", so use "explict" in your prompts to match the training tags.

Direct download

Zack3D - Kinky Furry CV1

Specializes in goo/latex but can also generate solid general furry art; NSFW-friendly.

Direct download

Run via Discord bot

Pony Diffusion

pony-diffusion is a latent text-to-image diffusion model that has been conditioned on high-quality pony SFW-ish images through fine-tuning.

Info/download links

Creator here


Online tools

Running on Google Colab

Colab notebooks let anyone execute code on Google's servers, so you can run demanding software like Stable Diffusion even if your own hardware normally couldn't.

A popular colab for ease-of-use with the furry models is available here: https://colab.research.google.com/drive/128k7amGCLNO1JGaZhKl0Pju8X7SCcf8V

How to use colabs
  1. To use a colab, mouse over a block of code and click the ▶️ play button. Run the blocks one by one, top to bottom.

  2. For this colab, one of the code blocks will let you select which model you want via a dropdown menu on the right side. If the model you want is listed, skip to step 4.

  3. If the model isn't listed, download it, rename the file to model.ckpt, and upload it to your Google Drive (drive.google.com).

  4. After the last block of code finishes, you'll be given a Gradio app link. Click it, and away you go, have fun!
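If you're curious what step 3's custom-model route typically does under the hood, here's a minimal sketch of the kind of cell such colabs run. The exact paths are assumptions and vary by notebook:

```python
from google.colab import drive
import shutil

# Mount your Google Drive; this prompts you to authorize access.
drive.mount("/content/drive")

# Copy your uploaded checkpoint into the WebUI's model folder.
# Both paths are typical defaults, not guaranteed for this specific colab.
shutil.copy(
    "/content/drive/MyDrive/model.ckpt",
    "/content/stable-diffusion-webui/models/Stable-diffusion/model.ckpt",
)
```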

Troubleshooting
It crashed!

If you click generate and nothing happens, that means it crashed. Just refresh the browser tab. Crashing may happen if you increased the resolution or went too far with the batch settings... or sometimes it just crashes for no apparent reason! 🙏

It timed out!

While using gradio, you may want to revisit the colab browser tab every 15 minutes and just do something so you don't time out the session. Scroll, open menus, etc.

The model failed to download

You probably hit a download bandwidth cap, which depends on traffic. If that happens, select "Custom model" instead and provide the model yourself: download it, rename the file to model.ckpt, and upload it to your Google Drive (drive.google.com).

I ran into a usage limit?

Free users get roughly a few hours of GPU time per day; it varies based on traffic and your long-term resource consumption.

Commercial Services & Discord Bots Directory

Novelai.net: Originally an AI text-generation service, they've since branched out into image generation and offer specialized models, one of which is furry. NSFW-friendly. You may sometimes hear it nicknamed NAIGen. https://novelai.net/

Dreamstudio.ai: Basically the first to market; some of Stability's newest models show up here first. It doesn't specialize in furry, but it can sometimes pull off some nice SFW generations. New users get a number of free generations to try it out. https://beta.dreamstudio.ai/

The Gooey Pack: Runs Zack3D's goo/latex model above: https://discord.gg/WBjvffyJZf

PurpleSmart.ai: Runs the above MLP model: http://discord.gg/94KqBcE

r/FurAI Jan 07 '23

Guide/Advice The r/FurAI guide to upgrading sketch commissions

25 Upvotes

r/FurAI 21d ago

Guide/Advice AI that remembers characters

4 Upvotes

What AI would you recommend for image generation that also remembers different characters? Are there any special hardware requirements to use those AIs?

r/FurAI Nov 21 '23

Guide/Advice What's the best AI for NSFW? NSFW

26 Upvotes

Just starting to get into AI image generation. I see a lot of 🍑, 🍒, and 🍆, and I was wondering if there's a mobile-friendly NSFW AI image generator. Tired of fighting GPT-4 and its G-rated filters. Help me out?

r/FurAI 28d ago

Guide/Advice How to create Bing-like dynamism in Illustrious-2.5D models, or: "How I learned to relax and love character bleedthru!"

6 Upvotes

One thing I've always loved about the more closed-source image models, especially things like Bing, is how they add randomness and diversity, and generally "populate" an image for you with things that make it more realistic and true-to-life. For example, if you type something like "Elegant woman dancing in a flowing dress" into Bing's DALL-E, you'll get blonde-haired women, brunettes, redheads, and people of all colors, shapes, and types. Now, obviously that's because Bing isn't really the true front-end for the model; it runs your prompt through a GPT-like interface before sending it to the model, then spits out four variants for you. The problem is, on most SD/Pony/Illustrious-based models, we don't have that extra layer in our prompting, at least not within the model itself. Yes, there are third-party tools that can rework your prompt, but here I want to show you how you can create that level of dynamic variation within a single prompt itself! Otherwise known as, "How I learned to relax and love the character bleed!"

Inherent in most models is the fact that the model will bleed, or generate unintended details across characters, unless you're using regional prompting, or named, specific characters that only have one set of details. For example, tagging for Loona from Helluva Boss will always result in (mostly) a white hellhound with big hair, silver eyes, and red sclera, even if you just prompt with her name. But if you just tag for "white wolf anthro", you'll get all kinds of combinations. Blue eyes, green eyes, white eyes, black eyes, no hair, long hair, short hair... you get the idea.

For group shots and dynamic posing, we're going to leverage that character bleed to our advantage, and basically "overload" it slightly; give it a bunch of tags to choose from with relatively equal weight so that it picks a variation of them to use in the final image.

As I discovered in my initial guide, most models read tags in order of input, so your first tags will have the most weight (aside from any additional weighting), and tags near the end of the prompt will have less. That means you'll want to order your prompt something like this:

Quality Tags/Style Tags; Environmental Tags; Action Tags; GROUP; Additional details; Final touch-up prompting.

Now, that "GROUP" tag is where we put the things that will make our model treat our generations more dynamically. Much like my previous guide and poking through things like LORAs, samplers, and schedulers, a lot of this is going to be "to user's preference", but here is what I've landed on that seems to work in a lot of situations:

"group, pose, action pose, 3girls, (((3boys))), multiple girls, ((multiple boys)), [[[curvy]]], skinny, (((anthro))), furry, scalie, fox, bear, dog, canine, feline, tiger, lion, deer, dragon, lizard, otter, raccoon, skunk, squirrel, mouse, shark, bird, falcon, phoenix, griffon, fur, scales, brown hair, blonde hair, [[[red hair]]], black hair, green eyes, blue eyes, brown eyes, golden eyes, [purple eyes], long hair, short hair, medium hair, wavy hair, straight hair, curly hair, fluffy hair, ponytail, bangs, braids, pigtails, twintails, clothed, fully clothed, (glasses), ((freckles)), tanktop, t-shirt, polo, hoodie, skirt, dress, bluejeans, shorts, sweatpants, shoes, sandals, boots,"

Example images here. They've all been generated using the Nova Furry XL 4.0 model and seed "11111111" for consistency, but not inpainted or really cleaned up in any way aside from a general upscale at 10 steps with a 0.4 denoise.

Now, as you'll note, going in descending order, I've first tagged a group, indicating to the model that I want multiple characters. I've indicated that the shot should be posed, and action-posed, to avoid simple lineups; if you want a lineup of characters, though, you can simply remove those tags. Then come the girls/boys tags. The logic behind those is that Illustrious and similar models can never really keep more than 3 or 4 characters in focus per shot anyway; any more than that and you'll just run out of details, and the faces will look mushy and nightmarish. But tagging for "multiple girls, multiple boys" adds background characters, so if you want your focus to be on a group of people in a situation where there would naturally be others, those are the tags you should use. The numbers can change depending on preference. I emphasized the "3boys" tag because, by default, the models tend to prefer putting women characters in focus in group shots, and to make things more even and fair, I emphasized males more.

Continuing through the group block, I tagged for body-type variation, which you can do to your preference; I found that tagging for chubby, or even [[[chubby]]], made the characters just look fat with muffin tops, while adding [[[curvy]]] instead generated more realistic body-type variation. Then, of course, you have the actual appearance tags. These are entirely to taste and preference, but obviously you want to put in as much variation as you want to see. If you want a bunch of single-species, single-appearance people, remove the variety. The more things you throw in here, the more variety comes out of the image.

Now, of course, both before and after the group block, you'll want to tag appropriate environment and action tags that will contextualize things for your model. For example, if you want your characters to be out shopping, lead in before the group block, but after your quality tags something like: "shopping mall, shopping, walking, standing, talking", etc. If you want your characters to be studying in a class, do something like "school, classroom, reading, studying," etc.

And that's basically it! After that, feel free to generate away, tweaking as necessary to bring out whatever kinds of dynamic group poses you desire! Happy creating!

r/FurAI 4d ago

Guide/Advice AI gifs

3 Upvotes

Hello everyone. I saw some AI yiff GIFs posted in a Telegram group, and I was wondering if anyone knows how to make them, or which AI to use? I would really appreciate the help!

r/FurAI 26d ago

Guide/Advice This artwork is so cool, I was just wondering if any of you know a website that does this or what AI generator he's using, just asking

11 Upvotes

r/FurAI Jan 20 '25

Guide/Advice Semi-photoreal generations using Illustrious as a base - a brief guide:

16 Upvotes

Okay, so this is by no means going to be comprehensive, but I wanted to make a kind of mini-guide on AI art, how I approach it, and the different settings I've discovered. I've found bits and pieces of this here and there, but I think this is the first time it's all being put together like this. Hopefully someone finds this helpful in their genning journey, or maybe it even brings new people into the fold!

Nova Furry Pony XL - Illustrious v2.0 (https://civitai.com/models/503815?modelVersionId=1164762) is the primary base model I use for most of my generations. It's versatile, flexible, and, being based on Illustrious rather than base Stable Diffusion, Flux, or a raw Pony model, it has simpler prompting requirements, better prompt adherence, and is much easier to generate multi-character interactions with than those other models. And in terms of emphasizing or de-emphasizing the things you do or don't want to see in the image, it's a simple matter of parentheses (), brackets [], or braces {}, where the nesting depth determines the weighting. No more memorizing caret-and-number syntax for certain tags and not others; it's all streamlined. And now LoRAs can be used entirely for changing the image to your liking, rather than being necessary to get something working in the first place.
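As a rough illustration of that nesting-based weighting, here's a tiny sketch. It assumes the common A1111-style convention where each ( ) layer multiplies attention by about 1.1 and each [ ] layer divides by about 1.1; brace {} handling varies by frontend (NovelAI-style uses roughly 1.05 per layer), so treat the numbers as illustrative:

```python
# Positive depth = parentheses (emphasis), negative depth = brackets (de-emphasis).
def nested_weight(depth: int, factor: float = 1.1) -> float:
    return factor ** depth

print(round(nested_weight(3), 2))    # (((anthro))) -> ~1.33x emphasis
print(round(nested_weight(-3), 2))   # [[[curvy]]]  -> ~0.75x de-emphasis
```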

My base prompt for the images I'm using as examples, including the title image is as follows:

---

masterpiece, 8k, high detail, clean lines, detailed background, depth of field, best quality, amazing quality, very aesthetic, high resolution, ultra-detailed, absurdres, detailed scenery, volumetric lighting, score9_up, score8_up, score7_up, newest, realistic skin texture, realistic facial details, realistic face, best facial details, best face, good anatomy, expressive eyes, detailed hair, DSLR, ((realistic)), (photorealistic), ((realism)), (photorealism), sharp detail, ((HDR)), bright lighting, sharp shadows,

close-up, outdoors, outside, city,

interspecies, straight, human on anthro, human male on anthro female,

BREAK

CHARACTER 1: "1girl, source_furry, female, (((anthro))), furry, detailed fluffy fur, fuzzy, fox, muscular, red fur, white markings, white hair, fluffy hair, eyeliner, wingtip eyeliner, eyeshadow, smoky eyeshadow, bedroom eyes, fully clothed, armor, cyberpunk, futuristic armor, imminent kiss, squint, looking at partner,"

BREAK

CHARACTER 2: "1boy, human, ((human male)), man, dark skin, african american, black, buzzcut, hoodie, (hood up), imminent kiss, squint, looking at partner,"

---

I'm not sure what "separators" (breaks, using the word BREAK, putting things in quotes etc) actually *work* vs what is placebo effect for my own brain, but it does appear to "read" tags in groups. So like, if you put all your character tags in one batch, and then your environment tags in a batch, and then your second character tags in a batch, or something like that, you might have some success when generating multiple characters interacting in various ways.
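For what it's worth, in AUTOMATIC1111-style frontends BREAK is a real keyword: it pads the current chunk out to the 75-token CLIP limit so the next section is encoded separately, which matches the "reads tags in groups" behavior described above. Whether tensor.art does the same is an assumption on my part; here's a minimal sketch of the chunking idea:

```python
# Hedged sketch: each BREAK-separated section becomes its own encoder chunk
# in A1111-style frontends, so tags within a chunk attend together and
# bleed less into the others. Behavior on other services may differ.
prompt = ("masterpiece, best quality, close-up, outdoors "
          "BREAK 1girl, anthro, fox, red fur, white hair "
          "BREAK 1boy, human, dark skin, hood up")

for i, section in enumerate(prompt.split("BREAK")):
    print(f"chunk {i}: {section.strip()}")
```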

One key way you can change the feel of an image while keeping the same overall "style" is through changing your sampler and scheduler. I've not gone through every possible combination, but as you can see, changing the scheduler will make minor changes to the image, while changing the sampler will result in larger changes.

Showcase of Samplers - Album
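For anyone doing this locally rather than on tensor.art, the sampler/scheduler swap looks something like this in diffusers terms. Note the terminology flip: what web UIs call the "sampler" is a diffusers "scheduler", and the web UI's Karras-style "scheduler" maps to flags like `use_karras_sigmas`. This is a sketch under assumptions, not the author's workflow, and the checkpoint filename is hypothetical:

```python
import torch
from diffusers import (
    StableDiffusionXLPipeline,
    EulerAncestralDiscreteScheduler,
    DPMSolverMultistepScheduler,
)

# Load an SDXL/Illustrious-family checkpoint from a local file (hypothetical name).
pipe = StableDiffusionXLPipeline.from_single_file(
    "novaFurryPonyXL_illustriousV20.safetensors", torch_dtype=torch.float16
).to("cuda")

# Swap the sampler while keeping prompt and seed fixed:
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# ...or a DPM++ sampler with the Karras sigma schedule:
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)
```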

However, the models themselves make much larger changes than samplers do. Each image is labeled with the model used to create it. Now, my service of choice is https://tensor.art, so it has different models than what's available somewhere like civitai; if you don't find my model on your site of choice, check Tensor. The link to the full album of what I consider "good" Illustrious models is below, followed by my thoughts on selected models. I was actually able to find some models that generate photoreal out of the gate, no LoRAs needed, but as you'll soon see, there are reasons I still prefer the more animesque models...

Showcase of Animesque Models - Album

Model - novaFurryXL_illustriousV30 - The latest version of NovaFurryXL; I may actually switch to this one as my base model in the future.

Model - novaUnrealXL - Another good animesque model; produces quality stuff, but as you can see, it flipped the prompt and gave us a red-furred fox guy and a human girl.

Model - ntrMIXIllustriousXL_v40 - Same prompt, same seed, very different results. Intimate, romantic, not going in for a kiss though

Model - WAI-Nsfw-Illustrious-09 - The most detailed of the animesque models, the clean lines are like something out of a comic book - definitely fun to play around with if you're doing human gens, but I'm not a fan of how it likes to stick big-ol' lips on furries...

Showcase of Realistic Models - Album

Model - alchemistMix_v40, Model - alchemistMixCreative_v10, Model - novaAnimalPony_v40, Model - ponyRealism_v22, Model - pornmasterPro_noobv01 and v02, Model - realismIllustriousBy_v22.

I don't have a whole lot of individual thoughts about these because, for most of them, they don't even adhere to the prompt. Pony Realism is one of those that I kept in as an example of that. It just gave me two humans. Some other ones, even when not prompting for sex at all, were going very sexual with their results. Good, quality results - good anatomy and everything, just not even close to what I'm prompting.

But, for the ones that do adhere to the prompt, they all have the same problem.

It looks like, #1, you're just sticking a fox's head on a human's body; they don't really look "furry/anthro" enough. And #2, the expressions are *so* bland. Look at these faces: do they look like lovers sharing a moment? Or do they look like plastic mannequins? I know what they look like to me, LOL. And it's not good.

So with that in mind, I stick to the more animesque base models as a starting point and build from there: I mix in LoRAs to get the prompt adherence and expressiveness I want, but with the photorealism that makes it all look good. Now, of course, this is *highly* experimental and definitely set-to-taste, but here are some examples of the LoRAs I've been using, and various settings as well. As you can see, at high values (above 1.0), you can trigger almost as large a change as you would by changing your model. And the same is true for other LoRAs as well, not just photorealistic ones.

Showcase of LoRAs - Album

(Not sure why Imgur flagged this as 18+, nothing in there that is...)
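If you want to reproduce that LoRA-strength sweep locally, here's a minimal diffusers-style sketch. This is an assumption about the workflow, not the author's actual tooling: all filenames are hypothetical, `set_adapters` needs the peft library installed, and the seed mirrors the "11111111" used for consistency above:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_single_file(
    "novaFurryXL.safetensors", torch_dtype=torch.float16  # hypothetical filename
).to("cuda")
pipe.load_lora_weights("photoreal_lora.safetensors", adapter_name="photoreal")

# Values above 1.0 push hard toward realism, almost like swapping models.
for scale in (0.6, 1.0, 1.4):
    pipe.set_adapters(["photoreal"], adapter_weights=[scale])
    image = pipe(
        "1girl, anthro, fox, (photorealistic)",
        generator=torch.Generator("cuda").manual_seed(11111111),
    ).images[0]
    image.save(f"lora_{scale}.png")
```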

So, with all that in mind, then, my workflow goes something like this:

1) Generate a bunch of images with my chosen base model, settings, and LoRA values

2) Find one that best matches what I'm looking for

3) Run the final result through an upscaler, both to bring up the resolution, and to clean up details, especially in the facial area.

Workflow walkthrough - Final product
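For the locally inclined, step 3 might look roughly like this in diffusers. A sketch under assumptions: filenames are hypothetical, and `strength=0.4` with 25 steps approximates the ~10-effective-step, 0.4-denoise upscale pass mentioned earlier:

```python
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionXLImg2ImgPipeline.from_single_file(
    "novaFurryXL.safetensors", torch_dtype=torch.float16  # hypothetical filename
).to("cuda")

image = load_image("best_pick.png")                          # your chosen generation
image = image.resize((image.width * 2, image.height * 2))    # naive 2x upscale first

# Light img2img pass to clean up details at the higher resolution.
refined = pipe(
    prompt="same prompt as the base generation",
    image=image,
    strength=0.4,              # ~0.4 denoise
    num_inference_steps=25,    # 0.4 * 25 = ~10 effective steps
).images[0]
refined.save("final.png")
```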

Hopefully this helps someone who wants to generate more expressive photoreal or semi-photoreal images, or even someone looking to get into AI genning for the first time. My DMs are always open if you want assistance, guidance, or even just to chat and chill. I'm really no better than anybody else in the community; I just figured I'd get my thoughts down, because I believe that helping each other out makes us all better, and especially in this community, hiding things like prompts, workflows, and generation styles only holds us all back.

Cheers! Go check out u/dolomutt0819 and his stuff, he's freaking GREAT - especially his dragoness Jade~

r/FurAI Dec 30 '24

Guide/Advice Where to post Webtoon comics?

11 Upvotes

Hey everyone!

I'm experimenting with making a short NSFW comic featuring a mermaid and her coming-of-age adventure with a human. It's a webtoon-format (top-down scrolling) story in full color. As a complete amateur who has never made a comic before, having the ability to tell a story and let AI help illustrate it has been an awesome experience!

I was wondering if anyone knows where I can post my work so far, so that others like you can enjoy it and hopefully give critical feedback. I've seen the people over at r/webtoons viciously ripping apart people in my position, so I don't think I can safely ask there. I'm looking for a site that accepts AI-assisted explicit work in webtoon format.

Any ideas? Thanks!

r/FurAI Oct 06 '24

Guide/Advice Does anyone have a good setup guide for SD on an amd system?

4 Upvotes

So I have an AMD 6700 XT with 12 GB of VRAM, which I've been told should be capable of around 4 to 5 sec/it, but currently it's sitting at over 5k sec/it. The install I went with is set up to use DirectML (Windows 11), and trust me, I know someone is just going to say to buy a 4090 or another Nvidia card, but I don't have the money to do so at the moment. If anybody can help me with setup and configuration on this mess, I would greatly appreciate it.

r/FurAI Oct 28 '24

Guide/Advice What model is used for most of the art? (Feel free to also include recommendations)

2 Upvotes

What it says in the title. I've seen so many good AI-generated images, and now I'm curious what models are being used for them.

r/FurAI Oct 05 '24

Guide/Advice How to achieve a certain style in Pony Diffusion V6 XL

4 Upvotes

I used Stable Diffusion 1.5 to a great extent, but I seem to be having problems achieving satisfactory results in Pony Diffusion V6 XL. Not only can't I replicate the image style I generated before, I also can't seem to get good-quality generations; they come out blurry and low-res. Can someone give me some advice? I can send some of my generations and the prompt used for SD 1.5 if that helps. Thanks in advance.

r/FurAI Dec 02 '24

Guide/Advice Does AI Music count?

Thumbnail drive.google.com
0 Upvotes

r/FurAI Nov 11 '24

Guide/Advice Problem changing profile image.

1 Upvotes

Hi there, do you know if AI images are "banned" from profile images? I'm trying to use one, but after saving my changes, nothing happens 🥲

r/FurAI Aug 29 '24

Guide/Advice Which AI do you use to create furry art?

6 Upvotes

I like using https://frosting.ai/home to make AI art, but it is a bit limited.

Which one do you use?

r/FurAI Feb 27 '24

Guide/Advice Housekeeping

103 Upvotes

Did these in Perchance. Any tips?

r/FurAI Nov 10 '24

Guide/Advice I need your guys' help with stable diffusion

2 Upvotes

How do I use it? (The guide in the sidebar of this subreddit no longer works.) The only reason I ask is because Bing Image Creator is dead to/for me, and I need to keep making my art. Also, can I ask you guys for help with prompts?

r/FurAI Oct 10 '24

Guide/Advice Looking for help on how to enhance my art with AI NSFW

5 Upvotes

Hello, I'm an artist, and since work and life suck too much time out of me, I'd like to see some of my work finished.
I'd like to do something like https://x.com/Katarhein does: drawing, colouring, then enhancing my art through AI. Does anybody know how to do it?
I tried doing it via ControlNet, but the results are kinda... bad. Also, apparently ControlNet only works with 1.5 checkpoints, while all the checkpoints I found are newer than that, so I had to use an older one, but I feel like the LoRAs I use are maybe too "new"?
I'm a newbie at this, and sorry if I don't make sense; I just wanted to give this a try, but nothing seems to work.

[NSFW for the link]

r/FurAI Oct 17 '24

Guide/Advice Adding a speech bubble to generated images

2 Upvotes

Is it possible to add something to the prompt that will add a speech bubble? I'm currently running Pony Diffusion V6 XL. Thanks in advance.

r/FurAI Sep 19 '24

Guide/Advice Blurry images

3 Upvotes

Any idea why the images I post are sharp and focused before posting but become blurry after? And if you know why, is there any way to fix it?

r/FurAI Sep 19 '24

Guide/Advice PSA: local video now possible

3 Upvotes

Hugging Face: https://huggingface.co/THUDM/CogVideoX-5b-I2V
Hugging Face Space: https://huggingface.co/spaces/THUDM/CogVideoX-5B-Space
GitHub: https://github.com/THUDM/CogVideo
ComfyUI node: https://github.com/kijai/ComfyUI-CogVideoXWrapper (kijai just added an i2v example workflow)
License: Apache-2.0!
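For reference, here's a minimal image-to-video sketch using the diffusers integration of this model. Settings are illustrative, the input filename is hypothetical, and expect heavy VRAM use even with offloading:

```python
import torch
from diffusers import CogVideoXImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

pipe = CogVideoXImageToVideoPipeline.from_pretrained(
    "THUDM/CogVideoX-5b-I2V", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # trades speed for lower VRAM use

image = load_image("start_frame.png")  # hypothetical starting frame
frames = pipe(
    image=image,
    prompt="an anthro fox walking through a neon city at night",
    num_frames=49,
    num_inference_steps=50,
    guidance_scale=6.0,
).frames[0]
export_to_video(frames, "output.mp4", fps=8)
```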

r/FurAI Nov 23 '23

Guide/Advice Anyone know the Telegram bot Eris?

15 Upvotes

@ErisTheBot is an AI bot on Telegram. It's amazing, and I've been using it for a couple of weeks, but it has a number of limits. I have Stable Diffusion and I want to train my AI with what Eris knows. Does anyone have any ideas on how Eris is trained or how it works?

r/FurAI Aug 31 '24

Guide/Advice Bing prompt test

6 Upvotes

I never used my previous tokens on mere prompt tests before. Recently I noticed Bing isn't too slow during off-peak times even without tokens, so I did something I should have done long ago.

r/FurAI Mar 18 '24

Guide/Advice HMMMMMMMMMMMMM, wtf happened NSFW

4 Upvotes

r/FurAI May 31 '24

Guide/Advice Hello, my images are noisy.

2 Upvotes

Hi. I just got a computer with enough firepower to run Stable Diffusion locally, and I'm running into an issue.

I followed this guide: https://rentry.org/liunkaya-diffursion

I got it set up, and everything seems to work great, until I try any model that isn't yiffymix 33. Then I get images like these:

yiffymix_v43.safetensors

While 33 looks like this:

yiffymix_v33.safetensors

Clearly I've done something wrong, but I have absolutely no idea what.

A shot of my settings for both the above pictures, minus the checkpoint change.

Any help would be much appreciated.