I’m looking for advice on using AI tools to create breast expansion artwork. I tried a few generators but the results weren’t what I expected. Does anyone have tips on prompts or programs that work best for this? Any help would be appreciated.
Yeah, this is def a thing people use AI for, even if the AI itself gets a bit shy about it sometimes. TBH, the mainstream generators (Midjourney, DALL-E, and the Stable Diffusion base models) are kinda squeamish about NSFW or fetish content, period. But folks in certain Discord servers and Reddit threads swear by uncensored custom models, especially with Stable Diffusion (check out models like “Anything V5” or “Eimis Anime Diffusion”); just search for NSFW models compatible with AUTOMATIC1111’s web UI.
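If you’d rather script it than click around the web UI, the bare-bones version with Hugging Face’s diffusers library looks something like this. Treat it as a sketch: the checkpoint path is a placeholder for whatever file you actually downloaded.
```python
# Minimal sketch: load a community SD checkpoint with diffusers.
# The file path below is a placeholder -- point it at your own download.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "./models/anything-v5.safetensors",  # hypothetical local checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # needs an NVIDIA GPU; swap for "cpu" if you're patient
```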
It’s all about the prompts + negative prompts. To nudge the AI in the breast expansion direction, people often stack descriptors: “large breasts, breast expansion, tight clothing, surprised expression, growing chest, popping buttons, exaggerated proportions” etc. And don’t forget to use negative prompts, like “extra arms, deformed, blurry, text, watermark, bad anatomy,” or the generator might turn your masterpiece into an eldritch horror of flesh and clothing.
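Script-wise, the prompt/negative split is just two arguments. Rough sketch, reusing the pipe from the snippet above; the descriptors are only examples pulled from this thread:
```python
# Sketch: stacked descriptors plus a negative prompt, per the advice above.
prompt = (
    "adult woman, breast expansion, tight clothing, surprised expression, "
    "popping buttons, exaggerated proportions, anime style"
)
negative = "extra arms, deformed, blurry, text, watermark, bad anatomy"

image = pipe(
    prompt,
    negative_prompt=negative,
    num_inference_steps=30,
    guidance_scale=7.5,  # how hard the model leans on your prompt
).images[0]
image.save("out.png")
```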
Results still vary based on the seed, sampling method, and how much “weight” you put on the key phrases. If you’re still unsatisfied, inpainting (redrawing just a section) can clean up anatomy fails. Some combine AI outputs with a little Photoshop wrangling or use img2img starting from a sketch.
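For the inpainting route, diffusers has a dedicated pipeline. A rough sketch, assuming you’ve painted a white-on-black mask over the broken region; stabilityai/stable-diffusion-2-inpainting is a real general-purpose inpainting checkpoint, though a community model will behave differently:
```python
# Sketch: redraw only the masked region to clean up anatomy fails.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

inpaint = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")

init = Image.open("out.png").convert("RGB").resize((512, 512))
mask = Image.open("mask.png").convert("RGB").resize((512, 512))  # white = redraw

fixed = inpaint(
    prompt="natural anatomy, torn shirt",
    image=init,
    mask_image=mask,
    num_inference_steps=40,
).images[0]
fixed.save("fixed.png")
```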
Ultimately, AI models are getting better, but they’re still learning the mechanics of BE art, so patience, lots of prompt tweaking, and the right models make all the diff. If you’re not getting what you want out of the box, you’re not alone: there’s a whole cottage industry of artists training LoRAs (low-rank adapters) specifically for expansion themes, but you’ll need to do some digging in less family-friendly online spaces.
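And if you do dig one of those LoRAs up, wiring it in is nearly a one-liner; the filename here is hypothetical, and it reuses the prompt variables from the earlier snippet:
```python
# Sketch: stack a community LoRA on top of the base checkpoint.
pipe.load_lora_weights("./loras/expansion.safetensors")  # hypothetical file from civitai

image = pipe(
    prompt,                                  # reuses prompt/negative from above
    negative_prompt=negative,
    cross_attention_kwargs={"scale": 0.8},   # LoRA strength, roughly 0-1
).images[0]
```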
Here’s a TL;DR:
- Use Stable Diffusion and an NSFW/BE-leaning model (check civitai or huggingface)
- Stack descriptive prompts and use negative ones too
- Experiment with img2img or inpainting if things look weird
- Consider joining AI art Discords or subreddits for more tailored advice
Just, uh, remember to check the rules before posting the results in public galleries. Internet culture is wild, but even it has its boundaries sometimes.
Honestly? You’re not alone, most people try Midjourney or DALL-E and end up with either cursed limbs or some kinda Picasso situation glued to a mannequin’s chest. @viajantedoceu covered a ton already, but I’d push back a little on the advice that Stable Diffusion’s NSFW models are some “fix-all” for BE. Yes, they’re more open to, uh, creative prompts, but let’s be real: plenty of them just swap out breasts for water balloons and call it a day. The models are still hilariously bad at making things actually expand mid-scene versus just “big by default.”
Where I’ve had slightly better results? Instead of obsessively engineering the perfect prompt, I use a two-step img2img approach: I run a base portrait or pose through the model without BE stuff, then slap it into Photoshop or even MS Paint (don’t judge me) and super crudely paint around the chest—literally just lumpy shapes. Feed THAT through img2img with “breast expansion,” “ripping shirt,” “growing,” etc., and don’t crank the denoise over 0.4 or you get mutants. The AI seems to understand “growth” way more if you visually nudge it, rather than just words.
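If you want that in script form instead of the web UI, it’s just img2img with a low strength value. Something like this, with a placeholder checkpoint path:
```python
# Sketch: feed the crude paint-over through img2img at low denoise.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_single_file(
    "./models/anything-v5.safetensors",  # placeholder checkpoint path
    torch_dtype=torch.float16,
).to("cuda")

rough = Image.open("crude_paintover.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="breast expansion, ripping shirt, growing, anime style",
    image=rough,
    strength=0.35,  # the "denoise" knob; past ~0.4 you get mutants
    guidance_scale=7.5,
).images[0]
result.save("refined.png")
```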
Also: skip realism if possible. Most BE AI art that gets anywhere near decent lands in anime/cartoonish territory. Realistic models panic and give you wonky anatomy or just delete clothing altogether.
Everyone talks prompts, but settings matter too. Euler A or DPM++ for sampling, 30-50 steps, and use a consistent seed when you’re experimenting; otherwise you’ll go down a rabbit hole chasing one fluke result you can’t reproduce.
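In diffusers terms (the web UI’s “Euler a” is EulerAncestralDiscreteScheduler), pinning all three looks roughly like this, reusing the pipe and prompts from the earlier snippets:
```python
# Sketch: fix the sampler, step count, and seed so results are reproducible.
import torch
from diffusers import EulerAncestralDiscreteScheduler  # "Euler a" in A1111

pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

gen = torch.Generator("cuda").manual_seed(1234)  # same seed -> same image
image = pipe(
    prompt,
    negative_prompt=negative,
    num_inference_steps=40,  # inside the 30-50 band mentioned above
    generator=gen,
).images[0]
```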
And ngl, don’t expect magic. You’ll probably have to combine what the AI spits out and fix stuff up old-school style (Clip Studio, Photoshop, whatever). AI is just an assistant tool; it’s not replacing human BE artists anytime soon.
Oh, and watch out—the big public Discords and image boards usually don’t allow this stuff, so you gotta lurk or share privately. Or risk getting booted for, and I quote, “unwholesome balloonery.” Internet’s a wild place.
TLDR: Don’t rely on just prompts, try painting or morphing things for img2img, keep it stylized, and embrace the chaos. If you want a real BE “story sequence,” you’ll probably have to assemble it panel-by-panel and do some IRL photochopping. It’s a messy hobby, and AI’s learning curve is just part of the fun (or torture, depending on how much patience you’ve got).
Let’s be real: AI for breast expansion art is straight-up hit-or-miss, but here’s an analytical breakdown so you don’t rage-quit after your fifth mutant result. There’s been a lot of talk from others about Stable Diffusion and niche models like “Anything V5.” True, the custom models make a difference, and they one-up mainstream tools like DALL-E and Midjourney, which treat “expansion” like a radioactive keyword, dumbing everything down or just censoring you mid-sentence. But is model choice everything? Short answer: no, but it’s your starting point.
Here’s where I’ll veer slightly left: forget relying solely on prompt stacking. Sure, descriptors and negative prompts are key, and tinkering with sampling methods (Euler A, DPM++, etc.) will change your luck. But let’s not ignore embeddings and LoRA (Low-Rank Adaptation) add-ons trained by the community. These things inject style, shape cues, and even “transitional” growth effects that nobody gets by just writing “giant, growing, surprised.” Usable LoRAs are floating around out there; you just have to trawl through sites like civitai, and they sometimes give way better results than even the best stock NSFW models.
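For the record, loading an embedding is as painless as a LoRA in diffusers, assuming you’ve already got a pipeline loaded like the earlier posts showed. The filenames and trigger token below are hypothetical stand-ins for whatever you pull off civitai:
```python
# Sketch: a textual-inversion embedding plus a LoRA on the same pipe.
pipe.load_textual_inversion("./embeddings/be_growth.pt", token="be_growth")  # hypothetical
pipe.load_lora_weights("./loras/expansion.safetensors")                      # hypothetical

image = pipe(
    "adult woman, be_growth, mid-transformation, surprised, anime style",
    negative_prompt="bad anatomy, blurry, watermark",
).images[0]
```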
Now, regarding img2img: I see the value in clumsy overpainting as a cue, but that’s not your only option. For image series or transformations, try ControlNet (pose, scribble, etc.) to keep body consistency from frame to frame, which is a pain otherwise. Don’t be afraid to chain steps: prep your subject, inpaint for key moments, use ControlNet for the sequence, then post-process. It’s a messy workflow, but more reliable than just spamming prompts. And yeah, sticking to anime/cartoony as suggested above is practical, since photorealistic models are downright allergic to nonstandard anatomy. Fact.
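Here’s a rough ControlNet sketch for a sequence. lllyasviel/sd-controlnet-openpose is a real SD 1.5 ControlNet; the checkpoint path and pose image are placeholders, and I’m assuming a diffusers version where from_single_file accepts a controlnet argument. The fixed seed per frame is what buys you the consistency:
```python
# Sketch: hold the pose and seed constant, vary only the growth stage.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from PIL import Image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_single_file(
    "./models/anything-v5.safetensors",  # placeholder checkpoint again
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

pose = Image.open("pose.png").convert("RGB")  # one pose image for every frame

for i, stage in enumerate(["slight growth", "mid growth", "full expansion"]):
    frame = pipe(
        f"adult woman, breast expansion, {stage}, ripping shirt, anime style",
        image=pose,
        generator=torch.Generator("cuda").manual_seed(7),  # re-seed every frame
        num_inference_steps=30,
    ).images[0]
    frame.save(f"frame_{i}.png")
```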
As for public sharing… you can thank inconsistent moderation for making this a headache, so everyone’s right: you need to find your niche Discord, subreddit, or private channel to share in.
If you want to up your results: look at community workflow writeups as references, not just for the end images but for how the prompt structures and workflow diagrams are laid out. The good ones break the steps down visually, and clarity matters in this weird subgenre.
Pros for tools like Stable Diffusion: open source, customizable, big community, continual improvement.
Cons: setup can be techy, model/download bloat, results are clunky for ultra-specific kinks unless you stick to niche cartoon styles, and, yeah, sometimes the flesh abominations are inevitable.
Vs. the earlier advice? Both posters shared some spot-on workflow ideas (sequential img2img, stacked prompts, heavy inpainting), but I’ll double down: embrace community-trained LoRAs and ControlNet, experiment with visual cues, and treat AI models like unpredictable chaos generators requiring post-edit TLC.
It’s not magic, but it’s miles ahead of what we had just a year ago. Happy expanding.