Need help creating an AI generated actress like Tilly Norwood

I’m trying to create an AI generated actress character similar to Tilly Norwood for a creative video project, but I’m struggling with where to start and what tools or settings to use. I’d really appreciate advice on the best AI image or video generators, prompt tips, and any legal or ethical issues I should be aware of so I don’t run into problems later.

Short version. You want:

  1. A base visual model
  2. A consistent face/style preset
  3. A voice
  4. A workflow to keep her consistent across shots

Here is a simple path that works for an “AI actress” like Tilly Norwood without ripping her likeness.

  1. Decide the look and boundaries
  • Make a quick moodboard: 5 to 10 images showing age range, vibe, fashion, hair, typical lighting.
  • Write a 1 page “character sheet”: name, age, background, personality, facial traits, style.
  • Be careful not to match Tilly’s face 1 to 1. Change at least 3 obvious features like hair color, jawline, nose shape, eye shape.
  2. Choose your tools
    Option A, easier, web tools
  • Images:
    • Midjourney, Leonardo, Ideogram, etc.
    • Use consistent prompts like “portrait of [character name], 24 year old actress, [style], [lighting]”.
  • Video with a stable face:
    • Pika Labs, Runway, Luma Dream Machine, Haiper AI.
    • Or use a face swap tool such as HeyGen, Krea video face swap, or similar.
      You upload reference images once and then keep reusing the same “character” or “face” profile.
  • Voice:
    • ElevenLabs or similar TTS. Pick a voice, tune pitch and speed.

Option B, more control, local / desktop

  • Stable Diffusion with a model like RealVis, Juggernaut, or Dreamshaper.
  • Train a LoRA on 10 to 20 images of your custom character. Not Tilly. Synthetic base only.
  • Use ComfyUI or Automatic1111 for prompts and batch generations.
  • For animation, use AnimateDiff, Deforum, or image to video tools.
  3. Build the face without copying Tilly
    Practical trick that works well:

Step 1

  • Generate 30 to 50 portraits using generic prompts like
    “cinematic portrait of a 20s woman, soft lighting, neutral makeup, 8k”.
  • Pick 5 to 10 faces you like that feel “Tilly-ish” in vibe, not in exact features.

Step 2

  • Train a face LoRA or an “identity” in a service that offers that.
  • Use those selected faces only.
  • Use low training steps. You want a flexible identity, not a clone.

Step 3

  • Now always prompt with your character name plus a few key traits. Example:
    “portrait of Ava Norlin, 24 year old actress, freckles, soft brown hair, calm expression, neutral makeup, studio lighting, 4k”.
  4. Keep her consistent across shots
  • Fix a few constants
    • Same eye color
    • Same hair style and length
    • Same aspect ratio, e.g. 9:16 or 16:9
    • Same seed when your tool allows it
  • Save a “master” prompt and negative prompt. Reuse them.
  • When you get a frame you like, save it as reference and use image to image to keep her close.
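If it helps, those constants can live in one small profile file so every run starts from the same baseline. A minimal bookkeeping sketch in Python; the field names and values are invented examples, not any generator’s real settings:

```python
import json

# Hypothetical "master profile" for the character. Every field and value
# here is an example; adapt them to whatever your generator actually accepts.
MASTER = {
    "name": "Ava Norlin",
    "prompt": ("portrait of Ava Norlin, 24 year old actress, freckles, "
               "soft brown hair, calm expression, neutral makeup, "
               "studio lighting, 4k"),
    "negative_prompt": "blurry, deformed, extra fingers, text, watermark",
    "seed": 1234567,          # reuse when the tool supports fixed seeds
    "aspect_ratio": "9:16",   # pick one and never change it mid-project
    "eye_color": "green",
    "hair": "soft brown, shoulder length",
}

def save_profile(path: str, profile: dict) -> None:
    """Write the master profile to disk so every session reuses it."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(profile, f, indent=2)

def load_profile(path: str) -> dict:
    """Read the master profile back at the start of a session."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)
```

Load this at the start of every generation session instead of retyping the prompt, and the constants stay constant by construction.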
  5. Make her talk and act
  • For talking head style
    • Generate a clean front portrait.
    • Use a service like D-ID, HeyGen, or Pika “talking photo” type features.
    • Feed your TTS audio and let it lip sync.
  • For more movement
    • Use a body video of a stand in.
    • Use a face swap or “digital double” tool to replace the face with your AI actress.
  • Record your own voice track first. Then animate to match. Way easier for timing.
  6. Settings that help
    These are ballpark values; tweak them per tool.

Images

  • CFG scale: 5 to 8
  • Steps: 20 to 30
  • Face restoration on
  • Use low LoRA weight at first, like 0.6 to 0.8
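A minimal way to keep those ballpark numbers in one place, so you tweak a single dict instead of retyping settings per run. The field names are made up; map them to whatever your tool actually calls them:

```python
# Ballpark image settings from above; treat these as starting values,
# not magic numbers. Field names are illustrative, not a real tool's API.
IMAGE_SETTINGS = {
    "cfg_scale": 7,          # 5 to 8 is a sane range
    "steps": 25,             # 20 to 30
    "face_restoration": True,
    "lora_weight": 0.7,      # start low, 0.6 to 0.8
}

def clamp_lora_weight(weight: float, lo: float = 0.6, hi: float = 0.8) -> float:
    """Keep identity strength inside the flexible-but-consistent band."""
    return max(lo, min(hi, weight))
```

When the face starts drifting, nudge `lora_weight` up; when she looks clone-ish, nudge it down, but the clamp keeps you inside the band.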

Video

  • Short clips 3 to 6 seconds per shot, then edit them together.
  • High denoise can break identity, so keep it moderate.
  • Hold on one “hero” look. Do not change hairstyle or makeup every shot if you want a stable actress.
  7. Workflow example for a short scene
  • Write a short script, under 1 page.
  • Record temp voice or use TTS.
  • Generate 5 key portraits of the actress in different angles and outfits.
  • For each line, create a short talking head or face swapped clip.
  • Edit in Premiere, Resolve, or CapCut. Add sound, color, and light VFX.
  8. Legal and ethical note
  • Avoid using Tilly’s real images as training data.
  • Avoid prompts like “in the style of Tilly Norwood’s face” or her real name.
  • Treat it like creating a new performer, not a replacement of a real one.

If you share what tools you already use, people can give more exact prompts and settings.

You’ve already got a pretty solid roadmap from @mike34. I’ll skip repeating that and come at it from a slightly different angle: how to keep this practical and not get lost in tool-hell.


1. Start with constraints, not tools

Before you build anything “AI actress like Tilly”:

  • What’s your output?
    • 30–60 sec TikTok style vertical?
    • 3–5 min short film?
    • Just talking-head monologues?

Your answer should decide:

  • Resolution: 1080x1920 or 1920x1080, pick one and never change.
  • Style: realistic or slightly stylized. Hyper-real tends to break more in motion.

If this is your first serious attempt, realistic + short talking-head clips is the least painful combo.


2. Use fewer tools, not more

I slightly disagree with the “many tools” approach. Every extra tool is one more place your character drifts off-model.

A very workable minimal stack:

  • One image generator
  • One video / talking-head tool
  • One voice tool
  • One video editor

Concrete example:

  • Images: Leonardo or Midjourney
  • Talking head: D-ID or HeyGen
  • Voice: ElevenLabs
  • Edit: CapCut or DaVinci Resolve

Lock that in. Don’t keep “trying everything.” Consistency > chasing the “best” model.


3. Design the difference from Tilly first

People mess this part up and then wonder why everything feels uncanny.

Instead of “like Tilly,” write:

  • 5 traits you want from Tilly’s vibe

    • e.g. “soft-spoken, calm eyes, modern but not flashy, natural makeup, subtle expressions”
  • 5 traits that are deliberately different

    • Different hair color & style
    • Different face shape (rounder / sharper)
    • Different nose / lips ratio
    • Slightly older or younger
    • Different fashion era (e.g. more 90s, or more streetwear)

Keep that list open while you prompt. If a result looks too close to Tilly, nuke that image and adjust.


4. Simple prompting template you can re-use

You can keep it super basic:

“portrait of [CHARACTER NAME], [age] year old actress, [hair type/color], [one or two strong facial traits], [emotion], [lighting], ultra detailed, cinematic”

Examples:

  • “portrait of Mara Keane, 25 year old actress, wavy chestnut hair, slightly prominent nose, gentle smile, studio softbox lighting, cinematic, 4k”
  • “medium shot of Mara Keane, 25 year old actress, wavy chestnut hair in low ponytail, focused expression, natural daylight, shallow depth of field”

Copy that into a text file and just tweak emotion, angle, wardrobe. That alone will keep her more consistent than reinventing the prompt every time.
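That text file can just as easily be a tiny script, so only the slots ever change. A sketch of the same template with the slots as parameters; the trait values are the examples from above:

```python
# Reusable prompt template; only the bracketed slots change per shot.
TEMPLATE = ("portrait of {name}, {age} year old actress, {hair}, "
            "{trait}, {emotion}, {lighting}, ultra detailed, cinematic")

def build_prompt(name: str, age: int, hair: str, trait: str,
                 emotion: str, lighting: str) -> str:
    """Fill the fixed template; everything outside the slots stays constant."""
    return TEMPLATE.format(name=name, age=age, hair=hair, trait=trait,
                           emotion=emotion, lighting=lighting)

prompt = build_prompt("Mara Keane", 25, "wavy chestnut hair",
                      "slightly prominent nose", "gentle smile",
                      "studio softbox lighting")
```

Change only `emotion` and `lighting` between shots and the identity-bearing parts of the prompt never drift.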


5. Reference-driven workflow

Instead of instantly going into video:

  1. Generate:
    • 1 “hero” close-up portrait (front)
    • 1 three-quarter view
    • 1 side-ish angle
    • 1 medium shot (waist up)
  2. Decide: These four are “canon.”
  3. Every time you make a new image:
    • Use image-to-image (if available) using one of these canon shots.
    • Keep similarity / strength around 40–60 percent so she doesn’t melt into something else.

This is how you fake a “virtual actress headshot set.”


6. Voice first, visuals second

Big difference from the usual flow: do the audio first.

  • Write your short script.
  • Generate the full voice track in ElevenLabs (or record your own, then clone style later).
  • Cut the audio to final timing in your editor.
  • Then:
    • For each line or chunk, generate a separate talking-head segment in D-ID / HeyGen with your chosen portrait.

This avoids the nightmare of trying to match visuals to random clip lengths.
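One way to make “audio first” concrete: chunk the script and estimate a target length per talking-head clip before you generate anything. The words-per-second rate below is a rough assumption; measure your real voice track and adjust:

```python
# Rough per-line timing sketch: ~2.5 spoken words per second is an
# assumption, not a standard; calibrate against your actual audio.
WORDS_PER_SECOND = 2.5

def chunk_script(script: str) -> list[dict]:
    """Split a script into non-empty lines with an estimated clip length."""
    chunks = []
    for i, line in enumerate(filter(None, map(str.strip, script.splitlines()))):
        seconds = round(len(line.split()) / WORDS_PER_SECOND, 1)
        chunks.append({"index": i, "text": line, "target_seconds": seconds})
    return chunks

script = """I never wanted the audition.
But the tape was already rolling.
So I smiled, and I began."""

for c in chunk_script(script):
    print(c["index"], c["target_seconds"], c["text"])
```

Each chunk then maps to exactly one talking-head segment, so the edit is just laying clips end to end against the locked audio.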


7. Keeping her “on-model” in motion

Tips nobody tells you until it’s too late:

  • Avoid big head turns for now
    Side profiles are where identity falls apart.
  • Keep hair relatively simple
    No crazy curls or wind-blown hair if you want consistency.
  • Use similar background tones each time
    Sudden bright red wall in one shot and pale gray in another makes the character feel different even if the face is technically the same.
  • Keep clips short
    2–4 seconds each, edit them together so little glitches pass fast.

8. Settings mindset, not magic numbers

Instead of chasing the “perfect” CFG scale or steps:

  • When images start drifting away from your actress:
    • Increase “strength” of your identity (LoRA weight or image strength) slightly.
  • When she looks too stiff and clone-ish:
    • Decrease that strength and add more descriptive facial traits in the prompt.

Treat settings as a “personality slider” rather than numbers you have to copy from someone else.


9. Quick starter recipe

If you want something concrete you can try this week:

  1. Pick: Leonardo + D-ID + ElevenLabs + CapCut.
  2. Design character differences from Tilly.
  3. Generate 4 “canon” portraits.
  4. Record or TTS a 30-second monologue.
  5. Cut audio to final.
  6. Make 5 or 6 short talking-head clips with D-ID using your best portrait.
  7. Edit together, add simple color grade and some ambient background audio.

That gives you a working “AI actress” prototype. Once that feels solid, then you can start getting fancy with face swaps, local SD, LoRAs, etc.

If you share:

  • whether you’re on desktop or just phone
  • what budget you’re ok with monthly
  • if you care more about realism or stylized

…you can get way more precise advice than “use X and Y settings.” Right now, keep it tight and get a v1 done instead of assembling a NASA stack of tools that never ships.

Short version: don’t start with tools, start with “who is this actress” and “how repeatable is she.”

@mike34 already nailed a strong roadmap. I’ll push on slightly different points and disagree in a couple spots.


1. Nail the character bible first

Before prompts, platforms, or settings, write a 1‑page “AI actress sheet”:

Define:

  • Name & age range
  • Origin / background (urban / rural, country, cultural vibe)
  • Acting archetype (indie drama, romcom, sci‑fi, art‑house, etc.)
  • Fashion baseline (streetwear, minimalist, 90s, cottagecore, etc.)
  • Emotional range (soft / intense / quirky / stoic)

Then add:

Visual anchors:

  • Hair: color, length, texture, parting, 1–2 default styles
  • Face:
    • Head shape
    • Nose: small / straight / aquiline, etc.
    • Lips: thin / full / cupid’s bow
    • Brows: straight / arched / thick / thin
  • Makeup level: “barely there,” “soft glam,” “editorial,” etc.

Acting style anchors:

  • How fast she speaks
  • How much she smiles vs stays neutral
  • How she uses pauses and emphasis

This becomes your “director’s cheat sheet.” Every decision later (prompt wording, shot type, performance) should align with this, not with “Tilly Norwood clone.”
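If you prefer the sheet as structured data, so prompts and shot notes always pull from one source of truth, a sketch like this works; every value is an invented example:

```python
from dataclasses import dataclass, field

# One-page "AI actress sheet" as data. All values are invented examples.
@dataclass
class ActressSheet:
    name: str
    age_range: str
    background: str
    archetype: str
    fashion: str
    emotional_range: str
    hair: str
    face_anchors: dict = field(default_factory=dict)
    makeup: str = "barely there"

    def prompt_fragment(self) -> str:
        """Turn the visual anchors into a reusable prompt snippet."""
        anchors = ", ".join(self.face_anchors.values())
        return (f"{self.name}, {self.age_range}, {self.hair}, "
                f"{anchors}, {self.makeup} makeup")

sheet = ActressSheet(
    name="Ava Norlin",
    age_range="mid 20s",
    background="small-town, moved to the city",
    archetype="indie drama",
    fashion="minimalist",
    emotional_range="soft, stoic",
    hair="soft brown, shoulder length, middle parting",
    face_anchors={"nose": "small straight nose", "lips": "full lips",
                  "brows": "straight brows"},
)
```

Every prompt you write later starts from `sheet.prompt_fragment()`, so the anchors never silently drift between shots.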


2. How close to Tilly is too close?

This is where I’ll disagree slightly with the “just change a few traits” approach.

If you are inspired by Tilly’s vibe, keep these rules:

  • No explicit “Tilly Norwood” mentions in prompts
  • Change at least 3 of:
    • Hair color & style
    • Face shape
    • Eye shape / color
    • Age bracket
    • Fashion & makeup style

If you can put your AI next to Tilly and they look like twins or “sisters,” you have gone too far. Aim for “belongs in the same cinematic universe,” not “AI twin.”


3. Pick your pipeline philosophy

Two main approaches here:

A. “Synthetic person” actress

You generate a completely original face and keep using that same identity everywhere.

Pros:

  • Cleaner ethically.
  • Easier to brand as your unique character.
  • Less risk of weird comparisons.

Cons:

  • Getting perfect consistency can be harder if you use many tools.

B. “You as base” actress

You record yourself or a friend, then use face‑enhance / light AI stylization to push toward the vibe you want.

Pros:

  • Natural motion and expressions baked in.
  • Mouth sync and eye lines are automatically correct.
  • Editing feels like working with real footage.

Cons:

  • More work at the capture stage.
  • You need at least passable camera and lighting.

If Tilly’s acting presence is what inspires you, option B often lands better than just slapping animation on a still portrait.


4. Practical capture rules if you film a base

If you go with a real person + AI enhancement route:

  • Use one camera angle for the whole first project
    Think simple 3/4 talking head, chest‑up.
  • Lock your settings:
    • Resolution: 1080p
    • 24 or 30 fps and never change mid‑project.
  • Simple lighting:
    • One soft key light 45 degrees off to one side.
    • Neutral background (grey / beige / muted colors).

You can then add AI face enhancement, gentle stylization, or even lip‑sync overlays without the footage breaking.


5. Performance first, face later

Here I strongly agree with the “audio first” advice, but I’d go one step further:

  1. Lock the script.
    Time the beats and emotional turns.
  2. Record scratch audio.
    Either your voice or TTS, but commit to timing.
  3. Act to that audio.
    Have your base actress perform while listening on earbuds.
  4. Then apply:
    • AI voice
    • AI face refinement
    • Minor visual tweaks

Treat AI like digital makeup, not the actor itself. You get livelier, less robotic performances.


6. On tool choice & “tool hell”

You already have solid suggestions from others, so instead of more brand names, I’d focus on roles:

  • One “identity keeper” (the thing that defines how she looks)
    • Could be a custom model / LoRA / saved seed + prompt combo.
  • One “motion” tool (for lip‑sync / head moves)
  • One “voice” tool
  • One “editor”

Where I’d gently push back: sometimes using two image tools is helpful if you are strict about canon.

Example workflow:

  1. Use Tool A to generate your primary 4 “canon” stills.
  2. Use Tool B only in image‑to‑image mode, feeding those canon stills, for shot variety.

You never let Tool B invent her from scratch; it only “films” the actress defined by Tool A.


7. Keeping a consistent “AI actress identity”

A few advanced but still accessible tricks:

  • Seed discipline
    If your image tool allows seeds, pick 1–3 “core seeds” that look closest to your character sheet and reuse them constantly.
  • Wardrobe palettes
    Pre‑decide 2–3 color palettes for her outfits (for example: beige / white / denim or black / olive / rust). You can change the clothing type without changing the palette to keep continuity.
  • Pose library
    Make a mini reference board of 10–15 poses and expressions (thoughtful, amused, concerned, playful, etc.). Repeat these instead of improvising each shot. It gives the feeling of a real actress with recognizable micro‑behaviors.
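The seed and palette discipline above can be enforced with a few lines: only ever draw from a small pool of pre-approved constants. The seeds and palettes here are placeholders:

```python
import random

# "Seed discipline" sketch: every shot draws only from a small pool of
# core seeds and pre-decided wardrobe palettes. Values are placeholders.
CORE_SEEDS = [1234567, 2345678, 3456789]      # 1-3 seeds closest to the sheet
PALETTES = [["beige", "white", "denim"],
            ["black", "olive", "rust"]]

def pick_shot_constants(rng: random.Random) -> dict:
    """Select one approved seed and one approved palette for a shot."""
    return {"seed": rng.choice(CORE_SEEDS),
            "palette": rng.choice(PALETTES)}

shot = pick_shot_constants(random.Random(0))
```

Because the pools are tiny and fixed, variety stays bounded: no shot can wander outside the identity you already approved.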

8. Story & framing so she feels like a character

You’re not just showing an AI face. Give your Tilly‑inspired actress a context:

  • Where is she speaking from?
    • Tiny apartment, polished studio, backstage dressing room.
  • Why is she talking?
    • Audition tape, diary vlog, in‑character monologue, interview.
  • Who is she talking to?
    • Viewer directly, unseen interviewer, her future self.

Those choices affect framing, lighting, performance and how “real” she feels.


9. Quick sanity checklist for every shot

Before you render a final clip, ask:

  1. Does she still look like the same person as in my canon 4 images?
  2. Are hair and makeup aligned with the character sheet?
  3. Is the emotional tone matching the line of dialogue?
  4. Is the background consistent enough with other shots that it doesn’t feel like a different production?

If any one of those fails, regenerate before you commit to the edit.
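If you like checklists as code, the four questions can be a tiny gate that lists which checks failed before you commit a shot to the edit. Purely illustrative:

```python
# Pre-render sanity gate for a shot; each flag answers one checklist question.
def shot_passes(matches_canon: bool, hair_makeup_on_sheet: bool,
                tone_matches_line: bool, background_consistent: bool) -> list[str]:
    """Return the list of failed checks; an empty list means render away."""
    checks = {
        "matches canon portraits": matches_canon,
        "hair/makeup on character sheet": hair_makeup_on_sheet,
        "emotional tone matches dialogue": tone_matches_line,
        "background consistent with other shots": background_consistent,
    }
    return [name for name, ok in checks.items() if not ok]
```

Run it mentally or literally per shot; any non-empty result means regenerate before the clip reaches the timeline.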


10. Where you can diverge from @mike34’s flow

Their roadmap is super practical. Places you might choose differently:

  • You can start with a slightly stylized look instead of pure realistic.
    Stylized often survives AI artifacts better and can hide lip‑sync oddities.
  • You don’t have to lock to talking‑head forever.
    Once you have a stable identity, try simple over‑the‑shoulder shots or cutaways using the same “actress” in image form, so it feels like a real film, not just a face.

If you share:

  • Whether you’re ok recording any live footage
  • How long your ideal video is
  • Whether you lean more toward “Tilly’s gentle realism” or “slightly cinematic / stylized”

people can help you pick a specific tool stack and tune the pipeline without reinventing everything.