I assume only supplying an input prompt is what's bad. So:
1. What if I fine-tune a diffusion model?
2. Tweak the tokenization and schedulers?
3. Use ControlNet to get a specific composition?
4. Purposefully refine every element with noise masking? (Rough sketch of 2–4 below.)
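Here's a rough sketch of what steps 2–4 can look like in practice, assuming the Hugging Face diffusers library; the model IDs, file paths, and prompts are illustrative placeholders, and a fine-tuned checkpoint from step 1 would slot in where the base model is loaded:

```python
import torch
from diffusers import (
    ControlNetModel,
    DPMSolverMultistepScheduler,
    StableDiffusionControlNetPipeline,
    StableDiffusionInpaintPipeline,
)
from diffusers.utils import load_image

# Step 3: a pose-conditioned ControlNet pins down the composition.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a step-1 fine-tune would go here
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Step 2: swap the default scheduler for a different solver.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

pose = load_image("pose_reference.png")  # hypothetical composition guide
image = pipe(
    "a figure standing in a rainy street, oil painting",
    image=pose,
    num_inference_steps=30,
).images[0]

# Step 4: refine one element by re-noising only a masked region
# (inpainting), leaving the rest of the image untouched.
inpaint = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")
mask = load_image("face_mask.png")  # hypothetical white-on-black mask
image = inpaint(
    prompt="detailed face, soft lighting",
    image=image,
    mask_image=mask,
).images[0]
image.save("composed.png")
```

Each pass like that is another point of human control well beyond the prompt itself.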
Am I not allowed to sing using my own vocal cords because every base melody has already been created? Are we all to be slaves to those who came before us? Or do you support a time-based, Disney-style copyright system?
So are you okay with using a computer to output AI art via a diffusion model, so long as the prompt isn't asking for something intentionally infringing, like Ghibli style?