# Quickstart
Latent blending enables video transitions with incredible smoothness between prompts, computed within seconds. Powered by [stable diffusion 2.1](https://stability.ai/blog/stablediffusion2-1-release7-dec-2022), this method involves specific mixing of intermediate latent representations to create a seamless transition with users having the option to fully customize the transition and run high-resolution upscaling.
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1I77--5PS6C-sAskl9OggS1zR0HLKdq1M?usp=sharing)
```python
# Module paths follow the repository layout; adjust if yours differs.
from stable_diffusion_holder import StableDiffusionHolder
from latent_blending import LatentBlending

fp_ckpt = 'path_to_SD2.ckpt'

sdh = StableDiffusionHolder(fp_ckpt)
lb = LatentBlending(sdh)

lb.load_branching_profile(quality='medium', depth_strength=0.4)
lb.set_prompt1('photo of my first prompt')
lb.set_prompt2('photo of my second prompt')

imgs_transition = lb.run_transition()
```
## Gradio UI
To run the UI on your local machine, run `gradio_ui.py`
You can find the [most relevant parameters here.](parameters.md)
## Example 1: Simple transition
![](example1.jpg)
To run a simple transition between two prompts, run `example1_standard.py`
## Example 2: Inpainting transition
![](example2.jpg)
To run a transition between two prompts where you want some part of the image to remain static, run `example2_inpaint.py`
## Example 3: Multi transition
To run multiple transitions between K prompts, resulting in one stitched video, run `example3_multitrans.py`.
[View a longer example video here.](https://vimeo.com/789052336/80dcb545b2)
## Example 4: High-resolution with upscaling
![](example4.jpg)
You can run a high-res transition using the x4 upscaling model in a two-stage procedure; see `example4_upscaling.py`. [View as video here.](https://vimeo.com/787639426/f88dae2ea6)
# Customization
## Most relevant parameters
### Change the height/width
```python
lb.set_height(512)
lb.set_width(1024)
```
### Change guidance scale
```python
lb.set_guidance_scale(5.0)
```
### depth_strength / list_injection_strength
The depth_strength determines how far into the diffusion process the blending begins. A value close to zero yields more creative and intricate results, while a value closer to one gives a simpler alpha-like blending. However, low values may also introduce additional objects and motion.
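As an illustrative sketch (this helper is not part of the library's API), depth_strength can be read as the fraction of diffusion steps that run independently before the latents start mixing:

```python
# Illustrative only: this helper is NOT part of the latent blending API.
# It shows how a relative depth_strength maps to an absolute diffusion step.
def injection_step(depth_strength: float, num_inference_steps: int) -> int:
    """Return the step index at which the two trajectories start mixing."""
    return int(round(depth_strength * num_inference_steps))

print(injection_step(0.4, 30))  # blending begins around step 12 of 30
```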
### quality
When selecting a preset, you can choose the following values for quality:
`lowest`, `low`, `medium`, `high`, `ultra`.
This affects both `num_inference_steps` and how many diffusion images are generated for the transition.
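The actual preset values are defined inside the library; the snippet below is only a hypothetical sketch of how such a trade-off could look (the `medium` row mirrors the values used in the branching examples further down):

```python
# Hypothetical illustration of the quality/speed trade-off.
# These numbers are NOT the library's real presets, except 'medium',
# which mirrors the values used in the branching examples below.
QUALITY_PRESETS = {
    # quality: (num_inference_steps, nmb_branches_final)
    'lowest': (12, 6),
    'low': (18, 10),
    'medium': (30, 20),
    'high': (60, 40),
    'ultra': (100, 60),
}

steps, branches = QUALITY_PRESETS['medium']
print(steps, branches)  # 30 20
```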
## Set up the branching structure
There are three ways to change the branching structure.
### Presets
```python
quality = 'medium'
depth_strength = 0.5 # see above (Most relevant parameters)
lb.load_branching_profile(quality, depth_strength)
```
### Autosetup tree
```python
depth_strength = 0.5 # see above (Most relevant parameters)
num_inference_steps = 30 # the number of diffusion steps
nmb_branches_final = 20 # how many diffusion images will be generated for the transition

lb.autosetup_branching(depth_strength, num_inference_steps, nmb_branches_final)
```
### Manual specification
```python
num_inference_steps = 30 # the number of diffusion steps
list_nmb_branches = [2, 4, 8, 20]
list_injection_strength = [0.0, 0.3, 0.5, 0.9]
lb.setup_branching(num_inference_steps, list_nmb_branches, list_injection_strength=list_injection_strength)
```
# Installation
#### Packages
```commandline
pip install -r requirements.txt
```
#### Download Models from Huggingface
[Download the Stable Diffusion v2-1_768 Model](https://huggingface.co/stabilityai/stable-diffusion-2-1)
[Download the Stable Diffusion Inpainting Model](https://huggingface.co/stabilityai/stable-diffusion-2-inpainting)
[Download the Stable Diffusion x4 Upscaler](https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler)
#### (Optional but recommended) Install [Xformers](https://github.com/facebookresearch/xformers)
With xformers, stable diffusion will run faster with a smaller memory footprint. This is necessary for higher resolutions and the upscaling model.
```commandline
conda install xformers -c xformers/label/dev
```
Alternatively, you can build it from source:
```commandline
# (Optional) Makes the build much faster
pip install ninja
# Set TORCH_CUDA_ARCH_LIST if running and building on different GPU types
pip install -v -U git+https://github.com/facebookresearch/xformers.git@main#egg=xformers
# (this can take dozens of minutes)
```
# How does latent blending work?
## Method
![](animation.gif)
In the figure above, a diffusion tree is illustrated. The diffusion steps are represented on the y-axis, with temporal blending on the x-axis. The diffusion trajectory for the first prompt is the leftmost column, and the trajectory for the second prompt is the rightmost. At the third iteration, three branches are created, followed by seven at iteration six and the final ten at iteration nine.
This example can be set up manually with the following code:
```python
num_inference_steps = 10
list_nmb_branches = [2, 3, 7, 10]
list_injection_idx = [0, 3, 6, 9]
lb.setup_branching(num_inference_steps, list_nmb_branches, list_injection_idx=list_injection_idx)
```
Instead of specifying the absolute injection indices via `list_injection_idx`, we can also pass `list_injection_strength`, which is independent of the total number of diffusion iterations (`num_inference_steps`).
```python
list_injection_strength = [0, 0.3, 0.6, 0.9]
lb.setup_branching(num_inference_steps, list_nmb_branches, list_injection_strength=list_injection_strength)
```
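The two parameterizations are equivalent: a strength is simply an injection index divided by `num_inference_steps`. A minimal sketch of the conversion (the exact rounding used internally is an assumption here):

```python
# Convert relative injection strengths to absolute injection indices.
# The rounding behavior is an assumption for illustration.
num_inference_steps = 10
list_injection_strength = [0.0, 0.3, 0.6, 0.9]

list_injection_idx = [int(round(s * num_inference_steps))
                      for s in list_injection_strength]
print(list_injection_idx)  # [0, 3, 6, 9], the indices from the figure example
```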
## Perceptual aspects
With latent blending, we can create transitions that seem to defy the laws of nature, yet appear completely natural and believable. The key is to suppress processing in our [dorsal visual stream](https://en.wikipedia.org/wiki/Two-streams_hypothesis#Dorsal_stream), which is achieved by avoiding motion in the transition. Without motion, our visual system has difficulty detecting the transition, leaving viewers with the illusion of a single, continuous image. When motion is introduced, however, the visual system detects the transition and the viewer becomes aware of it, leading to a jarring effect. Therefore, the best results are achieved by optimizing the transition parameters, particularly the depth of the first injection.
# Coming soon...
- [ ] Huggingface / Colab Interface
- [ ] Interface for making longer videos with many prompts
- [ ] Transitions with Depth model
- [ ] Zooming
- [ ] Iso-perceptual spacing for branches (=better transitions)
Stay tuned on twitter: `@j_stelzer`

Contact: `stelzer@lunar-ring.ai` (Johannes Stelzer)