Merge branch 'main' of github.com:lunarring/latentblending (commit 18e05675c1), updating README.md
# What is latent blending?

Latent blending allows you to generate smooth video transitions between two prompts. It is based on [stable diffusion 2.0](https://stability.ai/blog/stable-diffusion-v2-release) and remixes the latent representation using spherical linear interpolations. This results in imperceptible transitions, where one image slowly turns into another.
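The spherical interpolation mentioned above can be sketched as follows. This is a minimal NumPy illustration of slerp on flattened vectors, not the repository's actual implementation, which operates on the diffusion latents inside the pipeline:

```python
import numpy as np

def slerp(v0, v1, t):
    """Spherical linear interpolation between two flattened vectors, t in [0, 1]."""
    dot = np.dot(v0, v1) / (np.linalg.norm(v0) * np.linalg.norm(v1))
    theta = np.arccos(np.clip(dot, -1.0, 1.0))  # angle between the vectors
    if np.isclose(theta, 0.0):
        return (1 - t) * v0 + t * v1  # nearly parallel: plain lerp is fine
    return (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

# Halfway between two orthogonal unit vectors stays on the unit sphere:
mid = slerp(np.array([1.0, 0.0]), np.array([0.0, 1.0]), 0.5)
```

Unlike a straight linear blend, slerp keeps the interpolated point on the arc between the endpoints, which is why the intermediate latents still decode to plausible images.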
# Example 1: simple transition

(mp4), code
# Installation

#### Packages

```commandline
pip install -r requirements.txt
```
||||||
#### Models
|
#### Download Models from Huggingface
|
||||||
[Download the Stable Diffusion 2.0 Standard Model](https://huggingface.co/stabilityai/stable-diffusion-2)
|
[Download the Stable Diffusion 2.0 Standard Model](https://huggingface.co/stabilityai/stable-diffusion-2)
|
||||||
|
|
||||||
[Download the Stable Diffusion 2.0 Inpainting Model (optional)](https://huggingface.co/stabilityai/stable-diffusion-2-inpainting)
|
[Download the Stable Diffusion 2.0 Inpainting Model (optional)](https://huggingface.co/stabilityai/stable-diffusion-2-inpainting)
|
||||||
|
|
||||||
#### Install [Xformers](https://github.com/facebookresearch/xformers)

With xformers, stable diffusion 2 will run much faster. The recommended way of installation is via the supplied binaries (Linux).

```commandline
conda install xformers -c xformers/label/dev
```

Alternatively, you can build it from source:

```commandline
# (Optional) Makes the build much faster
pip install ninja
# Set TORCH_CUDA_ARCH_LIST if running and building on different GPU types
pip install -v -U git+https://github.com/facebookresearch/xformers.git@main#egg=xformers
# (this can take dozens of minutes)
```
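After installing, you can sanity-check that the package is importable. This is a generic availability probe, not part of the original instructions, and it does not fail when xformers is absent:

```python
# Probe for xformers without importing it eagerly; the transition code
# still runs without it, just without memory-efficient attention.
import importlib.util

has_xformers = importlib.util.find_spec("xformers") is not None
print("xformers found" if has_xformers else "xformers not installed")
```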
# How does it work

![](animation.gif)
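The animation illustrates the idea: the transition is a sequence of spherically interpolated latents, each of which is decoded to one video frame. As a toy sketch (random vectors standing in for the diffusion latents, with no decoding step), not the repository's code:

```python
import numpy as np

rng = np.random.default_rng(0)
latent_a = rng.standard_normal(16)  # stand-ins for the two endpoint latents
latent_b = rng.standard_normal(16)

def slerp(v0, v1, t):
    dot = np.dot(v0, v1) / (np.linalg.norm(v0) * np.linalg.norm(v1))
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

# One interpolated latent per output frame; in the real pipeline each of
# these would be decoded into an image by the diffusion model's decoder.
frames = [slerp(latent_a, latent_b, t) for t in np.linspace(0.0, 1.0, 30)]
```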