Merge branch 'main' of github.com:lunarring/latentblending

DGX 2024-01-10 08:47:44 +00:00
commit 1775c9a90a
2 changed files with 22 additions and 14 deletions

README.md

@@ -2,22 +2,32 @@
Latent blending enables incredibly smooth video transitions between prompts, computed within seconds. Powered by [stable diffusion XL](https://stability.ai/stable-diffusion), the method mixes intermediate latent representations in a targeted way to create a seamless transition, and users can fully customize the transition directly in high resolution. The new version also supports SDXL Turbo, so transitions can be generated faster than they are typically played back!
```python
from diffusers import AutoPipelineForText2Image
from latentblending.blending_engine import BlendingEngine
from latentblending.diffusers_holder import DiffusersHolder
import torch  # needed for torch.float16 below
pipe = AutoPipelineForText2Image.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16").to("cuda")
dh = DiffusersHolder(pipe)
-lb = LatentBlending(dh)
-lb.set_prompt1("photo of underwater landscape, fish, under the sea, incredible detail, high resolution")
-lb.set_prompt2("rendering of an alien planet, strange plants, strange creatures, surreal")
-lb.set_negative_prompt("blurry, ugly, pale")
+be = BlendingEngine(dh)
+be.set_prompt1("photo of underwater landscape, fish, under the sea, incredible detail, high resolution")
+be.set_prompt2("rendering of an alien planet, strange plants, strange creatures, surreal")
+be.set_negative_prompt("blurry, ugly, pale")
# Run latent blending
-lb.run_transition()
+be.run_transition()
# Save movie
-lb.write_movie_transition('movie_example1.mp4', duration_transition=12)
+be.write_movie_transition('movie_example1.mp4', duration_transition=12)
```
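The "mixing of intermediate latent representations" happens in latent space rather than pixel space; the package ships its own helper for this (`interpolate_spherical` in `latentblending.utils`). The sketch below only illustrates the underlying idea of spherically interpolating two diffusion latents and is not the library's exact implementation:
```python
import torch

def slerp(p0: torch.Tensor, p1: torch.Tensor, t: float) -> torch.Tensor:
    """Spherical interpolation between two latent tensors (illustrative)."""
    v0, v1 = p0.flatten().float(), p1.flatten().float()
    cos_omega = torch.clamp(torch.dot(v0 / v0.norm(), v1 / v1.norm()), -1.0, 1.0)
    omega = torch.acos(cos_omega)
    if omega.abs() < 1e-4:  # nearly parallel latents: plain lerp is stable
        return (1 - t) * p0 + t * p1
    s0 = torch.sin((1 - t) * omega) / torch.sin(omega)
    s1 = torch.sin(t * omega) / torch.sin(omega)
    return s0 * p0 + s1 * p1

# Halfway mix of two SDXL-shaped latents (4 channels, 128x128 for 1024px output).
latent_a = torch.randn(1, 4, 128, 128)
latent_b = torch.randn(1, 4, 128, 128)
mid = slerp(latent_a, latent_b, 0.5)
```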
# Installation
```commandline
pip install git+https://github.com/lunarring/latentblending
```
## Gradio UI
Coming soon again :)
@@ -90,12 +100,6 @@ lb.set_parental_crossfeed(crossfeed_power, crossfeed_range, crossfeed_decay)
```
-# Installation
-#### Packages
-```commandline
-pip install -r requirements.txt
-```
# How does latent blending work?
## Method
![](animation.gif)
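The hunk header above references `set_parental_crossfeed`, which steers how much each newly inserted branch reuses the latents of its two parent branches. A hedged usage sketch, assuming the `be = BlendingEngine(dh)` instance from the example above; the numeric values are illustrative assumptions, not tuned defaults:
```python
# Illustrative values (assumptions, not recommended settings):
#   power: how strongly parent latents are injected into the child branch
#   range: fraction of the diffusion trajectory that receives crossfeed
#   decay: how quickly the injection fades along the trajectory
crossfeed_power = 0.5
crossfeed_range = 0.7
crossfeed_decay = 0.2
be.set_parental_crossfeed(crossfeed_power, crossfeed_range, crossfeed_decay)
```
Stronger crossfeed ties intermediate frames more closely to their parents, which generally trades diversity for smoothness.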

latentblending/blending_engine.py

@@ -8,6 +8,7 @@ from PIL import Image
from latentblending.movie_util import MovieSaver
from typing import List, Optional
import lpips
+import platform
from latentblending.utils import interpolate_spherical, interpolate_linear, add_frames_linear_interp, yml_load, yml_save
warnings.filterwarnings('ignore')
torch.backends.cudnn.benchmark = False
@@ -64,7 +65,10 @@ class BlendingEngine():
self.multi_transition_img_first = None
self.multi_transition_img_last = None
self.dt_unet_step = 0
-self.lpips = lpips.LPIPS(net='alex').cuda(self.device)
+if platform.system() == "Darwin":
+    self.lpips = lpips.LPIPS(net='alex')
+else:
+    self.lpips = lpips.LPIPS(net='alex').cuda(self.device)
self.set_prompt1("")
self.set_prompt2("")
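The Darwin branch above simply skips `.cuda()` on macOS, where CUDA is unavailable. A minimal device-agnostic sketch of the same idea; `make_lpips` is a hypothetical helper, not part of the library:
```python
import platform

import lpips
import torch

def make_lpips() -> lpips.LPIPS:
    """Hypothetical helper: place the LPIPS metric on the best available
    backend instead of branching on the operating system."""
    model = lpips.LPIPS(net='alex')
    if torch.cuda.is_available():
        return model.cuda()
    # Apple-silicon fallback; assumes the ops LPIPS uses are supported on MPS.
    if platform.system() == "Darwin" and torch.backends.mps.is_available():
        return model.to("mps")
    return model  # CPU fallback
```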