Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators

Zhangyang Wang, Shant Navasardyan, Humphrey Shi

This repository is the official implementation of Text2Video-Zero.

Our method Text2Video-Zero enables zero-shot video generation using (i) a textual prompt (see rows 1, 2), (ii) a prompt combined with guidance from poses or edges (see lower right), and (iii) Video Instruct-Pix2Pix, i.e., instruction-guided video editing (see lower left). Results are temporally consistent and closely follow the guidance and textual prompts.

News

- Improved Huggingface demo! (i) For text-to-video generation, any base model for Stable Diffusion and any DreamBooth model hosted on Huggingface can now be loaded! (ii) We improved the quality of Video Instruct-Pix2Pix. (iii) We added two longer examples for Video Instruct-Pix2Pix.
- Code for all our generation methods released! We added a new low-memory setup. Minimum required GPU VRAM is currently 12 GB; it will be further reduced in upcoming releases.
- The full version of our Huggingface demo released! Now also included: text and pose conditional video generation; text and edge conditional video generation; and text, edge, and DreamBooth conditional video generation.
- The first version of our Huggingface demo (containing zero-shot text-to-video generation and Video Instruct-Pix2Pix) released!