Issue #20 still not working. #24

Open
htoyryla opened this issue Feb 19, 2022 · 0 comments

htoyryla commented Feb 19, 2022

It still does not work; see the context in the original issue.

ResizeRight expects either a NumPy array or a torch tensor, but here it gets a PIL image, which has no shape attribute:

    pil_img = Image.open(script_util.fetch(image)).convert('RGB')
    smallest_side = min(diffusion_size, *pil_img.size)
    # fails here: pil_img is still a PIL image, and resize_right reads input.shape
    pil_img = resize_right.resize(pil_img, out_shape=[smallest_side],
                                  interp_method=lanczos3, support_sz=None,
                                  antialiasing=True, by_convs=False, scale_tolerance=None)
    batch = make_cutouts(tvf.to_tensor(pil_img).unsqueeze(0).to(device))
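
For reference, a quick interactive check shows the mismatch (a throwaway test image, not from the script):

    >>> from PIL import Image
    >>> import torchvision.transforms.functional as tvf
    >>> img = Image.new('RGB', (1024, 1024))   # stand-in for pil_img
    >>> img.shape                              # PIL images have .size, not .shape
    AttributeError: 'Image' object has no attribute 'shape'
    >>> tvf.to_tensor(img).shape               # a tensor has the shape ResizeRight wants
    torch.Size([3, 1024, 1024])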

This is what I tried, and it at least runs without an error:

    t_img = tvf.to_tensor(pil_img)   # convert to a (C, H, W) tensor before resizing
    t_img = resize_right.resize(t_img, out_shape=(smallest_side, smallest_side),
                                interp_method=lanczos3, support_sz=None,
                                antialiasing=True, by_convs=False, scale_tolerance=None)
    batch = make_cutouts(t_img.unsqueeze(0).to(device))

I am not sure what output shape was intended here. As it was, the code made 1024x512 from a 1024x1024 original (with image_size 512); my version makes 512x512.
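
If the intent was to preserve the aspect ratio and only bring the shorter side down to smallest_side, something like this sketch should do it. The scale computation is mine; everything else uses the same variables and imports as the snippets above. ResizeRight appears to match a shorter out_shape against the trailing dims of a torch tensor, which is also why (smallest_side, smallest_side) lands on H and W above.

    # sketch: aspect-preserving resize, shorter side -> smallest_side
    # (assumes pil_img, smallest_side, make_cutouts, device, lanczos3 as above)
    w, h = pil_img.size                      # PIL reports (width, height)
    scale = smallest_side / min(w, h)
    t_img = tvf.to_tensor(pil_img)           # (C, H, W) tensor
    t_img = resize_right.resize(t_img,
                                out_shape=(round(h * scale), round(w * scale)),
                                interp_method=lanczos3, support_sz=None,
                                antialiasing=True, by_convs=False, scale_tolerance=None)
    batch = make_cutouts(t_img.unsqueeze(0).to(device))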

I am not using offsets, BTW.

As to the images produced, I can't see much happening when using image prompts, but I guess that is another story. In my experience, guidance by comparing CLIP-encoded images is not very useful as such, so I'll probably go my own way and add other kinds of image-based guidance. This may depend on the kind of images I work with and how: more visuality than semantics.

PS. I see now that the init image actually means using perceptual losses as guidance, rather than initialising something (as one can do with VQGAN latents, for instance). So that's more like what I am after.

Originally posted by @htoyryla in #20 (comment)
