I understand that, when we generate images, the prompt is first split into tokens, and the model then uses those tokens to nudge the image generation in a certain direction. I have the impression that some tokens have a larger impact on the model than others (although I don’t know if I can call that a weight). I mean internally, not as part of the prompt syntax where we can explicitly force a higher weight on a token.
Is it possible to know how much a certain token was ‘used’ in the generation? I could deduce it empirically by keeping the same prompt, seed, sampling method, etc. and removing words one at a time to see what the impact is (roughly the sketch below), but perhaps there is a way to just ask the model? Or adjust the Python code a bit and retrieve it there?
I’d like to know which parts of my prompt hardly impact the image (or don’t impact it at all).
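For the empirical route, something like this minimal sketch using the diffusers library is what I had in mind (the model name, prompt, and pixel-difference metric are just placeholders, and removing a word shifts the positions of all later tokens, so the numbers are only a rough signal):

```python
# Fix the seed, drop one word at a time, and measure how much the image changes.
import numpy as np
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a red fox sitting in a snowy forest, highly detailed, golden hour"
seed = 1234

def generate(p: str) -> np.ndarray:
    """Generate one image with a fixed seed and return it as a float array."""
    generator = torch.Generator(device="cuda").manual_seed(seed)
    image = pipe(p, generator=generator, num_inference_steps=30).images[0]
    return np.asarray(image, dtype=np.float32)

baseline = generate(prompt)
words = prompt.split()

# Remove one word at a time and compare against the baseline image.
for i, word in enumerate(words):
    ablated = " ".join(words[:i] + words[i + 1:])
    diff = np.abs(generate(ablated) - baseline).mean()  # mean absolute pixel difference
    print(f"{word:>20s}: {diff:8.2f}")
```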
I’m pretty sure there is an extension that does something like that…
This is what I was thinking of: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Extensions#daam
Not sure if it’s still supported, though; the git repo has been set to read-only.
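DAAM basically aggregates the cross-attention maps for each prompt token, so it answers the “how much was this token used, and where” question more directly than ablating words. The same technique also exists as a standalone pip package (daam) that wraps a diffusers pipeline. If it still works with your diffusers version, usage looks roughly like this; the function names are from memory of that repo’s README, so treat them as assumptions:

```python
# Sketch of per-word attribution with the daam package (pip install daam).
# API names (trace, compute_global_heat_map, compute_word_heat_map) are taken
# from the project's README and may have changed; treat this as a rough guide.
import matplotlib.pyplot as plt
import torch
from daam import trace
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a red fox sitting in a snowy forest, highly detailed, golden hour"
generator = torch.Generator(device="cuda").manual_seed(1234)

with torch.no_grad(), trace(pipe) as tc:
    out = pipe(prompt, generator=generator, num_inference_steps=30)
    global_heat_map = tc.compute_global_heat_map()

# Heat map for one word of the prompt: higher values mean that word's tokens
# received more cross-attention at those pixels, i.e. influenced them more.
word_heat_map = global_heat_map.compute_word_heat_map("fox")
word_heat_map.plot_overlay(out.images[0])
plt.show()
```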