I believe lighting plays a very important part in making a scene look realistic when you create one artificially, as in 3D modelling. That's why I think lighting is the main thing that impresses people about these AI-generated images: no matter how unrealistic or distorted the subject is, the lighting makes it look like a natural part of the background. That clearly sets them apart from, say, a poorly Photoshopped image, where the subject feels like a cutout deliberately inserted into the scene.

I'm interested in understanding how these image-generation models handle the context of lighting when creating images. Do they rely on training samples that happen to have the exact same lighting setup, or do they add the lighting as a separate overlay? Also, why does the lighting look unconvincing in some cases, e.g. when there are multiple subjects together?

  • AFK BRB Chocolate@lemmy.world
    7 days ago

    First, sometimes the lighting is terrible if you look closely: shadows going one way for some objects and another way for others.

    But generative AI is generally extrapolating from its training data. It gets lighting right (when it does) because it’s processed a giant number of images, and when you tell it you want a picture of a puppy on the beach at sunset, it’s got a million pictures of puppies, and a million pictures of things on the beach at sunset. It doesn’t know if it’s right or not, but it’s mimicking those things.
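    The "extrapolating from training data" idea can be sketched with a toy analogy. This is purely illustrative and an assumption on my part — real diffusion models learn statistical associations in a high-dimensional latent space rather than literally blending stored images — but the spirit, that outputs are interpolations of patterns seen during training, is similar. Here the "training set" is a hypothetical handful of caption-to-image pairs, and "generation" just blends images whose captions share words with the prompt:

    ```python
    # Toy analogy (NOT how diffusion models actually work internally):
    # generate an "image" by blending training images whose captions
    # overlap with the prompt, weighted by word overlap.

    # Hypothetical tiny "training set": captions mapped to 3-pixel
    # grayscale images (values in 0..1).
    TRAINING_DATA = {
        "puppy on grass":       [0.2, 0.8, 0.3],
        "puppy on beach":       [0.7, 0.6, 0.4],
        "sunset at the beach":  [0.9, 0.5, 0.2],
        "city street at night": [0.1, 0.1, 0.2],
    }

    def generate(prompt):
        """Blend training images, weighted by word overlap with the prompt."""
        prompt_words = set(prompt.lower().split())
        blended = [0.0, 0.0, 0.0]
        total_weight = 0.0
        for caption, image in TRAINING_DATA.items():
            weight = len(prompt_words & set(caption.split()))
            if weight:
                blended = [b + weight * p for b, p in zip(blended, image)]
                total_weight += weight
        if total_weight == 0:
            return None  # nothing in the training data resembles the prompt
        return [round(b / total_weight, 3) for b in blended]

    # A prompt combining "puppy" and "beach at sunset" pulls mostly from
    # the beach/sunset samples -- a mimicry of seen patterns, with no
    # notion of whether the result is physically consistent.
    print(generate("puppy at the beach at sunset"))
    ```

    The toy model also shows why it "doesn't know if it's right": there is no physics check anywhere, just a weighted mash-up of what it has seen, which is loosely why shadows and highlights usually look plausible but can disagree between objects.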