DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing
yujun-shi.github.io
I came across this paper on DragDiffusion today. It's an emerging technology out of Singapore that allows for detailed transformation of an image simply by dragging the part you want to move.
Among the examples given are a cat being made to look up and a statue being made to turn its head. It's not clear at the moment how much control the editor has over how the edit comes out, and it's not currently designed for animation, but I think this could be the beginning of a technology with huge potential for creating edits that would traditionally be thought of as impossible. Similar to how voice cloning has allowed new dialogue to be added to movies, this could allow for new shots, or even new scenes, depending on how the technology develops over the next decade.