This is very interesting.
Arguably, using an image generator to make a specific image like this is no different from doing the same thing in MS paint.
Although, I think that this circle isn't actually 50% of the image. If it were, the distance between the edge of the square and the edge of the circle would be about a quarter of the circle's radius, but in this image the gap looks roughly equal to the radius, which would make the circle only about 20% of the square's area. (Unless I've got my maths wrong.)
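To sanity-check that arithmetic, here's a quick sketch (taking the square to have side 1): a circle with 50% of the square's area has radius sqrt(0.5/pi), which leaves an edge gap of about a quarter of the radius; if instead the gap equals the radius, the circle covers pi/16 of the square, roughly 20%.

```python
import math

# Square of side s = 1. A circle covering 50% of the square's area
# has radius r = sqrt(0.5 / pi).
r_half = math.sqrt(0.5 / math.pi)
gap = 0.5 - r_half                 # distance from circle edge to square edge
print(gap / r_half)                # ≈ 0.25: the gap is about a quarter of the radius

# If instead the gap looks roughly equal to the radius (gap = r),
# then 0.5 - r = r, so r = 0.25, and the circle covers pi * r^2 = pi/16.
r_eq = 0.25
print(math.pi * r_eq ** 2)         # ≈ 0.196: roughly 20% of the square
```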
So, maybe it depends on the amount of control the prompter has on the result.
Right now, the generator outputs images similar to the prompt, but the result is very much controlled by the generator's algorithm.
If future developments put more control into the prompter's hands (composition, perspective, brush style, etc.), it could be considered the prompter's work the same way images generated by digital art software belong to the person making the inputs.
I would say the image would have to contain nothing but what's in the prompt, but I'm not sure that's a great way to qualify it.
I think it has more to do with how the generator works than how it's used or what it produces.
I've been thinking about this, and I think I've nailed down the distinction that I was trying to describe in this comment.
Image generators as they exist today analyse a prompt for key words, and generate an image that they identify with those key words. What they don't do is use the prompt as instructions to create the image.
For example, a while ago I was trying to get a very specific image of an archer firing an arrow towards the viewer in one-point perspective. The generator would keep making one-point perspective images with archers in them, but no matter how I adjusted the prompt it never produced the image I was describing, because it couldn't interpret the prompt as language; it only saw the key words.
I would say that if an image generator used the prompt as actual instructions for how to generate the image, then you could say that the image was created by the prompter using the generator as a tool. But as it is, the image is generated almost randomly, with the prompter setting limitations on the degree of randomness.
I think a good analogy of this is that if you used a preset in a video editor to colour grade the video, you can still say that you colour graded it. Even though you didn't select the individual settings, you gave the program a specific instruction and it produced a specific, repeatable result.
In this analogy, current image generators would be more like hiring someone else to do the colour grading and telling them to "make it gloomier". Even though you did affect the result, it's mostly dependent on the whims of the person actually doing the colour grading, or in this case, the random selection of the algorithm.
There is a middle ground in this analogy, where a director is sitting over the editor's shoulder telling them to bump the exposure and decrease the temperature, but at a certain point that flips over to the director doing the colour grading and the editor just pressing the buttons.
I don't know anything about computer science, but I think a truly instructable image generator is an entirely different beast from the generators we have now. It could be possible soon, though, and I would argue that using a generator like that would be no different from using MS Paint to create an image. Except for being less impressive.