AI tools that you use (or want to use) for editing

INIGHTMARES

FE Academy Member
Donor
Faneditor
The term AI is pretty general if you think about it, but I am wondering about modern AI tools that are cropping up all the time, like Eleven Labs or EbSynth.

I am working on an edit where I'd like to use an AI tool that could run a series of tasks on my PC. Tasks that are repetitive in nature and could maybe even be replicated by a key-stroke program.
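A keystroke-replay program is one route, but repetitive per-clip jobs are often easier to script directly. Here's a minimal sketch, assuming the repetitive task is trimming a list of clips with ffmpeg; the file names and timestamps are invented, and the commands are only printed rather than run:

```python
# Hypothetical sketch: batching a repetitive task (trimming clips with ffmpeg)
# instead of replaying keystrokes. Clip names and timestamps are made up.
from pathlib import Path

def build_trim_command(src: Path, start: str, end: str, dst: Path) -> list[str]:
    """Return an ffmpeg command that trims src to [start, end] without re-encoding."""
    return [
        "ffmpeg", "-i", str(src),
        "-ss", start, "-to", end,
        "-c", "copy",          # stream copy: fast and lossless
        str(dst),
    ]

clips = [
    ("scene01.mkv", "00:00:05", "00:01:40"),
    ("scene02.mkv", "00:00:12", "00:02:03"),
]

for name, start, end in clips:
    cmd = build_trim_command(Path(name), start, end, Path(f"trimmed_{name}"))
    print(" ".join(cmd))   # dry run; swap in subprocess.run(cmd, check=True) to execute
```

The same loop shape works for any per-file tool with a command-line interface, which is most of them.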

Anyway, what about you and your edits?
 
Last edited:
I regularly use Elevenlabs. Just recently I decided to do a few audio-only shorts recreating the cartoons of my childhood, and I'm now confident enough to extend the work into dubbing over whole visual scenes that I can integrate into full episodes.
 
I'm in favour of tools to assist in the process, and use melody.ml whenever I need to do audio work such as music removal or dialogue isolation.
I'm not in favour of image or audio generation though, so I wouldn't use any dialogue generator or image creator.
I do need to admit that I did use one to generate 2 images for my Kang cover, but I've since replaced those images with real images. I was in a rush at the time and couldn't find the type of images I wanted. I'm not proud of it and I won't do it again.
I'm not going to say anyone is a bad person for using them, after all I did, but I don't want to support those generation things or approve of them.
I'd be interested in any tools that allow us to process the existing media in some way, though: image stabilisation, possibly upscaling or frame insertion. At a push, I have used Topaz for special effects shots.
 
I do need to admit that I did use one to generate 2 images for my Kang cover, but I've since replaced those images with real images. I was in a rush at the time and couldn't find the type of images I wanted. I'm not proud of it and I won't do it again.
Why would that be an issue though? Using AI is possibly only slightly more problematic than the defacing of art that fanediting already is.
I'm not going to say anyone is a bad person for using them, after all I did, but I don't want to support those generation things or approve of them.
That's kind of what they are meant for though.
I'd be interested in any tools that allow us to process the existing media in some way, though: image stabilisation, possibly upscaling or frame insertion. At a push, I have used Topaz for special effects shots.
Topaz is an AI upscaler. It essentially creates a new image from a reference and uses AI learning to add detail to the image. What you get at the end is the interpretation of an image by the AI in the software.
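For contrast with how a classical, non-AI upscaler behaves, here's a minimal nearest-neighbour upscale in plain Python: every output pixel is copied from the source and nothing new is synthesised, which is exactly the step an AI upscaler goes beyond.

```python
# Classical upscaling sketch: nearest-neighbour interpolation only repeats
# existing pixels, so every output value is traceable to the source image.
def upscale_nearest(image: list[list[int]], factor: int) -> list[list[int]]:
    """Upscale a 2D grid of pixel values by repeating each pixel factor times."""
    out = []
    for row in image:
        stretched = [px for px in row for _ in range(factor)]
        out.extend([stretched[:] for _ in range(factor)])
    return out

small = [[10, 20],
         [30, 40]]
print(upscale_nearest(small, 2))
# [[10, 10, 20, 20], [10, 10, 20, 20], [30, 30, 40, 40], [30, 30, 40, 40]]
```

An AI upscaler replaces this purely mechanical repetition with a learned model that invents plausible detail, which is where the "interpretation" comes in.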
 
Why would that be an issue though? Using AI is possibly only slightly more problematic than the defacing of art that fanediting already is.
Are the many works by Andy Warhol 'defacing'? Is a director's cut 'defacing'? Are all derivative works 'defacing'? I'm sorry, I don't accept that. Fanediting is about celebrating and uplifting a movie. It's an evolution of art, and I take great care with what I do and try to uphold, respect, and celebrate what the artists did. Describing what I and many of us do as defacing feels rather insulting.

Topaz is an AI upscaler. It essentially creates a new image from a reference and uses AI learning to add detail to the image. What you get at the end is the interpretation of an image by the AI in the software.
Well, I could be persuaded that Topaz is just as bad... but I see Topaz as a lesser version of what image generators do. There are many algorithms that we can use. Topaz has an algorithm; it's been shown videos to train it on what to expect in certain circumstances. It's a complicated area to think about, but I guess I want to ask myself the question: is it possible for a human to manually upscale that video, or create inserted frames? I don't believe it is, so I don't consider Topaz to be harmful; I see it as a tool. AI generation, though, is literally taking humans out of the equation. It's a different circumstance.
 
Are the many works by Andy Warhol 'defacing'? Is a director's cut 'defacing'? Are all derivative works 'defacing'? I'm sorry, I don't accept that. Fanediting is about celebrating and uplifting a movie. It's an evolution of art, and I take great care with what I do and try to uphold, respect, and celebrate what the artists did. Describing what I and many of us do as defacing feels rather insulting.
Why ask if you are only open to a single viewpoint? Many would say AI is an evolution of art. Many would also say that derivative works are defacing the intent of the creator of said works. Derivatives by the creator aren't problematic, as they are the source of the IP and work. We aren't.
Well, I could be persuaded that Topaz is just as bad... but I see Topaz as a lesser version of what image generators do. There are many algorithms that we can use. Topaz has an algorithm; it's been shown videos to train it on what to expect in certain circumstances. It's a complicated area to think about, but I guess I want to ask myself the question: is it possible for a human to manually upscale that video, or create inserted frames? I don't believe it is, so I don't consider Topaz to be harmful; I see it as a tool. AI generation, though, is literally taking humans out of the equation. It's a different circumstance.
The lesser of two evils still implies it's evil, though. One could argue that all we are is organic computers with an ever-adjusting algorithm that interprets sensations and then reinterprets them as similar or different outputs. Learning informs our algorithms and shapes our productions.
 
I mean Andy Warhol is an interesting example to use. He was literally and figuratively defacing objects to reflect the emptiness and consumerism of modern society.
 
Why ask if you are only open to a single viewpoint? Many would say AI is an evolution of art. Many would also say that derivative works are defacing the intent of the creator of said works. Derivatives by the creator aren't problematic, as they are the source of the IP and work. We aren't.

The lesser of two evils still implies it's evil, though. One could argue that all we are is organic computers with an ever-adjusting algorithm that interprets sensations and then reinterprets them as similar or different outputs. Learning informs our algorithms and shapes our productions.
I'm open to learning, so perhaps I included a poor choice of words in there somewhere.
You say derivative works are defacing the intent. I don't see an edit of a copy of a work as defacing the original work, but defacing the intent is a rather nebulous concept. Do we know the intent of the creator 100%? Furthermore, when viewing a movie as a commercial product, whose intent is in question? The director's, the producers', the editor's? Works such as these are highly collaborative, and individual creators do not always have their voices truly reflected. I search for intent in the movies I work with. I look at what a director did in different sections. For example, my edit of Thor: Love and Thunder hangs entirely off my perception of Taika Waititi's intent for who Gorr is and for Jane and Thor's love for each other. I do not feel I am defacing Taika's intent, rather cutting away what I consider to be studio meddling.

Yes, we are organic computers; I can get on board with that. We live a shared experience and we communicate that through art. AI algorithms don't live in our society, so I don't feel those algorithms have a voice that is valuable to other humans.
 
I'm open to learning, so perhaps I included a poor choice of words in there somewhere.
You say derivative works are defacing the intent. I don't see an edit of a copy of a work as defacing the original work, but defacing the intent is a rather nebulous concept. Do we know the intent of the creator 100%? Furthermore, when viewing a movie as a commercial product, whose intent is in question? The director's, the producers', the editor's? Works such as these are highly collaborative, and individual creators do not always have their voices truly reflected. I search for intent in the movies I work with. I look at what a director did in different sections. For example, my edit of Thor: Love and Thunder hangs entirely off my perception of Taika Waititi's intent for who Gorr is and for Jane and Thor's love for each other. I do not feel I am defacing Taika's intent, rather cutting away what I consider to be studio meddling.
I'm thinking from the point of view of the IP creators.
Yes, we are organic computers; I can get on board with that. We live a shared experience and we communicate that through art. AI algorithms don't live in our society, so I don't feel those algorithms have a voice that is valuable to other humans.
Do they not live in our society? We created them to exist in our society and to run aspects of the majority of seemingly redundant tasks within it. Yes, they don't live like organic beings do, but 'artificial intelligence' in itself suggests there is intelligence forming and becoming. It's not organic life, but perhaps it's silicon life. That's the whole concept behind Blade Runner, The Matrix, Tron and Terminator, just to name a few. Do I currently think AI is a form of life that requires protection similar to human rights? No. But perhaps I'll be proven wrong in the future.

My original response was not meant to disprove or discredit you, but rather to share that using AI as a faneditor might not be such an issue or thing of shame. I think many of us would hope you wouldn't end up beating yourself up for using a learning tool during the learning process that we are all in. The posters look great, btw, and I think I'll actually start using similar approaches with AI instead of spending hours removing akemjthbg from a picture. Thanks for the indirect tip! :)

There is an interesting movie that analyzes the ethics of AI actors called S1m0ne. We seem to be only a few steps away from that reality. I think, as faneditors who derive, it makes sense to use the tools to reimagine. We also aren't impacting the larger culture, as we are so niche as to not be known or of interest to the majority of media consumers. Creators, however, are responsible for the economics of the industry, and I think that's where the ethical issue arises. But that's for another thread, so I'll stop there :) The movie was a good watch, albeit a bit bizarre at times.
 
IMO, editing is a whole artform in itself, and as an exercise it's all about cutting and patching source. The choices people make and how they are implemented are expressions of taste and problem solving; of storytelling. Limitations are a huge part of the art in that way. I think I inherently devalue AI in that sphere for being generative without real intent or human imagination. The whole process is too trial-and-error random to be imaginative in a meaningful way to me.

But to the point about the ethics of fan editing versus AI, I don't see the comparison. Remixing, sampling, interpolation: these are a backbone of music. And homage, tribute, scrapbooking, and fanfic have always been valid engagement with the art that came before them. Greek myth was all about pitching a new story into existing contexts. Fan editing really only comes into ethical question w/r/t piracy (distribution of materials), but even that is just about law, which we all know doesn't represent morality or ethics. But as art, another editor can appreciate edits and approaches to the same material, even outside the fanedit context. How something is edited is a cornerstone of criticism, also the fun of a Director's Cut, and how collaboration in any post production team works at all.

How someone takes to their work being 'defaced' is more personal than it is about ethics. Tom Cruise being upset about HFR televisions doesn't reflect on the owners who don't notice. And there are many examples of the hobby from Soderbergh to Topher Grace within the "industry" (if you wanna bootlick Hollywood like that) anyway.

Where AI differs is that what is being "stolen" regarding, say, a voice or a face is something personal. Not an artistic endeavor, and certainly not just a piece of media as capital. The entitlement to someone's being feels like a different ballgame than the desire to transform work with limited tools. When it gets as close to realistic as we can get it now, it feels less like an offshoot of scrapbooking and more like something sinister. And not that I think there's ill intent by anyone on here or anyone who uses it (I've dabbled a bit myself when helping with others' edits), but I'd rather not be a part of its normalization. It doesn't interest me creatively.
 
The term AI is pretty general
I'm not sure if this is entirely on topic, but I've been thinking about it a lot.
The term "AI" as it's been commonly used in the past few years is really just a marketing or branding word that doesn't have any real meaning.
Technically "Artificial Intelligence" means a machine that can perform tasks that usually require a human mind, but as soon as such a machine is created, those tasks no longer require a human mind, so the machines are no longer AI.
Before the invention of the calculator, all mathematical operations required a human mind. "Computer" used to be a job title that a person would hold. By this definition, a relay could be considered a form of AI.

So the term AI should only really be used for hypothetical future inventions, or to describe an area of study centred on making machines perform human tasks which they can't yet perform. But more recently the term AI has been used to describe pretty much any kind of algorithm with no real pattern to what programs are "AI" and what programs aren't.

I think this is a big part of what's led to the irrational hatred of "AI". The term makes it sound like it's some big new invention when it's really just lots of small developments in increasingly complex algorithms that aren't intrinsically any different from the algorithms that find a route on Google Maps.
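To make the route-finding comparison concrete, here's a tiny breadth-first search of the kind that underlies map routing: a fully deterministic algorithm with no "intelligence" involved. The road graph is invented for the example.

```python
# Breadth-first search for the shortest route in a (made-up) road graph,
# the same family of deterministic algorithm behind map routing.
from collections import deque

def shortest_route(graph: dict[str, list[str]], start: str, goal: str) -> list[str]:
    """Return the shortest path from start to goal, or [] if unreachable."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return []

roads = {
    "home": ["junction", "bypass"],
    "junction": ["cinema"],
    "bypass": ["junction", "cinema"],
}
print(shortest_route(roads, "home", "cinema"))  # ['home', 'junction', 'cinema']
```

A neural network is vastly bigger, but at run time it is the same idea: fixed rules applied mechanically to the input.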

The other big problem people have is specifically with generative AI. Firstly, in the case of image generation, people say it isn't real art, which maybe it isn't, but who cares? The doodles I do on a napkin aren't real art, but that doesn't mean I shouldn't be allowed to doodle. People complain that it just absorbs and regurgitates other people's work, but isn't that how everyone learns to create art? The actual "creative" part is in putting things together in new ways, which can still be done using AI as a tool (it's what everyone here uses AI for).
For voice generation, people complain that it could be used for nefarious purposes, and by now we've all seen the adverts on YouTube where someone uses an AI voice to pretend to be Richard Hammond or some other celebrity advertising an investment scam. But this argument is kind of like saying "Cars can be used for drunk driving, so you shouldn't use cars." The fact that a tool can be used immorally doesn't make the tool immoral. If it did, your own free will would be the most evil thing you own.
The other complaint people have about voice generation is that it's inherently immoral to use someone's voice without their express permission. But if that were true, simply doing an impression of a celebrity, or remixing their words in a YTP or similar work, would be immoral. You could say that you have to make it clear that it's not their real voice, but I think that depends on context. Does it really matter that Jason Momoa and Amber Heard never said the couple of lines about going to Sicily that I put in my Snyderverse edit? I'd like to see them try to sue me over that.

And the final biggest complaint that people have against AI is that it's taking away people's jobs, which it is. But what technology doesn't take away people's jobs?
Thousands of 1-Hour-Photo employees lost their jobs because you (and everyone else) prefer the convenience of digital cameras.
Is using Chroma Key tools immoral because you didn't hire someone who trained all their life to be a rotoscope artist?
I don't want to come across as heartless. It's always a sad thing when people lose their jobs, but it's part of the progress of technology. The only way to prevent people's jobs being replaced is to ban the invention of new technologies altogether, and the Luddites already lost that battle 200 years ago.

There is actually one other complaint against AI image generation that comes specifically from artists.
It goes along the lines of "I trained for years to create this stuff, and now other people can do it easily! Why should I bother!"
I didn't include this in the list because it's not really an argument against AI, it's just people being petty. There's no more or less reason to learn to be an artist now than there was ten years ago. The fact that other people can create something has no bearing on whether (or how) you should want to create things. Even if you are one of the top 5% of artists in the entire world, there are still 400 million people who can create art as good as or better than yours.
Nothing has changed on that front.

//rant over

Back to the original point of the thread. I can't wait for Elevenlabs to improve to the point where you can get reliably good takes on every generation.
I did some experiments with Retrieval-based Voice Conversion (RVC), which uses a voice sample as a base and creates a copy with the generated voice. I tried to create a version of Bad Lip Reading's Twilight video using the actual voices of the actors, the idea being that I could use my own voice to create the lines exactly how I want them to sound. But I found that this method takes even longer than Elevenlabs, and you need a very big sample of the generated voice for it to sound good. I'm sure in a few years Elevenlabs will have RVC capability, or at least some more options to control emotion, timbre and rhythm.
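For anyone who wants to script Elevenlabs generations rather than clicking through the web UI, here's a hedged sketch of what a request looks like. The endpoint and field names follow the public REST docs at the time of writing, so verify them against the current API; this only assembles the request and sends nothing.

```python
# Sketch of an Elevenlabs text-to-speech request. URL and JSON field names
# are assumptions based on the public REST docs; check the current API.
import json

API_BASE = "https://api.elevenlabs.io/v1"

def build_tts_request(voice_id: str, text: str, api_key: str,
                      stability: float = 0.5, similarity: float = 0.75) -> dict:
    """Assemble URL, headers and JSON body for a text-to-speech call."""
    return {
        "url": f"{API_BASE}/text-to-speech/{voice_id}",
        "headers": {"xi-api-key": api_key, "Content-Type": "application/json"},
        "body": json.dumps({
            "text": text,
            "voice_settings": {
                "stability": stability,          # lower = more varied takes
                "similarity_boost": similarity,  # higher = closer to the sample
            },
        }),
    }

req = build_tts_request("YOUR_VOICE_ID", "Line of replacement dialogue.", "YOUR_KEY")
print(req["url"])
```

Scripting it this way makes it practical to regenerate a whole batch of lines when you tweak the settings, instead of redoing takes one by one.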

A great tool I don't see talked about enough is Ebsynth. It's designed for forcing an art style onto a video (making a video look like an animated watercolour, for example), but I've been using it mainly for special effects. I've found that the results really depend on how much effort you're willing to put into troubleshooting and refining the shots. I'll make some comparisons of things I've done with it and post them here.

As for pipe-dream AIs, I've seen some promising development on tools for animating still images. Those could be very useful for lip-syncing AI-generated dialogue if they become good enough.
 
Do they not live in our society? We created them to exist in our society and to run aspects of the majority of seemingly redundant tasks within it. Yes, they don't live like organic beings do, but 'artificial intelligence' in itself suggests there is intelligence forming and becoming. It's not organic life, but perhaps it's silicon life. That's the whole concept behind Blade Runner, The Matrix, Tron and Terminator, just to name a few. Do I currently think AI is a form of life that requires protection similar to human rights? No. But perhaps I'll be proven wrong in the future.
They don't live at all, though. I realise that it's difficult to get our heads around how these things work (I'm a programmer and it took me forever to figure it out), but these things do not think and they do not imagine. They are nothing more than complex filters. The training process moulds the algorithm, then that algorithm just performs a mechanical set of tasks: input > output. They give the illusion of imagining a picture, but there is no imagination. If you give the same input, you get the same output every time. We don't have the access to give the same input each time, as we don't have access to the random seed that is added to our input; therefore it appears as if it's 'imagining' something different based on some inner process of decision making, but it's just a random seed. There is no decision-making ability there. Maybe in time there will be that kind of general intelligence, but for now these things are just amalgamation tools that are essentially ripping off billions of artists.
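The "same input, same output" point is easy to demonstrate with any seeded generator; image generators behave the same way with their noise seed, just at a much larger scale.

```python
# A seeded generator is a deterministic function: seed in, fixed sequence out.
# Fix the seed and the "imagined" result is reproduced exactly.
import random

def generate(seed: int, n: int = 5) -> list[int]:
    """Stand-in for a generative model: maps a seed to a fixed output."""
    rng = random.Random(seed)
    return [rng.randint(0, 99) for _ in range(n)]

assert generate(42) == generate(42)  # identical seed: identical "creation"
print(generate(42))                  # a different seed will almost surely differ
```

The apparent creativity between runs comes entirely from varying that hidden seed, not from any internal deliberation.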

My original response was not meant to disprove or discredit you, but rather to share that using AI as a faneditor might not be such an issue or thing of shame. I think many of us would hope you wouldn't end up beating yourself up for using a learning tool during the learning process that we are all in. The posters look great, btw, and I think I'll actually start using similar approaches with AI instead of spending hours removing akemjthbg from a picture. Thanks for the indirect tip!
Thank you, I appreciate that :)
My current Kang posters are using real imagery, not AI. I don't know if you saw the previous version; it had an AI Ant-Man and an AI army. I changed to a real image of toy soldiers and a real image of Ant-Man.
 
BTW, I am just doing an experimental comic-book adaptation of my own short story using Bing Image Creator. Nice stuff. I can do an English version as well.
Maybe I will do some AI-generated posters for my edits too.
 
I'll make some comparisons of things I've done with it and post them here.

Here's an example of using Ebsynth to add a lightsaber strike in my Force Awakens edit:
The full finished scene is here:

Here's an example from 'The Hole in the Bottom of the World' where I changed a character's design using Ebsynth.
These are really the best shots in the edit, and some of the others look pretty crummy, but that's mainly down to me not dedicating the time to make every shot look great.

I think the Star Wars shot, where I focused all of my energy on a few seconds, shows the real potential of using Ebsynth, and even then that's with my very limited knowledge and experience.
 
I think those are some good examples. Any tool needs to be guided to get good results. At the end of the day you had a specific idea of what you wanted and you used a tool to do that specific thing that would have otherwise taken you many times longer.
 
I understand the ethics problem, but I think for our hobby it's very minor compared to other common practices. For example, I would rather someone use generative AI to make their own fan edit cover than just finding existing fan art, slapping a title on it (changing nothing else), and saying "I made this". Either way the artist is not credited, but if it's AI generated, at least it's got a slightly higher chance of being "transformative".
 
I recently tried to recruit a friend who's a writer for one of my forthcoming editing projects; he said he felt uneasy about the method and backed out. We're all cool, but it's a real shame it's such a hot button with some people (especially in the fan film community). I've nearly lost friends in certain Discords whenever I post A.I. imagery in an art channel or show off a video I've made with Elevenlabs dubs.

I understand the ethics problem, but I think for our hobby it's very minor compared to other common practices. For example, I would rather someone use generative AI to make their own fan edit cover than just finding existing fan art, slapping a title on it (changing nothing else), and saying "I made this". Either way the artist is not credited, but if it's AI generated, at least it's got a slightly higher chance of being "transformative".
I've been using A.I art for a few fanfic novella covers lately.
 
Yeah, I've had that kind of experience once or twice too. Some people just automatically dismiss anything AI-related for some reason.

Anyway, my new avatar is an AI creation based on a description of how I look when mountain hiking. Pretty close to the real thing :)
 
I've been using A.I art for a few fanfic novella covers lately.
I'm an amateur writer; I had a novel almost published a while back, but the agent dropped me without much warning. I do use AI-generated imagery for some accompanying art that goes with my work, but for the most part it's never 'generate pic -> slap title -> publish'. I always tweak it in some form through Photoshop, be it correcting image mistakes, but more commonly just generating individual elements and compositing them into something I deliberately made.

I publish a Halloween webnovel, my first 100 pages are out, and I used AI generation to make things like the cover (heavily edited) and the scene break images. But this is something I justify by telling myself it's a free project that anyone can access. If and when I get published, I'd be paying for covers by smaller artists.

Here's an example of using Ebsynth to add a lightsaber strike in my Force Awakens edit:
After taking a quick glance at the Ebsynth website, which I was unaware of before, the way it turns live action footage into moving water-color paintings is so cute. I could see a 'What Dreams May Come' edit utilizing this to great effect.
 