I've been using AI voice cloning a lot in my edits to change/add lines when I need to. I've worked out a pretty effective workflow for it.
First, comb through whatever you're editing and compile a set of every line the character you're cloning says, avoiding any lines where other people are talking over them.
Then use a program like Spleeter to isolate the voice from any music or sound FX, and comb through the results again to remove any lines that are unclear or still have background noise. Clarity is super important here.
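The Spleeter step can be scripted. A minimal sketch in Python, assuming Spleeter is installed via pip and the input file is called `episode_audio.mp3` (that filename is hypothetical); the 2stems model splits the track into vocals and accompaniment:

```python
import shutil
import subprocess

# Build the Spleeter CLI call: "2stems" separates vocals from everything else,
# and -o sets the output directory. "episode_audio.mp3" is a placeholder name.
cmd = ["spleeter", "separate", "-p", "spleeter:2stems", "-o", "separated", "episode_audio.mp3"]
print(" ".join(cmd))

# Only actually run it if Spleeter is installed on this machine.
if shutil.which("spleeter"):
    subprocess.run(cmd, check=True)
```

With 2stems, the isolated voice should land in `separated/episode_audio/vocals.wav`; that's the file to comb through for clean lines.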
When you have your final voice samples, which should ideally add up to close to five minutes of audio, you can upload them straight to the voice cloner (I use ElevenLabs). From there you type in the line you want it to read and tweak the settings until it sounds how you want it to.
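A quick way to check whether your clips have hit that rough five-minute target is to sum their durations with Python's standard-library `wave` module. A sketch using synthetic silent clips standing in for real samples (the filenames are made up):

```python
import wave

def wav_duration(path):
    """Return the duration of a WAV file in seconds."""
    with wave.open(path, "rb") as w:
        return w.getnframes() / w.getframerate()

def total_sample_length(paths, target_seconds=300):
    """Sum clip durations and report progress toward the ~5 min target."""
    total = sum(wav_duration(p) for p in paths)
    return total, total >= target_seconds

# Demo: write two short silent mono clips (2 s and 3 s at 16 kHz).
for name, seconds in [("clip_a.wav", 2), ("clip_b.wav", 3)]:
    with wave.open(name, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(16000)
        w.writeframes(b"\x00\x00" * 16000 * seconds)

total, done = total_sample_length(["clip_a.wav", "clip_b.wav"])
print(f"{total:.1f}s collected; target reached: {done}")  # → 5.0s collected; target reached: False
```

Point it at your real cleaned-up clips and it tells you how much more material you still need.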
The major downside of voice cloning right now is that it takes a lot of time to get a short line out of it. You'll have to generate a lot of 'takes' to get the line you want, and you'll probably end up combining two or more takes into a single line, especially if you're trying to lip-sync it to the original footage. It's not a quick and easy process.
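Splicing takes together is really a job for your audio editor, but the basic concatenation can be sketched with the standard-library `wave` module, assuming all takes share the same sample rate, channels, and sample width (the filenames here are hypothetical):

```python
import wave

def concat_wavs(parts, out_path):
    """Join several WAV takes into one file (assumes matching formats)."""
    with wave.open(out_path, "wb") as out:
        for i, part in enumerate(parts):
            with wave.open(part, "rb") as w:
                if i == 0:
                    out.setparams(w.getparams())
                out.writeframes(w.readframes(w.getnframes()))

# Demo: write two synthetic half-second takes (8000 frames each at 16 kHz).
for name in ("take1.wav", "take2.wav"):
    with wave.open(name, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(16000)
        w.writeframes(b"\x00\x00" * 8000)

concat_wavs(["take1.wav", "take2.wav"], "line_final.wav")
with wave.open("line_final.wav", "rb") as w:
    print(w.getnframes())  # 16000 frames = one second at 16 kHz
```

In practice you'd still do the fine crossfading by ear in an editor; this just shows the butt-join.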
I've seen people suggest using AI for narration before, but that seems like it would be a nightmare to me. Unless you want the narration to sound purposefully wooden, you'd probably have to generate it sentence by sentence, with multiple takes for each sentence.
It's worth trying for the sake of advancing the craft, but you'd have to be super dedicated.