I agree that fact checking needs to be the standard in media coverage of political statements.
But I don’t think the AI tools that are currently available are ready to do it without significant human oversight. They are still prone to hallucinations and other unpredictable behavior now and then.
Yes, AIs do need oversight. But real-time fact checking isn’t possible without them, and issuing corrections afterwards when an AI makes a mistake is far better than just letting politicians get away with blatant lying. Also, as long as the AI is supervised, any line can be vetoed if the supervisor thinks it may be off, keeping the corrections and sourced statements conservative; for this sort of thing, it’s obviously better to be silent than to be wrong.
And the earlier such projects start, the sooner we can learn to do this better as AIs improve, and to recognize the signs of an AI hallucinating.