LLMs are trained to do one thing: produce statistically likely sequences of tokens given a certain context. That kind of output won't do much to poison the well on its own, because we already have models that can clean it up.
Far more damaging is the proliferation and repetition of false facts that appear on the surface to be genuine.
Consider the kinds of mistakes AI makes: it hallucinates plausible-sounding nonsense. That's exactly the kind of mistake you can lure an LLM into making more of.
Our riding was projected as a 99% Conservative win, but we went NDP. The riding-specific forecasts are misleading, and I wonder how many voters stayed home because they looked at the forecast and figured their vote was pointless.