It seems like any kind of mass-comment system would be easy to detect. Simply limiting the post rate and comparing posts against each other would work, just like current anti-spam filters do.
Limiting what post rate? If it's coming from a farm it could have the same distribution as real people. If you read the article, you'll see that it's not about spam. It's about astroturf, and not even the kind where you just get people to repeat your talking points. There are more subtle ways of diverting attention toward a legitimate message you find favorable, or away from a message you'd rather have people forget. That's all much harder for an automated system to detect, and the karma-whoring part harder still.
Look at some of the other comments on this very thread. Not the top vote-getters, but the ones that must have gotten two or three upvotes apiece. Several of those could easily have been generated by an AI designed to rephrase an already-popular view, perhaps with a pop culture reference or two thrown in to make it seem more authentic. Voila, instant karma, which can then be used in the ways the original presentation suggested to influence who reads what.
It only seems easy until you spend five minutes thinking about it.
It is the same problem as credit card fraud detection on an ecommerce site. The naive stuff is easy to catch, but as the fraud gets more sophisticated it becomes indistinguishable from human traffic. It isn't just that bots will post stories; the bots are controlled by persona-amplification software, so a single person could control 100+ bots, tweaking and modifying their behavior while keeping a high-level view of the conversation.
I read a fairly compelling Doctorow short (possibly I, Rowboat?) which posited that the origin of AIs was spam filtration. By creating an ongoing cat-and-mouse game between spambots and anti-spambots, a selective pressure was accidentally created from which intelligent language-using agents emerged.
This works only if all the comments come from the same IP and the puppeteer is stupid enough to post them all at once.
Additionally, if the AI can pass the Turing test, then comparing the posts would not be practical. Passing the Turing test means it is not possible to distinguish the comments from human comments by looking at the comments alone.
You don't need to detect a machine, astroturfing now is mostly a manual process.
Recognizing the same topics repeated is enough, even if it happens over time. The longer the time between posts, the less effective the campaign. The less cohesive the topic, the less effective the campaign. This already exists somewhat, machine or human is almost irrelevant.
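The "recognizing the same topics repeated" idea can be sketched mechanically: score pairwise wording overlap between posts and flag pairs above a threshold, no matter how far apart in time they were made. A minimal illustration (the 0.6 threshold and the sample posts are hypothetical, and a real system would use better text features than raw bag-of-words):

```python
from collections import Counter
import math

def cosine_sim(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two posts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def flag_repeated_topics(posts, threshold=0.6):
    """Return index pairs of posts whose wording overlaps suspiciously,
    regardless of how much time separates them."""
    flagged = []
    for i in range(len(posts)):
        for j in range(i + 1, len(posts)):
            if cosine_sim(posts[i], posts[j]) >= threshold:
                flagged.append((i, j))
    return flagged

posts = [
    "candidate X has a great plan for jobs and the economy",
    "the weather was terrible at the game last night",
    "candidate X really has a great plan for the economy and jobs",
]
print(flag_repeated_topics(posts))  # → [(0, 2)]
```

The two near-duplicate talking points get flagged even though an unrelated post sits between them, which matches the point above: spreading the campaign out over time doesn't hide the topical repetition, it just dilutes the campaign's effectiveness.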
I had an anti-Obamacare Republican canvasser show up at my door during election season, and ended up debating for about an hour on the front porch. Though it was an enjoyable and intelligent discussion (far from the stereotype of the Tea Party), I couldn't help but realize that I effected a 1-to-1 "time attack" that drained his ability to canvass voters who actually had a snowball's chance of voting for his candidate.
For better or worse, that seems to be an attack vector in a free marketplace of ideas: finding ways to burn the time and energy of your opponent. (See also: "outrage fatigue".)
This is one of the major goals of trolling -- trying to get someone to waste a lot of time trying to convince someone who isn't actually interested goes side by side with trying to get someone to waste a lot of emotion on someone who doesn't actually care. Part of what makes it so effective is that it can be hard to distinguish from genuine concern (i.e., you found it an enjoyable and intelligent discussion, but a troll might act the same way you did purely to waste the canvasser's time, without actually caring about the topic at hand).
One of my rules is that, if someone seems to be trying to get me to invest considerably more time or energy than they're investing, I'll only engage if I find the process of researching/writing about a given topic valuable in and of itself.
These days I usually write about things I want to understand (through writing), not things I already understand. I like to think this approach is immune to trolling.