random_nerd wrote: Are they planning to implement Turing tests to distinguish bots from particularly annoying or repetitive people?
There is the crux of it. It becomes a game of cat and mouse between bot creators and platform owners. If you really want to get fancy, you get into AI/machine learning and other fancy-pants stuff to make a bot sufficiently random and seemingly authentic. Likely, the best the law could do is call for a 'best effort' to identify them.
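To make 'best effort' a bit more concrete, here's a minimal sketch of the kind of behavioural classifier a platform might run. Everything in it is hypothetical: the features, the labels, and the numbers are invented for illustration, and a real system would be vastly richer.

```python
# Minimal sketch of a "best effort" bot classifier, assuming a platform can
# extract per-account behavioural features. Features, labels, and numbers
# are all hypothetical; this only illustrates the general approach.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per account:
# [posts per hour, fraction of posts that are retweets, fraction with duplicate text]
X_train = np.array([
    [60.0, 0.95, 0.80],  # high volume, mostly retweets, duplicates -> bot-like
    [45.0, 0.90, 0.70],
    [1.5,  0.20, 0.05],  # occasional, mostly original posts -> human-like
    [0.8,  0.10, 0.02],
])
y_train = np.array([1, 1, 0, 0])  # 1 = bot, 0 = human (hand labels)

clf = LogisticRegression().fit(X_train, y_train)

# Score a new account. A platform would act only above some threshold,
# since the false positives here are exactly the "particularly annoying
# or repetitive people" from the question above.
suspect = np.array([[50.0, 0.92, 0.75]])
print(clf.predict_proba(suspect)[0][1])  # estimated probability of "bot"
```

Of course, the cat-and-mouse part is that as soon as bot creators learn which features matter, they randomize exactly those features.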
There are also click farms out there staffed by real people, because sometimes it's cheaper to hire a bunch of low-wage workers than to actually build the automation.
The Russian efforts appear to use a combination of the two.
Plenty of humans, along with plenty of retweets/posts/clicks generated by bots.
Here is an interesting story from CBC Radio about how Russian efforts made #releaseTheMemo skyrocket (it's a good listen). The original investigation was published on Politico here:
https://www.politico.com/magazine/story ... emo-216935.
On the other hand, if any platforms have huge amounts of data, deep expertise at mining that data, and algorithms to classify user behaviour, it's the likes of Facebook, Twitter, Google, and so on. I think the reality is they didn't detect and stop the questionable behaviour leveraged by Russian efforts because they didn't see it coming. To them, it probably looked a lot like the usual account/message boosting that happens all the time.
There are probably lots of folks who wish they had recognized the problem earlier, because this isn't the kind of media coverage these platforms want (to say the least). That's not just because it shows they unwittingly helped Russia influence public opinion in many countries about many things, but also because it lays bare how many organizations (mostly advertisers) use the platforms to manipulate people. We all know manipulation is the whole point of advertising; we just don't like to be reminded of how effective it is.
It's also not as clear-cut as 'blame the bots!'.
http://www.cbc.ca/news/technology/twitt ... -1.4567691

As for the role of automated internet programs, or bots, the researchers are quick to point out that their findings shouldn't be taken to mean bots don't matter, or don't have an effect.
Rather, "contrary to conventional wisdom," they write, bots accelerated the spread of both false news and true news — but did so at about the same rate.
"When you remove them from your analysis, the difference between the spread of false and true news still stands," said Soroush Vosoughi, who also co-authored the study. "So they can't be the sole reason as to why false information seems to be spreading so much faster."
The study was published in the March 9 issue of the scientific journal Science.
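The 'remove the bots and see if the gap survives' check is easy to picture with a toy calculation. Here's a minimal sketch, assuming we can tag which cascades were driven by bot accounts; all numbers below are made up for illustration, and this is not the study's actual methodology.

```python
# Illustrative sketch of the robustness check described above: compare how
# fast false vs. true stories spread, then repeat with bot-attributed
# cascades removed. The data is invented; the point is the method.
from statistics import mean

# (retweets_per_hour, is_false, came_from_bot) for individual story cascades
cascades = [
    (120, True,  False), (95, True,  True), (110, True,  False),
    (40,  False, False), (55, False, True), (35,  False, False),
]

def spread_gap(rows):
    """How much faster false stories spread than true ones, on average."""
    false_rate = mean(r for r, is_false, _ in rows if is_false)
    true_rate = mean(r for r, is_false, _ in rows if not is_false)
    return false_rate - true_rate

human_only = [c for c in cascades if not c[2]]

print("gap, all activity:", spread_gap(cascades))
print("gap, bots removed:", spread_gap(human_only))
# If the gap survives the second comparison, bots can't be the sole
# reason false news spreads faster.
```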
The idea there is people are far more likely to pass on novel information, and it's a lot easier to come up with fake novel information than real novel information.
An interesting point the article notes is that this is the result of undirected research at MIT, funded by Twitter. The researchers benefit by having access to all of Twitter's raw data, which most researchers don't. But then, there can be significant problems if the door for researchers is left wide open.