
Linked: Game Of (Internet) Life: How Social Media Reacts To Questionable News

I thought this look at how the news of someone’s death spread, and then became questionable, was very interesting. The thing that jumped out at me, which I’ve talked about before, is how linking to an old article about the person’s life suddenly made the tweet seem more legitimate in some eyes, because people assumed the link was sourced information about the death, when it wasn’t at all.

“First, most users quickly trusted the initial reports as the information filtered in. This is to be expected: research has shown that individuals tend to trust those in their social networks. And indeed, the mathematician whose tweet was the primary source, while not the closest person to the deceased, was in the same community. In other words, what he said had weight. Further, by linking an article in Scientific American, users may have made a connection between the news and the article, even when the tweet did specify that was not the case.”

It’s interesting to me because it goes to show how many people simply don’t read articles before sharing them. I had always assumed most people took whatever the original tweet said at face value without reading the article, but here we seem to see an example of people not reading the tweet very carefully either.

This is a problem. When so many people claim to get their news from social media, how informed are they really if they don’t read the articles that are linked, and how easy does that make it to manipulate us with false information?

Alas, as the article explores a bit further, why do we trust who we trust? And, for me, when do we start to adjust away from our first instinct of trusting our networks? How many times do I have to see something provably false coming from a contact before I no longer trust anything they say, or stop following them altogether?

I don’t think we’ve widely gotten there yet as a user base, though I suspect that’s because too many people aren’t looking at the information they share to determine whether it’s true. I don’t think it will take long before that changes, though. Ultimately, it’s not Twitter or Facebook that will get false information out of our networks; it’s the social penalties that will come from sharing it in the first place.

So far, those penalties haven’t been all that severe. We are generally too polite to point out others’ errors, and the algorithms don’t help us out in that regard. (Oh, you commented on this fake news story? You must want more of that…)

Eventually, though, it’s coming. People will simply quit paying attention to you. We don’t like being lied to.

Go check out the whole article. There’s much more about how people questioned the news and went to Wikipedia to confirm it, only for Wikipedia to be updated back and forth before the death was truly confirmed, and about how people interacted with the original source of the news. It’s pretty intriguing.

https://www.techdirt.com/articles/20200428/16132944398/game-internet-life-how-social-media-reacts-to-questionable-news.shtml
