Shared Links (weekly) Feb. 7 2021

For U.S. businesses, less data is more than ever

The Future Of Mental Health And Career Support For Remote Workers

No, Getting Rid Of Anonymity Will Not Fix Social Media; It Will Cause More Problems

I cannot stress this enough: getting rid of anonymity does nothing to stop harassment (just look at Facebook), and it only hurts already marginalized people.

eDiscovery Tug of War: A Breakdown of the In-House vs. ALSP Debate, Part Two

Defensible Deletion: The Proof Is in the Planning

Microsoft launches Microsoft 365 for Legal

How to ensure mental wellbeing policies genuinely work for employees

The ethical quandary of being a social media manager in 2021

Strong stuff from Tim Cook

“What are the consequences of seeing thousands of users joining extremist groups and then perpetuating an algorithm that recommends even more?”

New ESI Sanctions Order Offers E-Discovery 101 Course for Lawyers

Linked: How to Use RSS Feeds to Boost Your Productivity

RSS is not gone; quite the opposite. Most people don’t use RSS subscriptions the way they did in the old Google Reader days, but RSS is running underneath a whole lot of the stuff we all use every day.

But I also want to point out that there are a TON of good reasons to use an RSS reader now. Maybe more than there were when Google still had one. As it is, we’ve sort of grown into the habit of letting social media inform us. If there’s a topic we want to know about, we follow some accounts and let the algorithm decide for us what we need to see.

Look how well that’s working out.
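
If you’ve never looked at what actually sits behind an RSS reader, it isn’t much. Here’s a minimal sketch, assuming the third-party feedparser library and a placeholder feed URL; any blog or news site that still publishes a feed (and most do) would work the same way.

```python
# Minimal sketch of reading an RSS feed directly, with no algorithm
# in between deciding what you see. Assumes the third-party
# "feedparser" library (pip install feedparser); the URL is a placeholder.
import feedparser

FEED_URL = "https://example.com/feed/"  # swap in any site's RSS/Atom feed

feed = feedparser.parse(FEED_URL)
print(feed.feed.get("title", FEED_URL))

# The ten most recent items, in the order the site published them.
for entry in feed.entries[:10]:
    print(f"- {entry.title}\n  {entry.link}")
```

That’s essentially all a reader app does, multiplied across every feed you subscribe to: fetch, parse, and show you everything, in order, unfiltered.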

Linked: Your Data Is Discriminating…Against You

I think this is really interesting, because the more information about you there is, the more reasons someone can find to either rule against you or to dig further into you, which leads to even more information about you being available, and round and round we go. “That is in part because algorithms are made…

Over-Simplistic Scientific Intelligence

And this really brings me back to my point: we do a poor job of truly understanding science, statistics, and cause and effect. We believe that algorithms have all been well thought out and produce a “true” result, even when they are trying to predict something as unpredictable as what traffic will look like 20 years from now. We assume social science studies are giving us the “right” answer for how to educate people, or train them for the best outcomes, without considering what we are teaching them about the larger world. We assume we can tweak one belief, or one thing, without human beings reacting to those changes in unpredictable ways, all the while thinking our one change will cause the reaction we DO predict.

We assume a lot that should never be assumed. We over-simplify a world that actually has more influences than we can possibly account for, and assume that what is really a small statistical difference represents a universal truth.

It doesn’t. There are no simple answers. It takes hard work, hard discussions, and lots of listening to figure out the best way forward. Don’t wait for AI to tell you what to do; it may be missing quite a bit.

Happiness Lab On How Grades And Rewards are Manipulative

One of the most popular arguments we hear, and one I’ve made myself, is that to truly stay informed and avoid living in the bubble of our own political bias, we need to make sure we are getting information from a variety of sources, including ones we may not agree with.

This study seems to be telling us that isn’t enough, and that it can easily be manipulated. If I read an opposing viewpoint and there’s no reward for doing so, I’m unlikely to really be influenced by it, but if I read an opposing viewpoint and get rewarded for it, I’m more likely to change my mind.

Now, remember that emotional contagion we might get from social media? What if I shared one side of a political view and got rewarded by the algorithms, or whoever, with lots of likes and comments, and the post got shared a whole bunch, but posts from the other side got none of that? Which side am I more likely to agree with? Right, the one I got better grades on. Not because it’s true, better, or more accurate, but because I am rewarded for thinking that way. Rewarded the way I’ve been my whole life, since I was a little boy, from the first time my parents wanted me to behave a certain way, all the way through my school years, and for all of my career.

How hard would that be to fight against? Almost impossible, I’d say. And how easy would it be for social media to do it, either the companies themselves or large groups of users?

How does that influence what we do see on social media?