
Hi fediverse!

I'm teaching a class on technology ethics from a future perspective next week and want students to discuss real-life cases of tech and ethical questions in small groups. Does anyone have good suggestions for cases, especially beyond obvious ones like Cambridge Analytica?

@mmin China's new Social Credit System would pose a good one, especially given its mixed / unclear usages, and how they may reflect or reinforce biases that bar people from social advancement.

There's some scholarship on it, but Planet Money has a good lay approach to it

Thanks! That's a really good one. One of the student groups is already doing a bigger project on it though, so it might overlap. The article looks interesting!


Deplatforming on privately run social networks as well as "shadow banning." I'm sure you can find several examples across the political spectrum.

Blacklisting of cryptocurrency coins just because they were previously used (typically several transactions/owners ago) in some allegedly questionable transaction.

Here is a site listing companies that have had their data stolen. If you look into some interesting cases there, you might find material for your class :)


@mmin The Google+ incident, in which data that wasn't meant to be shared was exposed

@mmin I remember reading about Google getting access to some UK NHS (medical records) data & then abusing that access (e.g. copying the data & accessing far more than was agreed) to such a degree that the NHS shut them off. Can't remember where though :(

Also, Google collecting children's and students' data in a number of schools & a university in California via Google Docs even though they'd agreed not to.

There are too many bad things that Google has done to list them all :(((

@mmin Julian Assange has reported on many of Google's nefarious political activities too.

Thanks everyone for the suggestions! I'll give this some thought today. Ideally I'd like to find cases that provoke thinking but are not too black and white, so it's not so easy to say what's ethically acceptable and what's not.

@mmin Probably a little late, but NFC implants and other medical implants are a good one.

On the one hand, most medical implants are life-saving and not given lightly.

On the other hand, there are growing concerns about security, device lifespan, and privacy.

And with NFC implants, there are all those concerns, plus legal and ethical questions around workplace use (including what happens to people with conditions that make those implants unsafe for them).

Thanks! Looks interesting and is probably worth a mention.


Moi! Nice to come across people in Finland here. I'm over in Turku (working at Finland Futures Research Centre) a couple of hours away from Helsinki. I travel to Helsinki often though for different events.

@mmin @dada Turku is a nice city. Ping me if you ever want to have a beer while in Helsinki. Always interested to meet people from the federated networks :)

@mmin I only really know the obvious ones, but it would be really interesting to hear what the students make of the case when FB ran its secret psychological experiment on whether the mood of users could be affected by the types of posts they saw in their timelines (e.g. would showing users negative posts from others affect the 'mood' of the users' own posts, and vice versa).

Thanks! That's a good case. We actually discussed it on the same course two years ago. The only issue is that it leads to a rather black-and-white discussion, just condemning FB rather than exploring a complex dilemma.

This year I decided to go with a generic list of questionable uses of AI, and children and information technology as the other theme. I'll try to remember to toot back how it goes 😊

@mmin Ahhh that's fair. It was clearly unethical, but the grey area comes from the cleanliness of the results. When people don't know they're being studied, you don't introduce biases into subject behaviour, so I guess the question is whether that lack of ethics is worth getting 'real' results, or whether it would have been okay if FB had used that data to improve the impact of its product on its users... but it makes sense to steer over to questionable uses of AI

@Matti Minkkinen @dada @Jason Robinson Ethics is a vast field, but let us say that ethics is about thinking and articulating how to live your life, what right and wrong mean, and how you should act, for yourself, for those around you, for your society, and towards the world and everything in it. Then the tools you use to express yourself, like your voice and body, or what you communicate through digital publishing, can have a large impact. Hopefully, that impact is positive. Before means of mass communication, your reach was limited in space and time; today it can become global in a few hours if you are unknown, and in a few seconds if you are already known.

I think it makes sense in a student setting to ease them into ethics using dilemma stories that could be from their own lives. Since you are making a connection to technology, it makes sense to talk about how we communicate with those close to us and then move outward: from our immediate family, like parents and siblings, to friends, boyfriend, girlfriend, spouse, etc. Today's technology removes proximity: we do not have to be in the same place to talk to each other. It allows us to talk and see each other synchronously, where we both have to be in a video call at the same time, but it also allows us to talk asynchronously, e.g. via voice messages or video messages.

How does this affect how we interact when we are together? We can be clued in when we get together, but we also know a lot of people close to us zone out and stare into their little slot machines for the next social media fix. This brings us to ethics: I think many people's social interaction is in need of a digital detox.

There are specific Digital Judgement programs in schools to help students, parents and teachers consider their ethical behaviour in the digital age, e.g. filming fellow students in the gym shower and posting it online is NOT okay, and the IMPACT of such transgressions is so much greater today. EVERYONE has seen it. What is the correct response today? We had bullies and people who got bullied before the digital age. Can bullies be banned from owning a digital device or a Twitter account? Are there even local laws to deal with them, let alone global laws? Is banning them the correct response, or is it to teach people to ignore bullies, to toughen up? "Everyone's nudes are online - who cares?" Such statements are heard, mostly from people who have not experienced the intense volume of mass-media attention. Many of us, perhaps most of us, are not that easygoing, even if we like to think we can be.

I think you will find that the massive privacy breaches are impossible to appreciate without starting small, and making small count first. We are simply not good at big numbers. We can feel and cry for the lost privacy of one person; we struggle to cry for 80,000,000 people. We can get angry and harness energy to fight for better privacy, or we can become depressed: it does not matter anymore, what is the use, privacy is dead, right? Well, WRONG, though that is easy to believe. New laws are being passed to allow legal action against these new global threats, e.g. the GDPR. Will privacy be breached after the GDPR? Of course. Will it now be possible to make people pay? Yes, and that is new. Will that make solution providers more careful? Yes. Will those better tools have flaws? Yes, and so the cycle of improvement repeats.

If we start with the woman and man in the mirror, and teach our kids to do the same, then those are the ethics that will come through, directly or indirectly, in the tools we use and create to express ourselves: the Good, the Bad and the Ugly. Let us try to avoid the Bad and the Ugly, and keep the Good. Discussing ethics and technology can help with that, so good luck with your classes.

@mmin The need for ethical approaches to AI in news media and #journalism, where its use is already much more pervasive than most people realize.

Don't know if you have LinkedIn. Ellen Schuster from Deutsche Welle wrote a good article on that:



I saw @daibarnes sharing our website (thank you!), but we also have a community and #ethics is one of our important Humane Design tech topics for the near future.

We are at and just started building out our presence, with more projects to come.


@mmin Thanks for mentioning Awful #AI ! I've added it to
