I'm teaching a class on technology ethics from a future perspective next week and want students to discuss real-life cases of tech and ethical questions in small groups. Does anyone have good suggestions for cases, especially beyond obvious ones like Cambridge Analytica?
@mmin China's new Social Credit System would make a good one, especially given its mixed/unclear usages, and how they may reflect or reinforce bias that traps people and blocks social advancement.
There's some scholarship on it, but Planet Money has a good lay introduction: https://www.npr.org/sections/money/2018/10/26/661163105/episode-871-blacklisted-in-china
Thanks! That's a really good one. One of the student groups is already doing a bigger group project on it, though, so there might be some overlap. The article looks interesting!
Deplatforming on privately run social networks as well as "shadow banning." I'm sure you can find several examples across the political spectrum.
Blacklisting of cryptocurrency coins simply because they were previously used (typically several transactions/owners ago) in some allegedly questionable transaction.
@mmin @alcinnz maybe the connected gadgets used in investigations/court things? https://www.cnn.com/2017/03/07/tech/amazon-echo-alexa-bentonville-arkansas-murder-case/index.html or those gadgets calling help “on their own” https://www.independent.co.uk/life-style/gadgets-and-tech/news/man-beat-girlfriend-murder-threat-alexa-gadget-call-police-google-home-bernalillo-county-sheriff-new-a7835366.html
@mmin The Google+ incident, in which data was exposed that wasn't meant to be.
@mmin Probably a little late, but NFC implants and other medical implants are a good one.
On the one hand, most medical implants are life-saving and not given lightly.
On the other hand, there are growing concerns about security, device lifespan, and privacy.
And with NFC implants, there are all those concerns, plus legal and ethical questions around workplace use (including what happens to people with conditions that make such implants unsafe for them).
@mmin humane tech. Not a case study but relevant
Thanks! Looks interesting and is probably worth a mention.
@mmin I only really know the obvious ones, but it would be really interesting to hear what the students make of the case where FB ran its secret psychological experiment on whether users' moods could be affected by the types of posts they saw in their timelines (e.g., would showing users negative posts from others affect the 'mood' of the users' own posts, and vice versa).
Thanks! That's a good case. We actually discussed it on the same course two years ago. The only issue is that it leads to a rather black-and-white discussion: students just condemn FB rather than grappling with a complex dilemma.
This year I decided to go with a generic list of questionable uses of AI, and children and information technology as the other theme. I'll try to remember to toot back how it goes 😊
@mmin Ahhh, that's fair. It was clearly unethical, but the grey area comes from the cleanliness of the results. Without people knowing that they're being studied, you don't introduce any biases into subject behaviour, so I guess the question is whether that lack of ethics is worth getting 'real' results, or whether it would have been okay if FB had used that data to improve the impact of its product on its users. But it makes sense to steer over to questionable uses of AI.
Don't know if you have LinkedIn. Ellen Schuster from Deutsche Welle wrote a good article on that:
We are at community.humanetech.com and just started building out our presence, with more projects to come.
@mmin Thanks for mentioning Awful #AI ! I've added it to https://github.com/engagingspaces/awesome-humane-tech
Here are the slides for that class this year: https://my.owndrive.com/index.php/s/UrKeO4V0ggPQA2t
It's a very general intro to tech ethics from a future-oriented perspective. A lot of it is centred on how sociotechnical change relates to human agency. I discuss MIT's Moral Machine a bit and mention some technologies in passing, but it's not so much about specific technologies.
@mmin How about this coffee shop that won’t sell coffee to students for money, only for their data? https://www.npr.org/sections/thesalt/2018/09/29/643386327/no-cash-needed-at-this-cafe-students-pay-the-tab-with-their-personal-data
Yeah, I already had this lecture but the coffee shop case is really interesting. I actually mentioned it to students just yesterday at another session.
@mmin I find it very interesting too. Because it’s potentially just on the cusp of too problematic, but not everyone will worry too much about it.
@mmin Oh, just realized this post is a week old (just got it boosted from someone else). So probably my suggestion is too late! But it’s not so much a black-and-white case I think.