I'm teaching a class on technology ethics from a future perspective next week and want students to discuss real-life cases of tech and ethical questions in small groups. Does anyone have good suggestions for cases, especially beyond obvious ones like Cambridge Analytica?
@mmin China's new Social Credit System would make a good one, especially given its... mixed / unclear usages, and how they may reflect / reinforce biases that block people from social advancement.
There's some scholarship on it, but Planet Money has a good lay approach: https://www.npr.org/sections/money/2018/10/26/661163105/episode-871-blacklisted-in-china
Thanks! That's a really good one. One of the student groups is already doing a bigger group project on it, though, so there might be too much overlap. The article looks interesting!
Deplatforming on privately run social networks as well as "shadow banning." I'm sure you can find several examples across the political spectrum.
Blacklisting of cryptocurrency coins just because they were previously used (typically several transactions/owners ago) in some allegedly questionable transaction.
@mmin @alcinnz maybe the connected gadgets used in investigations/court cases? https://www.cnn.com/2017/03/07/tech/amazon-echo-alexa-bentonville-arkansas-murder-case/index.html or those gadgets calling for help “on their own” https://www.independent.co.uk/life-style/gadgets-and-tech/news/man-beat-girlfriend-murder-threat-alexa-gadget-call-police-google-home-bernalillo-county-sheriff-new-a7835366.html
Here is a site listing companies that have had their data stolen. If you look into some of the interesting cases there, you might find material for your class :)
@mmin the Google+ incident, which exposed data that wasn't meant to be public
@mmin I remember reading about Google getting access to some UK NHS (medical records) data & then abusing the access, e.g. copying the data & accessing far more than was agreed, to such a degree that the NHS shut them off. Can't remember where though :(
Also, Google collecting children's and students' data in a number of schools & a university in California via Google Docs even though they'd agreed not to.
There's too many bad things that Google have done to list them all :(((
@mmin Julian Assange has reported on many of Google's nefarious political activities too.
@mmin Probably a little late, but NFC implants and other medical implants are a good one.
On the one hand, most medical implants are life-saving and not given lightly.
On the other hand, there's growing concerns about security, and about device lifespan and privacy.
And with NFC implants, there's those concerns, plus legal and ethical questions around workplace use (including what happens to people with conditions that make those implants unsafe for them).
@mmin humane tech. Not a case study but relevant
Thanks! Looks interesting and is probably worth a mention.
@mmin I only really know the obvious ones, but it would be really interesting to hear what the students make of the case where FB ran its secret psychological experiment on whether the mood of users could be affected by the types of posts they saw in their timelines (e.g. would showing users negative posts from others affect the 'mood' of the user's own posts, and vice versa).
Thanks! That's a good case. We actually discussed it on the same course two years ago. The only issue is that it leads to a rather black-and-white discussion, with students just condemning FB rather than wrestling with a complex dilemma.
This year I decided to go with a generic list of questionable uses of AI, and children and information technology as the other theme. I'll try to remember to toot back how it goes 😊
@mmin Ahhh that's fair. It was clearly unethical, but the grey area comes from the cleanliness of the results. Without people knowing they're being studied, you don't introduce any biases into subject behaviour, so I guess the question would be whether that lack of ethics is worth getting 'real' results, or maybe whether it would have been okay if FB had used that data to improve the impact of its product on its users... but it makes sense to steer over to questionable uses of AI.
Don't know if you have LinkedIn. Ellen Schuster from Deutsche Welle wrote a good article on that:
We are at community.humanetech.com and just started building out our presence, with more projects to come.
@mmin Thanks for mentioning Awful #AI ! I've added it to https://github.com/engagingspaces/awesome-humane-tech
Scholar Social is meant for: researchers, grad students, librarians, archivists, undergrads, academically inclined high schoolers, educators of all levels, journal editors, research assistants, professors, administrators—anyone involved in academia who is willing to engage with others respectfully.
We strive to be a safe space for queer people and other minorities, recognizing that there can only be academic freedom where the existence and validity of interlocutors' identities is taken as axiomatic.
"A Mastodon profile you can be proud to put on the last slide of a presentation at a conference"
(Participation is, of course, optional)
Scholar Social features a monthly "official" journal club, in which we try to read and comment on a paper of interest.
Any user of Scholar Social can suggest an article by sending the DOI by direct message to @email@example.com and one will be chosen by random lottery on the last day of the month. We ask that you only submit articles that are from *outside* your own field of study to try to ensure that the papers we read are accessible and interesting to non-experts.