As we didn’t have a tutorial last week, I took the initiative to research some of the debate topics set for this week’s tutorial, and the one I found most interesting was the Facebook debate:
- Whether Facebook should be experimenting with people’s moods without their consent.
- Whether Facebook, Google, Twitter, Apple and Amazon have too much power (or not).
This topic fascinated me because, like many people my age, I’m quite the avid Facebook aficionado, but I had never really considered the experimental and curatorial aspect of my News Feed and timeline.
According to Gillespie, “shifts in emotion of those around us can shift our own moods, even online” (Gillespie 2014). I would say that this is strikingly true. I deleted my Facebook account for a whole year during my HSC; not simply because I was afraid I would get easily distracted, but mainly because of how depressed and stressed I got when I logged on to Facebook and saw people’s posts gloating about how much work they had done or, more blatantly, how depressed they were throughout the HSC. I find this even more true when I consider that every uni exam period since my first semester at UNSW, I have gone on a Facebook detox to protect myself from added emotional stress during an already emotional period. So given this, should Facebook be experimenting with people’s moods without their consent? I would, for the most part, answer this in the negative.
Clearly the biggest ethical concern is the fact that the subjects of this experiment were humans who did not consent (Gillespie 2014). As a law student who has spent the last two weeks studying consent from a Criminal and Contract Law perspective, this is a concern I gravely agree with – for where no consent is sought, the practice can’t be said to be morally acceptable, let alone ethical!
Yes, the Facebook News Feed is already curated, based on a complex algorithm (Gillespie 2014). Does this bother me? Well, no. Sure, it presents some “deeper discomfort about an information environment where the content is ours but the selection is theirs”, but I don’t think this is necessarily a bad thing (Gillespie 2014). When one considers that the posts appearing on our News Feeds are those we are statistically likely to find interesting, and that we often do find them interesting, the notion that Facebook has too much power is weakened. As Herrera (2014) notes, “the more popular a piece of content posted in your network becomes, the more likely it is to spill into your News Feed; the friends and Pages you interact most with are the ones you’ll see most frequently”. Personally, I wouldn’t care what someone I met three years ago at camp said about their day, but I would care about, and would like to see, a photo that my closest friends post, and Facebook recognises this and caters to it. What I have a problem with is not the power vested in Facebook, but when Facebook transgresses that power by conducting non-consensual experiments, especially those involving emotions.
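The principle Herrera describes can be illustrated with a toy sketch. To be clear, this is not Facebook’s actual algorithm (which is proprietary and far more complex); the scoring function, weights, and data below are all invented for illustration of the two signals Herrera names: how often you interact with the author, and how popular the post is in your network.

```python
# Toy illustration (NOT Facebook's real algorithm) of Herrera's two signals:
# affinity with the post's author, and the post's popularity in your network.

def rank_feed(posts, interaction_counts):
    """Return posts sorted so higher-affinity, more popular posts come first.

    posts: list of dicts with 'author' and 'likes_in_network' keys.
    interaction_counts: how often the user interacts with each friend.
    (All names and weights here are hypothetical.)
    """
    def score(post):
        affinity = interaction_counts.get(post["author"], 0)
        popularity = post["likes_in_network"]
        return affinity * 2 + popularity  # arbitrary illustrative weights

    return sorted(posts, key=score, reverse=True)

posts = [
    {"author": "camp acquaintance", "likes_in_network": 3},
    {"author": "close friend", "likes_in_network": 1},
    {"author": "close friend", "likes_in_network": 40},
]
interactions = {"close friend": 25, "camp acquaintance": 0}

for post in rank_feed(posts, interactions):
    print(post["author"], post["likes_in_network"])
```

Under these made-up weights, both of the close friend’s posts outrank the more “liked” acquaintance’s post, which mirrors the experience described above: the people you interact with most dominate what you see.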
Facebook is a platform “which we can’t stop feeding, and obsessively tracks our every online movement” – yes, I would agree with that. Is it a platform that needs to be taken down a notch in terms of the power imbalance it creates? I would say no. Sure, there are ads on our News Feeds and that gets irritating, but in a world where our media usage has shifted to social media, how could you expect otherwise? Do I, however, condone experiments conducted on its users as the subject matter? No, this is indisputably unethical.
Gillespie, T (2014) ‘Facebook’s algorithm — why our assumptions are wrong, and our concerns are right’, Culture Digitally, July 4, <http://culturedigitally.org/2014/07/facebooks-algorithm-why-our-assumptions-are-wrong-and-our-concerns-are-right/>
Herrera, T (2014) ‘What Facebook doesn’t show you’, The Washington Post, August 18, <http://www.washingtonpost.com/news/the-intersect/wp/2014/08/18/what-facebook-doesnt-show-you/>