“Choice architecture” is not always intended to leave us with much choice

As aspects of society are abstracted, modelled, and digitised, the structures which contain these abstractions are of great importance. This essay will consider the structures that social media platforms can take, how those structures can be manipulated, and the inevitable feedback loop that results from the application of “nudge theory”[1]. From a UX design perspective, this concept can be referred to as “choice architecture”[2]. Whilst the architecture can in some ways be centred on accessibility, it is often used to “nudge” users towards the choices that generate the most engagement. This essay will argue that by optimising a single metric (engagement), choice architecture often blindly pushes users towards outcomes that no single person can predict. In turn, this contradicts the initial purpose of “libertarian paternalism”[10], the philosophy behind “nudge theory”[10].

A common revenue model for many online platforms is the advertising / recommendation model. This also happens to be the primary scenario in which nudge theory is applied. There are four main mechanisms which nudge theory draws upon[2]:

    – Attention Bias
    – Availability Bias
    – Gaze Cueing
    – Social Proof 

Attention Bias varies from individual to individual: each person’s bias determines what they are likely to absorb, and what they are likely to filter out, when presented with information or choices. Traditionally, marketing teams had to compromise on the nuances amongst consumers’ biases, often choosing to target one demographic alone. The capabilities of tracking and ‘Big Data’ then reduced the need for compromise. Companies can now easily form aggregates of demographics, determine the varying likelihood of successful marketing towards each one, and send targeted advertisements to all of them simultaneously. An example of this is given later in the essay (see Step 4).

Availability Bias requires less consideration of individual differences amongst consumers. Instead, it is a tactic in which the perceived availability of a product or event is distorted from its true availability. For example, “frequently reported news stories are often mistaken for being regular occurrences”, or, “commonly associated ideas begin to look like they are connected”[3]. This tactic is used by various journalistic publishers, e-commerce websites, and political campaigns. It can be deployed much more rapidly online: for example, a bot which scrapes the internet for every occurrence of a violent crime committed by a specific demographic (e.g. ‘criminal refugees’ or ‘knife crimes committed by black people’), and shares that type of content alone at a much higher frequency than a human would be able to.
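To make the mechanism concrete, the following is a minimal sketch (in Python) of such an amplification loop. The fetch_new_articles() and repost() helpers are hypothetical stand-ins for a real scraper and a real posting API; the distortion comes purely from reposting only the stories that match a single narrative, at machine speed.

    import time

    # Hypothetical stand-ins for a real scraper and posting API
    # (assumptions for the sketch, not actual services).
    def fetch_new_articles():
        return []  # in practice: RSS feeds, news APIs, scraped pages

    def repost(article):
        print(f"reposting: {article['title']}")

    KEYWORDS = {"knife crime"}  # the single narrative the bot amplifies

    def matches(article):
        text = (article["title"] + " " + article["summary"]).lower()
        return any(keyword in text for keyword in KEYWORDS)

    for _ in range(3):                   # a few polling cycles for illustration
        for article in fetch_new_articles():
            if matches(article):
                repost(article)          # only matching stories are amplified
        time.sleep(1)                    # poll interval; a real bot never tires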

Gaze Cueing / Visual Cueing is an essential feature of UX design, and can be incredibly helpful from an accessibility perspective, exemplified by the use of arrows and shading to add direction and sectioning to large chunks of text. As with most features, though, it also has the potential to be abused. The most obvious example is cited in a report[4] by the U.K. Information Commissioner’s Office (ICO): the visually-implied preference of one choice over the other, such as a consent prompt in which the “Yes” option is made far more prominent than the “No” option.

To most people, this is a fairly obvious attempt at ‘nudging’ users to select the “Yes” option. The ICO report is aimed specifically at children, and argues that children do not have the critical thinking capacity to question such design choices, and thus that it is unethical to use nudge theory in this manner. Rightfully so. Yet this could be taken a step further: even past childhood, not everyone has the same quality of education or learning capability. This example of Visual Cueing is often found on websites which use cookie trackers and need to get your permission before doing so. Trackers are integral to recommendation algorithms, which will be explored later in the essay.

The final main mechanism considered within nudge theory is Social Proof. It can be broken down into sub-categories, but for simplicity it is best summarised as a phenomenon where individuals copy other individuals’ behaviour. One digital manifestation of this, commonly found on Twitter, is the “ghost follower” trend[5]. This is where a business or ‘influencer’ pays someone to programmatically create hundreds, if not thousands, of accounts for the purpose of following the paying business or influencer, to improve their perceived popularity. The larger the follower count, the greater the ‘fear of missing out’ effect other users experience by not also following. To compare with a real-world example: “During the disco era, club owners would often allow lines to grow unnecessarily long outside their clubs. Although there was lots of room inside, the club owners knew that the line would attract more customers.”[5] This is slightly more ethical than “ghost followers”, as the customers are real people with genuine interest. The real-world visualisation of “ghost followers” would be a queue of scarecrows, which is far easier to identify as fake. Thousands of digital “ghost followers” are more time-consuming for humans to verify, although services such as Botometer aim to tackle this problem.
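As an illustration of how such accounts might be flagged, below is a minimal sketch of the kind of heuristics a detection service could apply. The fields and thresholds are assumptions made for the sketch; Botometer’s actual model uses far richer features than these.

    from dataclasses import dataclass

    # Illustrative heuristics only -- the thresholds are assumptions,
    # not Botometer's actual methodology.
    @dataclass
    class Account:
        followers: int
        following: int
        posts: int
        account_age_days: int

    def looks_like_ghost(acc: Account) -> bool:
        """Flag accounts that follow many, post almost nothing, and are newly created."""
        barely_posts = acc.posts < 3
        lopsided_ratio = acc.following > 10 * max(acc.followers, 1)
        very_new = acc.account_age_days < 30
        return barely_posts and lopsided_ratio and very_new

    sample = [Account(0, 1500, 0, 5), Account(240, 310, 87, 900)]
    suspect_share = sum(looks_like_ghost(a) for a in sample) / len(sample)
    print(f"{suspect_share:.0%} of sampled followers look automated")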

When these mechanisms are integrated into algorithmic recommendation processes, they propagate content, products, and services with engagement as the objective function to be maximised. The good-use / bad-use duality is present here too. For example, if five of your friends on Facebook are ‘interested’ in an event, then that event is likely to be recommended to you. Generally this is appreciated, as friends tend to have similar interests. Yet the same feature can be used to disseminate incredibly harmful content, and to provoke disagreements amongst users. As a counter-example, if one of your friends starts following a far-right, fascist Facebook group (perhaps drawn in by “Availability Bias” in the way described previously), and a second friend publicly denounces the first for following such content, they will likely enter a ‘keyboard war’: almost a tennis match of comments, one after the other stating their reasons for being on each ‘side’, and far more performative than an argument would be in real life. This type of interaction counts as high-level engagement, as each user repeatedly restates their stance in measurable units (comments, replies, reactions), which in turn contributes to the attention economy[6]. As the thread of comments on a post grows, and the amount of time users spend on the post increases, a rapidly growing number of people will be recommended that same post in order to maximise engagement. Whether the interaction is good or bad, the algorithm does not care.
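A minimal sketch of this engagement-first ranking is given below. The post fields and weights are assumptions chosen for illustration; real platform rankers weigh hundreds of signals, but the core point holds: the score rises with interaction volume regardless of whether the interaction is a friendly event RSVP or a hostile ‘keyboard war’.

    import math

    # Toy engagement score: comments and reactions count linearly, total dwell
    # time logarithmically. The weights are assumptions, not any platform's.
    def engagement_score(post):
        return (post["comments"] * 3.0
                + post["reactions"] * 1.0
                + math.log1p(post["dwell_seconds"]))

    posts = [
        {"id": "local concert", "comments": 12, "reactions": 80, "dwell_seconds": 400},
        {"id": "political row", "comments": 240, "reactions": 35, "dwell_seconds": 9000},
    ]
    ranked = sorted(posts, key=engagement_score, reverse=True)
    print([p["id"] for p in ranked])  # the argument outranks the concert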


A meme which illustrates the ‘tennis match’ described above[19]

Furthermore, the results of algorithmically sourced recommendations are then assessed. Using an array of psycho-profiling metrics, and correlations found amongst other users, an individual’s personality traits can be inferred, to a degree, from their interactions online. Michal Kosinski has researched this field heavily; research which was intended as a warning[7] was then repurposed by the likes of Cambridge Analytica[11] and other think tanks whose aim was to sway voters politically. Kosinski’s methodology was as follows[7]:

1. Encourage a group of users to take a ‘personality quiz’ (easy enough, as many people find comfort in labelling themselves). Map the results of the quiz onto their interests (which are effortlessly gathered by tracking cookies) to form correlations which can then be extrapolated.

2. On a much larger scale, assess other users’ interests. Compare these interests with the correlations formed previously.

3. Place users into groups based on their interests, and thus their relative personality traits.

4. Send targeted adverts / content to each group. Compare results with a control group.

Step 1 was very easy to achieve. Personality quizzes have been integral to Facebook’s games section for at least a decade. The majority of personality quizzes are based on the (heavily researched and validated[8]) “OCEAN” score: Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. The quizzes on Facebook were often disguised under titles like “Which colour is YOUR personality???”[14]. Popularised even further by the likes of Myers-Briggs, these quizzes were widely taken, and users posted their results on their Facebook profiles, thereby feeding the “Social Proof” mechanism of nudge theory. Facebook has since enforced a policy which places personality quizzes under more scrutiny[9], but it is not explicit about the type of scrutiny. Not to mention that once the personality-to-userbase correlations are on the market, as when the “This Is Your Digital Life” application sold its data to Cambridge Analytica[15], it is very difficult to withdraw that information. It is possible to check whether you were involved in the leak[16], which is a step in the right direction towards transparency about social network structures and their capabilities.
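For a sense of how little machinery Step 1 requires, here is a minimal sketch of scoring an OCEAN-style quiz. The questions, the 1–5 answer scale, and the trait mapping are assumptions for illustration; validated instruments use carefully designed item sets, including reverse-keyed questions.

    # Hypothetical question-to-trait mapping; real Big Five inventories are
    # validated item sets, not five ad-hoc questions.
    QUESTION_TRAITS = {
        "I enjoy trying new things": "openness",
        "I keep my workspace tidy": "conscientiousness",
        "I feel energised around people": "extraversion",
        "I trust others easily": "agreeableness",
        "I worry about small mistakes": "neuroticism",
    }

    def score_quiz(answers):
        """Average the 1-5 ratings per trait and normalise to the 0-1 range."""
        totals = {}
        for question, rating in answers.items():
            trait = QUESTION_TRAITS[question]
            totals.setdefault(trait, []).append(rating)
        return {t: (sum(r) / len(r) - 1) / 4 for t, r in totals.items()}

    profile = score_quiz({question: 4 for question in QUESTION_TRAITS})
    print(profile)  # e.g. {'openness': 0.75, ...} -- joined to page likes in Step 2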

Steps 1 and 2 are the “Big Data” aspect. With advancements in deep learning models and algorithms, previously “unseen” correlations were brought to the surface. While these correlations can be good at recommending products or music to users, they are inherently statistical: they can hold great predictive power, but they are never 100% accurate, and they carry the biases of the data they were learned from. Because the correlations are “unseen”, and therefore unexplained, the algorithms operating on these models are often described as “black box” algorithms[12].
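A minimal sketch of Steps 1 and 2 together is given below, using a tiny synthetic dataset and an ordinary linear regression from scikit-learn. The page-like matrix, quiz scores, and model choice are all assumptions for illustration; Kosinski’s actual studies used millions of real Facebook Likes and far more careful modelling.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Rows are users, columns are page likes (1 = liked). Entirely synthetic.
    likes = np.array([
        [1, 0, 1, 0],   # user A
        [0, 1, 0, 1],   # user B
        [1, 1, 1, 0],   # user C
        [0, 0, 0, 1],   # user D
    ])
    openness = np.array([0.9, 0.3, 0.8, 0.2])  # quiz scores from Step 1

    # Step 1: learn the correlation between likes and a personality trait.
    model = LinearRegression().fit(likes, openness)

    # Step 2: extrapolate to a user who never took the quiz, from likes alone.
    new_user = np.array([[1, 0, 1, 1]])
    print(model.predict(new_user))  # an estimated openness score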

Step 3 is not always carried out by the social media platforms themselves, but often by third-party “audience analysis” companies such as Lotame (whose website has been offline for several months)[13] and PeopleBrowsr[20]. The involvement that social media platforms have prior to this step is the choice architecture of algorithmically suggesting non-vetted applications, where users must “accept” the application’s access to their personal details without being warned of the potential consequences of doing so. Either by capturing user data via non-vetted applications, or by directly asking users for basic details such as email addresses, companies were able to use “Facebook Audience Insights”[17] to access additional details. These include, but are not limited to, relationship status, page likes, frequency of activities, household income estimated via Facebook’s partner Acxiom[17], and predicted lifestyle estimated from Personicx by Acxiom[17][18].

Step 4 is exemplified especially clearly in Michal Kosinski’s research:
The results were similar when the researchers promoted a crossword puzzle app for smartphones with ads that targeted users based on their openness to new things.

People who had been identified as very open were urged to “unleash your creativity” on “an unlimited number” of puzzles. People identified as likely to cling to the familiar were told to “settle in with an all-time favorite.”

Those who saw the ad aimed at their particular level of openness were 30% more likely to download the game than those who didn’t.[10]
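The arithmetic behind that figure is worth spelling out. The absolute download counts below are assumptions chosen for illustration; only the roughly 30% relative lift comes from the study.

    # Made-up absolute counts; only the ~30% relative lift is from the study.
    matched_shown, matched_downloads = 10_000, 130        # ad matched openness level
    mismatched_shown, mismatched_downloads = 10_000, 100  # ad did not match

    matched_rate = matched_downloads / matched_shown
    mismatched_rate = mismatched_downloads / mismatched_shown
    lift = matched_rate / mismatched_rate - 1

    print(f"relative lift: {lift:.0%}")  # 30% more likely to download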
From a research perspective this result is insightful, and a clear example of overcoming Attention Bias. When applied to the vast structures of social media networks, through deceptive apps which harvest this information specifically to sell to political think tanks without user consent, the same result quickly becomes horrific. Overcoming Attention Biases amongst users is ethical when algorithmically recommending trivial content (e.g. music events), but undoubtedly immoral in the case of political advertisements. Individuals should not be taken advantage of, or “nudged”, with respect to their morality or democratic functioning. Facebook recognised this and banned political advertisements after the U.S. Presidential Election on 3rd Nov 2020, but then lifted the ban in the state of Georgia on 15th Dec 2020 (prior to the Senate runoff elections on 5th Jan 2021[21]). The ban returned on the day of the runoff elections, as inevitably ‘Republican politicians and other operatives were using advertising on Facebook to target Georgia voters with misinformation in the final days ahead of the vote’[21].

To conclude, we return to the philosophy behind nudge theory, “Libertarian Paternalism”.

Before Thaler and Sunstein, nudges were often referred to as “Libertarian Paternalism.” But the theory behind it was the same:

  • Libertarian: People should be free to choose.
  • Paternalism: Attempts to guide people to perform a specific action, behave a certain way, or choose a product in line with their own good.[10]

There is a contradiction in this, as Libertarianism implies true free will, without any external forces that might impact someone’s choice. Paternalism, on the other hand, utilises behavioural psychology techniques to usher someone into making a choice. Even if it is “for their own good”, it is hardly of the person’s own free will.

Regardless, the concept of paternalism can still be accepted as aiming to “guide people to choose something in line with their own good”. With this in mind, it is fair to say that nudging users towards an unpredictable, ever-changing, high-engagement topic which has the potential of being political disinformation is not for the user’s own good. Nudging users towards the goals of the highest-paying bidder is not for the user’s own good. When nudging through choice architecture is combined with algorithmic recommendations, with “engagement” the sole metric to be maximised, it cannot be guaranteed that the “nudging” leads to anyone’s good at all. It is the responsibility of the platforms to ensure they are doing everything they can to regulate their objective functions, to make their choice architecture ethical, and to provide compensation when they fall short. The Ada Lovelace Institute is taking very clear steps in assisting platforms and services with assessing their algorithmic systems[22], and hopefully this assistance will be welcomed more often.

References

  1. Marco-Serrano, F. (undated). ‘The nudge and artificial intelligence’ [Online]. Essence Global. Available at: https://www.essenceglobal.com/article/the-nudge-and-artificial-intelligence (accessed 07/01/2021)
  2. Courtney, S. (03/01/2020). ‘What is Nudge Marketing?’ [Online]. Convertize. Available at: https://www.convertize.com/what-is-nudge-marketing/ (accessed 07/01/2021)
  3. (no author). (undated). ‘What is Availability Bias?’ [Online]. Convertize. Available at: https://www.convertize.com/glossary/availability-bias/ (accessed 07/01/2021)
  4. Information Commissioner’s Office. (15/04/2019). ‘Age-appropriate design code’, pg. 67  [Online]. ICO. Available at: https://ico.org.uk/media/about-the-ico/consultations/2614762/age-appropriate-design-code-for-public-consultation.pdf (accessed 07/01/2021)
  5. Ligier, B. (20/01/2020). ‘The Science and Secrets of Social Proof’ [Online]. Convertize. Available at: https://www.convertize.com/social-proof/ (accessed 07/01/2021)
  6. Bhargava, V and Velasquez, M. (06/10/2020). ‘Ethics of the Attention Economy: The Problem of Social Media Addiction’ [Online]. Cambridge University Press. Available at: https://www.cambridge.org/core/journals/business-ethics-quarterly/article/ethics-of-the-attention-economy-the-problem-of-social-media-addiction/1CC67609A12E9A912BB8A291FDFFE799 (accessed 08/01/2021)
  7. Andrews, E. (12/04/2018). ‘The Science Behind Cambridge Analytica: Does psychological profiling work?’ [Online]. Stanford Graduate School of Business. Available at: https://www.gsb.stanford.edu/insights/science-behind-cambridge-analytica-does-psychological-profiling-work (accessed 08/01/2021)
  8. Goldberg, L. R. (1993). ‘The structure of phenotypic personality traits’ [Online]. American Psychologist. Available at: https://doi.apa.org/doiLanding?doi=10.1037%2F0003-066X.48.1.26 (accessed 08/01/2021)
  9. Yurieff, K. (25/04/2019). ‘Facebook is cracking down on personality quizzes’ [Online]. CNN. Available at: https://edition.cnn.com/2019/04/25/tech/facebook-personality-quizzes/index.html (accessed 08/01/2021)
  10. Wintermeier, N. (28/07/2020). ‘Nudge Marketing: From Theory to Practice’ [Online]. CXL. Available at: https://cxl.com/blog/nudge-marketing/ (accessed 08/01/2021)
  11. Brodwin, E. (12/03/2018). ‘Here’s the personality test Cambridge Analytica has Facebook users take’ [Online]. Business Insider. Available at: https://www.businessinsider.com/facebook-personality-test-cambridge-analytica-data-trump-election-2018-3?op=1&r=US&IR=T (accessed 08/01/2021)
  12. Simonite, T. (18/10/2017). ‘AI Experts Want to End “Black Box” Algorithms in Government’ [Online]. Wired. Available at: https://www.wired.com/story/ai-experts-want-to-end-black-box-algorithms-in-government/ (accessed 08/01/2021)
  13. (no author). (11/02/2018). ‘Lotame’ [Online]. SaaSworthy. Available at: https://www.saasworthy.com/product/lotame (accessed 08/01/2021)
  14. Quiz Insights. (23/07/2016). ‘Quiz: What Color is YOUR Personality’ [Online]. Facebook. Available at: https://www.facebook.com/quizinsight/posts/859678477471555 (accessed 08/01/2021)
  15. Kalvapalle, R. (13/04/2018). ‘Facebook app “This Is Your Digital Life” collected users’ direct messages: report’ [Online]. Global News. Available at: https://globalnews.ca/news/4143810/aleksandr-kogan-this-is-your-digital-life-messages/ (accessed 08/01/2021)
  16. Ducklin, P. (10/04/2018). ‘How to check if your Facebook data was shared with Cambridge Analytica’ [Online]. Sophos. Available at: https://nakedsecurity.sophos.com/2018/04/10/how-to-check-if-your-facebook-data-was-shared-with-cambridge-analytica/ (accessed 08/01/2021)
  17. Hines, K. (25/01/2016). ‘How to Use Facebook Audience Insights to Unlock Buyer Personas’ [Online]. Sales Force. Available at: https://www.salesforce.com/ca/blog/2016/01/facebook-buyer-personas.html (accessed 08/01/2021)
  18. Personicx® by Acxiom™ [Online]. Available at: https://personicx.co.uk/personicx.html (accessed 08/01/2021)
  19. @zero.emission.memes.2025 . (08/01/2021). Meme about online discourse. Instagram. Available at: https://www.instagram.com/p/CJx9DjHlNIT/?igshid=lwdhpyw33h0h (accessed 08/01/2021)
  20. PeopleBrowsr [Online]. Available at: https://www.peoplebrowsr.com/ (accessed 08/01/2021)
  21. Paul, K. (06/01/2021). ‘Facebook restarts political ad ban in Georgia following runoff votes’ [Online]. The Guardian. Available at: https://www.theguardian.com/technology/2021/jan/05/facebook-georgia-political-advertising-ban (accessed 08/01/2021)
  22. Ada Lovelace Institute. ‘Examining the Black Box’ [Online]. Available at: https://www.adalovelaceinstitute.org/wp-content/uploads/2020/04/Ada-Lovelace-Institute-DataKind-UK-Examining-the-Black-Box-Report-2020.pdf (accessed 08/01/2021)
