Technology and its Privacy Implications

Technologies have always created societal concerns about privacy, surveillance and sousveillance. This is not simply an issue that affected society in the past, as Baran (1967) suggested; it will continue to have implications for society, the government and the individual. While Baran's vision of a "centralised computing system", akin to today's cloud computing, did eventually emerge, he recognised that individual privacy under this technology is fragile and that "nothing more than trust…stands in the way of a would-be eavesdropper". Morozov (2013) acknowledges that the global spread of digital networks has decreased the cost and increased the efficiency of communication, yet he observes that this has created affordances for government "surveillance", accelerated by world events such as the 9/11 attacks and the rise of data-collecting corporations such as Google and Facebook.

When individuals collectively embraced these new technologies, particularly Facebook and other data-collecting websites, there was little awareness of their capacity to use an individual's personal information for marketing and other purposes. Baran notes that government agencies can co-operate with these data-collecting companies to acquire personal data for "security purposes". Further, technologies which hold large amounts of personal data can monitor an individual so closely that they can actively prevent or encourage individual action. For instance, in the lecture Andrew Murphie spoke about a program installed in Google Glass that warns the individual if they are about to eat a chocolate bar and exceed their prescribed dietary intake. Similarly, Morozov gives the example of Google Glass "pinging" an individual if they are about to do something "stupid, unhealthy or unsound".

Conversely, technology and data allow the government to remain distant from its citizens. O'Reilly terms this issue "algorithmic regulation", whereby information-rich democracies use technology to solve public problems without having to explain or justify their decisions to individuals, instead using data to create an "irresistible nudge" that appeals to citizens. This means, however, that individuals are unaware of the processes involved and of how social institutions work. I do not agree with O'Reilly's notion of algorithmic regulation, because it removes accountability from government when accountability mechanisms already exist, such as question time in parliament and the fourth estate, particularly journalists and programs like Media Watch.

Furthermore, data-collecting technologies such as shopping websites, online surveys and credit cards are so integrated into society that they normalise surveillance, weaving it into our everyday life. We do not consciously notice that data is being collected when we buy items online, yet our digital footprint may mean similar items are marketed to us through ads as we browse the internet, because the browser collects our data. Modern institutions have, however, also used the collection of personal data for the benefit of society. For instance, police can browse through an individual's digital tracks and online history to incriminate a potential suspect, and insurance companies can tailor cost-saving programs to benefit individuals.

Nevertheless, technology and its privacy implications can both support and undermine democracy. It supports democracy in that it makes the government accountable through mechanisms such as online forums, the leaking of government failures that were previously hidden from the public spotlight, and journalistic investigations. Yet it also undermines democracy by revealing individuals' personal data through platforms such as Facebook and web browsers, which government can acquire, and through the implementation of "algorithmic regulation", where the government can hide its political motivations by creating an impersonal barrier between the State and the individual.

Models and Data

Paul Edwards's article 'A Vast Machine' explores the concepts of, and relationships between, models, data, global data, data friction and infrastructural globalism. While some of these terms seem complex and difficult to grasp, by coining them Edwards enables us to understand common phenomena that arise in the process of constructing models and data.

Firstly, Edwards bluntly asserts that 'without models, there are no data'. It is the aggregation of data perceived as a whole that creates a model; data is simple or abstract information that is collected and analysed. As sociologist Danah Boyd acknowledges, we live in a world where data is everywhere. She likens the flow of data to a stream to which we are constantly adding, and from which we are constantly consuming and redirecting data, similar to the idea of a stream of consciousness. With the prevalence of the Internet today, the stream of data becomes global, as anyone can add, consume or redirect data anywhere, at any time. Yet what do we make of this continuous flow of data? If we return to Edwards's earlier assertion, models make sense of data. This data has been carefully selected from the perpetual stream which the Internet provides, termed 'global data'. His assertion that 'models… are filled with data – data that bind the models to measurable realities' reinforces the notion that models help us make sense of the world, and of the seemingly abstract stream of data that constantly surrounds us. The Internet magnifies this complexity on a global scale. As with archives, data can be sourced from the past and present, and helps us model the future.

However, the data that is collected and converged into models may not come from valid or reliable sources. This links to the idea of data friction, which Edwards describes as the effort involved in making global data. The notion of 'friction' refers to resistance or conflict between two entities, which people may encounter in making models. Some of these frictions may involve the accuracy of measurement tools used in the past, faked data logbooks, misapplied standards or contradictory data. An example is the model of climate change, where different instruments were used to measure air, sea and ground temperatures over time. This challenges the validity of the overall model of past, present and (potential) future climate change, because it converges different forms of data collected with no single instrument, and the integration of such data may therefore be inaccurate.
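This kind of friction can be made concrete with a small, entirely hypothetical sketch (the numbers and the 0.5-degree offset are invented for illustration, not drawn from Edwards): when two instruments with different calibrations are naively merged into one record, the merge itself fabricates a trend.

```python
# Hypothetical illustration of data friction: two instruments measure the
# same stable temperature, but the newer one reads 0.5 degrees too high.
# Naively concatenating their records fabricates a warming "jump" in 1960.

old_instrument = {year: 15.0 for year in range(1950, 1960)}   # true value: 15.0
new_instrument = {year: 15.5 for year in range(1960, 1970)}   # same climate, +0.5 bias

naive_record = {**old_instrument, **new_instrument}

# The apparent change is pure instrument bias, not climate.
apparent_jump = naive_record[1960] - naive_record[1959]
print(apparent_jump)  # 0.5

# Reducing the friction means homogenising the series before merging.
calibrated = {y: (t - 0.5 if y >= 1960 else t) for y, t in naive_record.items()}
print(calibrated[1960] - calibrated[1959])  # 0.0
```

The point is Edwards's: the work of turning heterogeneous measurements into global data is exactly this kind of calibration and reconciliation, repeated at enormous scale.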

Another issue Edwards raises is infrastructural globalism. This refers to 'how the building of technical systems for gathering global data helped to create global institutions and ways of thinking globally'. While we may simply think of infrastructural globalism as constructs which allow us to attain knowledge on a global scale, Edwards specifically names institutions such as the World Meteorological Organisation and the Intergovernmental Panel on Climate Change as global infrastructures. However, there are infrastructures which operate on a smaller scale and may not necessarily belong to a physical place. For example, weblogs and 'citizen science' websites are also infrastructures which anyone, as a global digital citizen, can access.

In essence, Edwards reinforces that models are aggregations of data that help us make sense of the reality we live in. He acknowledges that this collection of data may not be sound, and that the 'stream' from which we source the data may be tainted by outdated instruments and methods. Even so, models are the way in which we currently predict future changes, reinforced in his assertion that 'we want to know about the past, what we hope to discover there'. So while we may not have perfect models to help us understand the world, we can select data that is valid and reliable in order to reduce data friction and build a global infrastructure which explains worldly phenomena with reasonable accuracy.

Visualisations in Other Contexts

The magic of visualisations is that each one is unique in how it portrays its intended message. They can be data-based, informational, scientific, or simple yet profound. They make the invisible visible and allow the social body to empathise with and understand the issue the author intends to bring to light. As the age-old adage says, 'a picture is worth a thousand words'.

Other contexts, such as those of a scientific, social or political/controversial nature, can be placed on society's agenda given an effective visualisation and appropriate media coverage. As tempting as all the links on the course outline look, we were told to look at a few links we found interesting and to answer the questions provided. On a side note, wouldn't it be a lot more engaging and interactive if the links were made into a visualisation and arranged under headings on a huge mind map? Putting what we learn into practice! All thoughts aside, the links chosen (and discussed below) are 'climate change deniers versus the consensus' and 'the 16 best science visualisations of 2011'.

Climate Change Deniers versus the Consensus

This scientific, data-based visualisation explores both sides of the climate-change debate by placing opposing arguments on different sides of the page, with a somewhat 'neutral' scientific diagram between them to support the writer's viewpoint. Graphs are imperative in helping readers understand complex data visually. For example, the famous "hockey stick" graph popularised by Al Gore depicts rising global temperatures, and its distinct shape reminds viewers of its significance.

The 16 best science visualisations of 2011

The above website recognises the best visualisations from the 2011 International Science and Engineering Challenge. Visualisations do not have to be literally informative per se; they can simply describe through colour and perspective. Personally, my favourite visualisation on the website is 'cucumber skin barbs', because it is a fruit we are all familiar with, yet we never notice the immense detail in the tiny barbs on its skin. Technology has heightened our vision to a whole new level, allowing us to examine microscopic elements in detail.

So visualisations do not have to be heavily data-based or literally informative; they can be simple, profound or even just pleasing to look at. Ultimately, though, the best visualisations evoke emotion and invite viewers to challenge their perception of the world.


A visualisation is the representation of data in an accessible and specific way. It makes the 'invisible' visible and allows us to detect patterns which would normally go unnoticed in daily life. For example, one website creates visualisations of trends, such as 'the largest web page on the Internet: 7 billion people on one page'. Even with a small human stick figure depicting each Internet user, the visualisation would be 1.6 km high and 250 m wide. Visualisations help individuals grasp the magnitude of data and statistics in a way that would remain undetected if they were given as plain figures.
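A quick back-of-the-envelope check shows the stated dimensions are plausible (the per-figure size below is derived from the page dimensions, not taken from the website itself):

```python
# Sanity check: 7 billion stick figures on a page roughly 1.6 km by 250 m.
page_area_m2 = 1600 * 250            # 400,000 square metres
figures = 7_000_000_000

# Convert the area available per figure from m^2 to mm^2.
area_per_figure_mm2 = page_area_m2 / figures * 1_000_000
side_mm = area_per_figure_mm2 ** 0.5

print(round(area_per_figure_mm2, 1))  # ~57.1 mm^2 per figure
print(round(side_mm, 1))              # ~7.6 mm square: a plausible tiny icon
```

So each stick figure would occupy a square under a centimetre wide, which is exactly the kind of magnitude a reader cannot intuit from the raw number "7 billion" alone.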

However, visualisations do not necessarily incorporate tangible facts and data per se. The simple placement of a large plate of peppers next to a small handful of hot chips, to depict the enormous calorie content of fast food, effectively conveys the same message without the use of numbers and figures. Further, this simple portrayal of facts evokes emotion, because viewers can relate to it and understand how it directly affects them.

Yet with the endless bombardment of images we experience in daily life, visualisations may not necessarily evoke strong emotions in us. The 'excessive mobilisation of public space' (Virilio 2013, p. 90), in addition to the busyness of our lives, has caused images to seem fleeting and unmemorable. Moreover, the emergence of the Internet has provided a public place where huge amounts of advertising fill the empty spaces of websites and webpages. As advertising on the Internet has gone viral, individuals are becoming increasingly immune to the unique charms of images and visualisations.

Nevertheless, image bombardment also occurs in non-technological media. As we walk around shopping centres or take a bus to university, we rarely stop to look at a striking image. Although a visualisation may catch our eye for an instant, we prioritise the urgency of our own lives. As Virilio (2013) asserts, "it is becoming hard, even impossible, to believe in the stability of the real, in our ability to pin down a visible that never stops vanishing…giving way to the instability of a public image that has become omnipresent" (pp. 90-91). Society no longer has time to stop and appreciate the profundity of visuals; the act of experiencing images has instead become a hobby, such as photography. This suggests we only stop and enjoy visualisations when we put time aside, or consciously decide, to do so.

Another interesting point raised by Virilio is that the dispossession of sight is due to the "growing ascendancy" of image and sound. While image and sound are somewhat controlled in the public space by government regulations, individuals are still exposed to visuals that "bombard our imagination". We begin to wonder whether we have the right of freedom of perception, or a 'right to blindness', as Virilio suggests.

In essence, visualisations may be simple or complex and can perform a variety of functions, from a dash depicting movement to a visual representation of the average journey time for Parisian commuters. They make the invisible ‘visible’ and create meaning for audiences in a whole new dimension. Therefore, visualisations are essential tools for individuals to understand concepts and relationships which would normally be harder to detect in society.


With the huge expansion of online publications across the World Wide Web, we realise the implications this has for the legality, discretion, security and safety of society. WikiLeaks is an example of an organisation which allows us to examine the dangerous nature of online publishing and the boundary between the public's right to know and institutionalised privacy.

First, WikiLeaks is an international online non-profit organisation, founded in 2006, that collects classified data from anonymous sources. The organisation offers an extremely secure mode of data submission through a complex system, such that whistleblowers' identities remain protected. Julian Assange, the co-founder of WikiLeaks, described it as a way to "know to an unprecedented level what government is doing, and to let us co-operate with each other to hold repressive governments and repressive corporations to account" (2011). Essentially, WikiLeaks is a high-security online archive that focuses on government accountability. It acknowledges the significant discrepancy between the power and privacy of individuals and governments.

The rise of Internet publishing allowed WikiLeaks to flourish, and many government secrets were leaked by anonymous sources. We begin to notice the implications this has for the welfare of individuals. One example brought into the public light was video footage of the US military's unjust treatment of civilians in the Middle East. Through these modes of publishing, government secrets are exposed and the public has the power to voice uproar and dissent against the horrific malpractices of the US military. Further, it allows the public to realise the extent of discretion the US government holds and the enormous amount of information withheld from them. The public become aware of how little they know about their government and whether classified information directly affects them.

However, if a whistleblower's identity were revealed, they may face punishment unseen by the public: they may be locked in a cell, interrogated, beaten, tortured and deprived of their rights by the US government, with no justification or protection. It is thus extremely dangerous for whistleblowers to release information if their identities are leaked.

In addition, the exposure of government secrets may harm the welfare of other societies and of individuals in other countries. If US secrets regarding spending on the Iraq War and details of weapons and attacks are leaked, Iraqi extremists can access this information (WikiLeaks is an online organisation: ANYONE can access it) and harm civilians in light of this knowledge. The welfare of the institution itself is exposed, leading to many insecurities and potential terrorist threats.

Conversely, the publishing of classified information brings powerful institutions under the public eye. In some ways, WikiLeaks adopts the media's role as the fourth estate, whereby the actions of powerful institutions are publicly followed and documented. While Australia supports the values of constitutional democracy, the rule of law and the separation of powers, there is a huge imbalance of power in favour of the Executive. WikiLeaks acknowledges this imbalance and aims to hold the Executive accountable by bringing its discreet actions under the public eye. Given the enormous costs and effort involved in investigative journalism, it is far easier to publish information from a reliable source (WikiLeaks implements procedures to ensure the reliability of documents) than to spend months investigating classified government information as an outsider, for whom acquiring 'insider information' would be extremely difficult.

Nevertheless, the current situation of WikiLeaks and Julian Assange speaks volumes about how power is skewed in favour of powerful government institutions. Julian Assange is currently seeking asylum in the Ecuadorian Embassy after losing his legal attempts to avoid extradition to Sweden, where he faces allegations of rape and sexual assault. Over 1.7 million US diplomatic records were leaked for the commons to view, yet it seems the only charges laid against him are for rape and sexual assault. It is probable that if the US Government laid hands on Assange, he would be subject to huge abuses of power and deprivation of his human rights.

So while WikiLeaks adopts the role of the fourth estate, holds powerful institutions accountable and brings discreet government actions into the public sphere, the security of whistleblowers, governments and individuals is compromised. Even so, plenty of government secrets remain hidden, and it is debatable whether they should be known to the public. Yet at the end of the day, there will always be a huge imbalance of power towards those with status and money; the public can only hope they will be responsible and accountable for their actions. As Uncle Ben says in Spider-Man,

“With great power comes great responsibility”


Archive: an abstract or material object used to reconstruct and store memories of the past, build the present, and lay the foundations for the future. This is the definition in my mind when I think of the term. Sometimes there is an urge to delve into the past, to rearrange and organise how you perceive what has already happened. In his book Archive Fever, Jacques Derrida (1994, p. 91) explains archive fever as "a compulsive, repetitive, and nostalgic desire for the archive, an irrepressible desire to return to the origin, a homesickness, a nostalgia for the return to the most archaic place of absolute commencement". Derrida encapsulates this feeling of sentimentality towards archives. With that in mind, let us examine the social, technological and other thought-provoking implications of archives.

As a child, my personal interactions with the world created an abstract archive for future reflection. In short, my mind was a storage space in which past experiences dwelt, to be drawn upon whenever I felt the need to revisit the past, or for future reference. For example, when I was young I touched the surface of a steaming iron out of curiosity. The intense searing pain was imprinted on my mind, and since then I have known not to touch hot objects. It is the dynamic nature of media archaeology that allows archives (both mental and technological) to exist through space and time.

As I matured, I learnt to physically document my memories and streams of consciousness through scrapbooking and photo albums. This material form of archiving allowed my memories to materialise into something visual. Further, as the Internet grew in popularity and scope during my years in high school, I came to see archiving as an online database, where I would pore over chat histories with my crush in the hope they had reciprocal feelings for me. Yet, ironically, when I think of archives now, I see them in a non-technological way. At my workplace (I work as a legal secretary), there is a narrow, dimly-lit room upstairs with a rickety ladder and rows upon rows of dusty old boxes. The flickering light hangs off its wires and the enclosed space smells of rat droppings. That so-called space is the archiving room.


The archiving room in my workplace

Archiving is the job that nobody wants to do. It requires heavy, file-packed boxes to be carried upstairs and up the ladder into the archiving room, and dusty ancient boxes to be brought downstairs to be destroyed. Parikka's assertion that "we tend to think of archives as slightly obsolete and abandoned places…swallowed up in the dusty corridors of bureaucracy, information management and organisational logic" (Parikka 2013, p. 1) strongly resonates with me.

However, we must recognise that these physical archives are not the only form of archiving. The Internet is an infinite, timeless space where past, present and future archives are stored. It is unique because it showcases 'real-time' archiving and governs the way we live. It is true that "we've all become accidental archivists" (Ogle 2010), as we publish content through tweeting, commenting and photo sharing, with no conscious thought that an online database is keeping track of our digital footprint. Ernst's view of media archaeology is less concerned with its societal implications than with the machine that formulates spaces to store "software-based cultural memory". He draws on the notion of media artefacts as "an archive of cultural engineering by its very material fabrication—a kind of frozen media knowledge that…is waiting to be unfrozen, liquefied". An example of this is the 'archive' button in Gmail, which gives users the option of preserving emails to be retrieved later, as opposed to permanently deleting them. However, Ernst's views have been critiqued for placing an unnecessary emphasis on the apparatus, which risks separating the machine from human interactions. Yet we must acknowledge that with the microtemporal nature of the Internet, "the length of storage is becoming increasingly more short-term", and thus conscious efforts must be made for media archives to endure longer periods of time.

An example of deliberate archiving built to endure the constraints of time is the Apartheid Archive website. It is a space where South Africans can publish their experiences of everyday life during the apartheid era, so that the future actions of society will not repeat that racist and discriminatory period. This illustrates the temporal capacity of archives to store information which resonates through past, present and future.

I also really liked the Beating the Odds initiative set up by the ABC, where videos documenting the problems of low-socioeconomic communities (e.g. social exclusion, lack of housing, unemployment, discrimination, crime, family breakdown, neglect) were collected and published on its website. This creative form of archiving ideally raises viewers' awareness of problems outside the wealth and comfort of their own society, spurring them to act and help those less fortunate.

So, in a nutshell, this week's readings challenged me to consider archiving not just as a space where digital and physical data is stored, but also in terms of its temporal aspects, its huge technological and social importance in society, and its resonance through time.

Of Books and Assemblages

The invention of the printing press in the 15th century brought together a complex assemblage of components, such as movable type, moulded copper parts and oil-based ink. It revolutionised how society published material and expressed ideas, and Gutenberg's movable type allowed for the mass printing of books. The book is itself an assemblage: it consists of a cover, prologues, prefaces, glossaries, acknowledgements, chapters, sub-chapters, headings, paragraphs, sentences, epilogues, blurbs and indexes, to name a few. The transition from scroll to codex enabled a more organised, open kind of arrangement and segmented the book, so individuals could bookmark, or read and re-read, desired parts of the text. This mode of organisation made reading text an effortless, seamless and practical process.

Assemblage does not apply only to books. Bruno Latour (1980) theorised that many human and non-human components constitute a social assemblage, and examined the relational ties between the 'actors'. He named this idea actor-network theory, also known as 'ANT'. The most controversial aspect of his research was his treatment of animate and inanimate objects as equal.

Below is an example of how Latour's actor-network theory plays out in the assemblage of a book.


In partaking in this simple exercise of brainstorming the animate and inanimate actors in a network, I realised that the network isn’t simply one big network, but an infinite network of networks, which could look something like this:


I only documented a few of the aspects involved in the production of a book and elaborated on those. If every network involved were mapped out in detail, it would create, I daresay, an infinite network with plenty of intricate ties between the actants, both human and non-human.

While I was mind-mapping the network, I discovered a few things about Latour’s Actor-Network Theory:

  1. Networks are of varying sizes. For instance, the printing press (as an actant of the book) is a huge network which includes editors, printers, publishers, ink, moving parts, copiers, paper, etc. On the other hand, chapters are also actants, yet they have a minor function in comparison.
  2. Networks overlap. Paper is a component considered an actant in the printing of the book and also in the process of designers sketching illustrations for it. The repetition of materials was evident when I was drafting the network of networks in a book.
  3. I do not agree with Latour's actor-network theory. Latour theorised that human and non-human actants are equal. I disagree with this crucial aspect of his theory, because humans are the driving force behind the non-human actants. Without humans to manage and operate them, inanimate objects are not functional. For instance, a sheet of paper by itself does not contribute to the making of a book; it is only functional if there is a person to feed it into the printer, add text to its surface or draw a picture.
  4. Leading on from the previous point, I do not think all actants are equal even within the subgroups of animate and inanimate. An example regarding inanimate objects is the relative essentiality of ink and book covers. Evidently, ink is more important than the cover of a book; in fact, ink is required for the title to be printed on the cover. A book can function without its cover, but there would be no such thing as a book without ink: it would just be a series of blank pages. Therefore, I do not agree with Latour's theory. I do not dispute that all components contribute to the making of a book, but it seems to me that some actants have more important roles than others.
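The first two observations can be sketched in a few lines of code (the actants listed below are my own illustrative choices, not Latour's): each actant is itself a sub-network, the sub-networks vary in size, and they overlap where they share members.

```python
# A hypothetical sketch of the "network of networks" idea: each actant in the
# book network is itself a network, and some actants (like paper) recur
# across sub-networks.

book_network = {
    "printing press": {"editors", "printers", "ink", "paper", "moving parts"},
    "illustration":   {"designers", "paper", "pencils"},
    "chapters":       {"headings", "paragraphs"},
}

# Observation 1: sub-networks are of varying sizes.
sizes = {actant: len(members) for actant, members in book_network.items()}
print(sizes)  # {'printing press': 5, 'illustration': 3, 'chapters': 2}

# Observation 2: networks overlap where they share actants.
overlap = book_network["printing press"] & book_network["illustration"]
print(overlap)  # {'paper'}
```

Mapping every sub-network's members as networks in their own right would recurse indefinitely, which is exactly the "infinite network of networks" noted above.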