The New Literacy (Reading 14)

I recall seeing a post by one of my more conservative friends on Facebook about an article he had published. It was an opinion piece from his local newspaper, in some small Kentucky town, arguing against offering computer science as a foreign language. It was probably the first political agreement we had. How was a computer program at all analogous to a foreign language? I always thought learning new languages opens up your world, and that it’s best to teach kids when they are youngest, when they are incredibly adept at languages. Sure, programming can give you a different way of looking at problems, but it rests on a few key concepts with practical effects, whereas language underlies entire cultures. The NPR article mentions legislation in Florida where kids can choose to take computer science if they are performing poorly in foreign languages. That approach does not do justice to either programming or languages if they are both pushed to the side as a mandatory extracurricular, some extra thing to try if you’re good at it. Also, what is so great about learning CS if it just expands the reaches of capitalism, teaching kids how to program more money-printing media machines? I could see how all the emphasis on coding could look like a nightmare to a small, traditional town in Kentucky. I highly value languages and literatures, so the surprising confrontation between language and the field I had majored in had me leaning right on this particular issue, longing for the days when Jesuit priests administered slaps with rulers until their students mastered Latin declensions.

I have now had some experience teaching programming languages to students, which is just about the only thing I would be qualified to teach, and have begun to change my tune. I can sympathize with the teachers, no longer prone to hit students with rulers, who thirst for the days when students actually jumped on the concepts presented. The New York Times articles for this week show how students will actually be motivated to do much more work for a programming class in which they can play a game they created. After teaching for just a short time, I can appreciate the simple joy that would offer an exhausted classroom leader. I was also amazed at what a young age students could comprehend sophisticated programming concepts, once those had been presented to them through a simple format like blocks. I do not buy for a second the research by Dehnadi and Bornat into why certain individuals cannot learn to program, despite their claims that they are not classist. The logic and structure are all there within the software we use; it is just a matter of how we present them. No number of tests could conclusively show that certain individuals could never grasp those programs, no matter how the classroom leader presents them. Young children, whose minds are so open and receptive, could pick up the sophisticated concepts they refer to as three ‘hurdles’ within an afternoon, never putting a name to what they have done. So I believe early exposure to programming is great, especially seeing how it brings new hope to certain communities (like the company in Pikeville, which I will certainly look into working with). Still, we cannot expect computer science to click with everyone and become a new type of literacy. Despite how sophisticated it is, we have only had the current framework of CS for about 50 years (starting with C? P. Bui will probably disagree with me), but there are epochs of varied cultural history for students to explore.
If some student goes off and reads or paints during AP Java, don’t shove programming in their face. Expose them to the potential and let them make what they will of it. Many of the greatest geniuses couldn’t think within the established framework and had to make their own way, and that might someday include the framework of software.

The Legal Fiction of Copyright (Reading 13)

Joyce writes in Ulysses that paternity is a legal fiction; it is reconstituted, unlike the physical bond of maternity, to maintain cultural standards such as inheritance and lineage. We could make a similar observation about our copyright laws. Ownership is not a fact about our social functions but a useful fiction, one that we construct to ensure certain necessary institutions run smoothly. Just because I do not see copyright as essential does not mean I am advocating that we completely abolish ownership of creative content. After all, if we had no ownership of ideas or media, there would be no capitalist incentive to create anything imaginative at all. Never mind preserving culture, as the Atlantic writer Benj Edwards was very concerned about in his article on anti-circumvention and archives; we would not have any cultural production to speak of. At least, copyright laws are necessary if we want to see cultural production within a capitalist system. As much as we talk about how arts and culture engender important values about how to live and love others, the arts can only exist by somehow gratifying the bourgeoisie, sitting alongside the pastry chef in that regard.

Given that these ambiguous, constructed regulations on copyright are necessary for any artistic production whatsoever, we have to ask what the proper balance is between capitalist incentive and general access to cultural items. Archives, for instance, are a major weak point for the copyright system. Even without making all content universally available, scholars and interested citizens require access to a wide variety of media sources to understand holistic patterns or ideas within a generation of creative production. The Atlantic article mentioned that the anti-circumvention laws had been designed around DVDs, hoping to prevent their wide digital replication. Now we have content that is purely digital, whether games, a slideshow, or just a creatively designed website. Archivists should be able to preserve such content, which is more voluminous than perhaps any previous type of medium, for future study. I do hope we can somehow perform archival work on the internet, although I am not sure to what extent it is possible. Even a small-time website could be updated and revised more times than any major novel. I am increasingly worried that our channels for media are driven by capitalism and passing fads rather than respect for any tradition. Netflix is great if you want to see the show everyone is posting about on Facebook, but it left behind true film-lovers long ago.

Autonomous Vehicles: Building Utopias (Reading 12)

My father and his closest friend from work periodically latch on to some radical new technology and make major changes to introduce it. Adapting to new technologies is a form of recreation for them. They were the first people I know to abandon standard phone lines in their homes for Google Fi, and they seem to be competing over how many Bluetooth devices they can carry on their person. Partly their interest in technology stems from their careers, both overseeing some R&D portion of a technology company. They have a vested interest in the next big innovation in tech. However, speculating about technology predominates most of their conversations, not just the ones dealing with business. They get the same kind of thrill two like-minded people might have in discussing politics or gossip. In the modern day, technology is not just changing the structure of the factory or solving some common household problems with new appliances. Software technology has reached every aspect of our daily life, and we can all witness how quickly it is developing. We feel that a new piece of software might not just change how we work but completely restructure the way our society functions. That’s why I see technology moguls like Elon Musk attaining celebrity status, so that we don’t just want to buy their products but to celebrate or criticize their vision of the future.

The latest fascination between my dad and his friend has been the utopia of self-driving cars. This latest vision accompanies a mid-life crisis oriented around luxurious, long-term purchases. Sharing the same vision as the president of Lyft, John Zimmer, they believe all cars will be autonomous and shared amongst everyone. Therefore, they’ve convinced themselves that the next car they buy will be the last they’ll actually own, so, of course, they’re planning for a big purchase of a sports car, or some luxury car they can keep in a garage as a testament to the lost era of privately owned, fancy cars. Despite the momentary satisfaction such a large purchase gives them, they couldn’t be more excited for this new utopia of all self-driving cars. Just thinking about it for a bit, I too am, admittedly, a bit enthralled. Just imagine never having the responsibility of driving, probably the main source of concern in daily social activities. If we consider a world where all vehicles are autonomous, it almost seems that humans never should have been entrusted with that activity, which is still the cause of so many deaths a year. Still, even if there would be statistically far fewer crashes with autonomous cars, stories like the fatal crash from the New York Times still terrify us. It sounds horrifying to be in a car you realize is not stopping for something you would have definitely anticipated. No one would want to be in that vehicle, but many would rather replace all unsafe drivers with an autonomous system. The Gizmodo article touches on this issue. We all want self-driving cars to avoid pedestrians whenever possible, but nobody within the car would want it to sacrifice the driver to save pedestrians outside. Still, that might be the type of programming we decide upon if we implement autonomous vehicles universally. This technology is not just a new set of products, some enjoyable commodity like the luxury car my father wants to buy.
It will radically alter how we go about our daily lives and social activities.

Intellectual Chauvinists and the Machines (Reading 11)

Probably once a month I receive a call from my father about a new horrific vision involving AI. There seems to be a new book out around every month in the bio-psycho-neuro-eschatological horror genre, usually sporting a few vatic quotes from the great seer Elon Musk, who certainly does not have a conflict of interest in these matters. It was refreshing to see a set of articles that did not resort to so many scare tactics to cultivate interest. For instance, the article by MIT Technology Review on AlphaGo prominently featured Demis Hassabis. He saw research into neuroscience more as clues to clever system designs than as a bold expedition for the foundation of experience. I also appreciated that he reportedly gave Elon Musk a much-needed “anti-pep talk” about AI. The article quoted Elon Musk comparing AI to “summoning the demon,” as though there is some key to the seventh seal we just haven’t quite outlined in all our neuroscience data.

Tania Lombrozo offers a clever description of the issues we face with AI: we are still chauvinists about thinking. With computer technology suddenly surpassing humans in chess and Go, we do our best to shore up the limits of human consciousness, suddenly caring about Descartes and other philosophers of mind. Koch and Tononi, writing for IEEE, somehow condense the Cartesian reduction into a purely formal theorem they call IIT, integrated information theory, as a definition of what consciousness is. Following this theorem, they ultimately conclude that consciousness cannot be achieved in current hardware systems but could be accomplished with neuromorphic hardware (shocker, coming from the institute of electrical engineers, that the solution would be found in hardware). They put forward a materialist argument that to reproduce consciousness you must replicate the physical system where it emerges. We are always delaying AI to the not yet, somewhat like Derrida’s messianicity without a Messiah. Whenever computer programs appear to shatter the barriers set up around human consciousness, we propose new regulations that would establish more comprehensive limits. We are chauvinists of consciousness, warding away strange vigils of conscious life. When we talk about consciousness, we expect to find some identifiable trait in our brains, like a certain pattern of neuron development. The problem could be that the terms consciousness and thinking might not designate any clearly delimited characteristics. Wittgenstein says in his Blue Book that when we ask if machines can think, we could just as well be asking if three has a color. The word ‘think’ seems somehow misapplied, as though we have reached outside its everyday use in thinking about new advances in technology. Thinkers such as Deleuze have abandoned any attempt to identify and characterize consciousness or thinking; the thinking subject becomes one of many interlocking machines propelled by some desiring force.
As Lombrozo says, we often outsource our thinking to our physical and social environment. Consciousness is not neatly contained as we would prefer, whether drawing on Descartes or Husserl’s doctrine of intentionality, but interwoven with the immanent forces of the environment.

Searle’s Chinese room is a fitting refutation of the Turing test because it stages that criterion within its true problem space: language. We may have a machine that perfectly engages with a speaking subject, but has that machine ever engaged with language as we do? Or has it simply followed given instructions? The image of the Chinese room reminds me of Beckett’s plays, which some read as occurring within a human head, an empty Cartesian subject endlessly amusing itself. Language always seems detached from its true import, muddled in confusions and solipsisms, much like it is for the mechanistic subject in Searle’s Chinese room.

The problem of gender in AI studies has not been sufficiently addressed. The fascination with reproducing human life in technology goes back at least to Mary Shelley’s Frankenstein. Yes, she wanted to scare us about technology, but the crucial detail is that Frankenstein wants to create life without a woman. There is a denial of the feminine that leads to his monster. I believe Beckett had a similar concern. When a theatre group tried to stage a version of Waiting for Godot with an all-female cast (a play with five parts, all male), Beckett reportedly threatened them with a lawsuit if they carried on with that casting. This emptying of consciousness had a specifically male infertility, something he did not believe could be conveyed without male leads. When Vladimir and Estragon consider hanging themselves, their greatest fear is that they will take on an erection and life will continue through a homunculus that sprouts from the ground below them. There are no sufficient measures to ensure this monstrosity called consciousness continues, and so, they wait.

Media and Technology (Reading 10)

The last fake news article I remember encountering on Facebook was, ironically, about Facebook itself. My mother actually posted it on my wall. It was a short article about the Facebook AI labs. Headlined with a picture of a tall white robot, the article reported on a failed language-generation experiment in their research lab. The picture, of course, had nothing to do with Facebook’s AI labs; I actually traced it to a tech booth hosted by TOSY, a Vietnamese robotic toy company. The article itself does not appear to make any false claims. It says that the Facebook AI research lab had to shut down their experiment when two bots started developing their own idiosyncrasies of language generation, in a way “making their own language.” That is what happened, but coupled with the picture and the claim that two AIs were communicating in their own, invented form, the clear implication was that we had Terminator bots secretly communicating. The researchers for Facebook, I have heard, were quite upset about this post, but somehow it stayed up on their own website.

This incident does reinforce Mark Zuckerberg’s claim that Facebook is a technology company, not a media company. No matter how their image might be distorted by the wild media conglomerate they created, they are making major advances in AI and other technologies. Still, as ambitious and innovative as their research might be, what purpose would it serve without the consumption of media that channels so many advertising dollars toward them? As indifferent as they might be to the content of the media, they would not create any technology without it. All this consumption and dissemination of media has become a bit like the material layer of production that underlies the superstructure of capitalist ideology: supposedly not a matter of concern for the capitalist elites, but actually crucial to their way of existence.

In my last post I touched upon how the message cannot be extricated from its messenger. In other words, there is no pure content beneath its formal presentation. John Ashbery reaches a similar insight as he reflects upon the painting by Parmigianino, Self-Portrait in a Convex Mirror, in a poem of the same title. The self is not experienced immanently but as mediated, filtered through shifting sets of phenomena and associations: “How many people came and stayed a certain time, / Uttered light or dark speech that became part of you / Like light behind windblown fog and sand, / Filtered and influenced by it, until no part / Remains that is surely you.” Looking at a distorted self-portrait, he sees how the self, or any fact of experience, cannot be presented purely and completely but only partially, through some kind of mediation. In Emily Dickinson’s terms, the truth is always “slant.” Facebook has not come to terms with the influence their algorithms exert on media and the perception of facts. The Vox article, about how Mark Zuckerberg is in denial about how Facebook harms our politics, cited a conference at which he stated that it showed a lack of empathy to believe fake news stories would sway someone’s voting decision. In other words, even if people had been exposed to these sources, the stories should stay peripheral to their true beliefs and major life decisions. Well, I would argue against him, citing that poem by John Ashbery. The self is not experienced immanently, as a Cartesian center against which you can check all this misinformation, but at a remove, mediated through different aspects of experience. I would say it is a limited empathy to believe everyone has some privileged position from which to comprehend the significance of these stories. As Obama said in a speech, it stirs up a cloud of confusion through which we cannot see the ground below.

The interview with the Denver Guardian founder, Coler, was perhaps the most fascinating and disturbing article on this reading list. I couldn’t believe how nonchalant he was, especially since he had realized early on that fake news was a problem and planned to invade the echo chamber. Later in his career, he became one of the major disseminators of fake news, raking in the cash and in complete denial over the effect it has on the public. He reminded me of Kurtz from Conrad’s Heart of Darkness, someone intrigued by the more obscure and pernicious aspects of his daily life, ultimately drawn into those darker realms to become integral to their process.

Project 03 Reflection

I had not known much about WikiLeaks before doing research for this project. I had never been much bothered by all this talk of surveillance; I considered it more a form of entertainment, like a fantasy that we are all living in the Big Brother dystopia we all read about. I do remember the Vault 7 leaks, and I will say that I did not object to the leak. I do believe vulnerabilities are left in our devices for the purpose of surveillance, and it is probably better to be made aware of that. As Edward Snowden said, we should at least know about those activities and decide if that is what we need as a state. I do not agree with such national-security arguments; I think that is just the go-to angle for pushing greater control by the state. It is a certain way of viewing the world, that we are under constant threat of internal attacks, that lets the government push an agenda of greater control. Information is the richest and most plentiful new source of power, and our whole state ideology will gather around maintaining control of our information.

Still, I am ultimately an opponent of WikiLeaks. I think your second question, whether we can separate the message from the messenger, captures pretty well my hesitation towards them. I do not see them as a pure conduit for protected information, a well-intended messenger sending out a startling, but truthful, message. I don’t believe separating the message from the messenger, as you say, is ever possible. Derrida says a similar thing about a section in Ulysses detailing a postcard in the hands of someone other than its addressee. The letter without an addressee is a free-floating signifier, not grounded in its practical purpose, and thus poses an interpretative problem. How can we assess the worth and meaning of a message without knowing the intention of its delivery? The information released by WikiLeaks cannot be separated from their political purposes. I was already suspicious of WikiLeaks’ intended mission (I did not buy for a second that they are a non-profit), but I read an article before our podcast that really made me start condemning them. It was called “WikiLeaks Has Officially Lost the Moral High Ground,” from Wired. It showed how WikiLeaks has clearly been aligned with alt-right circles and Russian interests in their recent activity. It makes a lot of sense. Their shock-and-awe tactics and defiance of big government fit well with the petulant, conspiring circles of the alt-right. Sensational and directionless as they were, it was only a matter of time before a radical political mission swept WikiLeaks up as some badge of fearless honesty.

I was at first willing to sympathize somewhat with Julian Assange’s intended mission. I thought whistleblowing itself could be beneficial as a way to critique the standard ideology of state security and government control. I am still not opposed to the activity, but I believe it is impossible to separate that whistleblowing from the political motives that either underlie it or carry it away.

Taking Apart Free Speech (Reading 09)

Before I begin, I’d like to reflect on the absolute freedom I have in approaching this blog post. I do have a deadline for tomorrow, so I am compelled to write something, but I have been given virtually no content requirements. The prompts for this reading gave me a choice of two almost antithetical positions, and I can formulate almost any argument in support of those two positions. What I cannot do is keep silent and end my post here. John Ashbery begins his book Three Poems with a kind of statement of purpose: “I thought that if I could put it all down, that would be one way. And next the thought came to me that to leave all out would be another, and true, way.” That is not exactly what readers want to hear ahead of a one-hundred-or-so-page book, that the most important details might be left out. In a personal letter, Wittgenstein expresses a similar attitude toward his monumental Tractatus Logico-Philosophicus, that the most important half was what he left unwritten. Still, both writers had to commit something to paper and publish it, even if they might have favored what they never put down. The need to respond is the one constraint every speaking subject must face; the primary restriction in any writing is the need to write, even when the topic is incommensurable with our system of language. As Beckett says in The Unnamable, even if we keep silent, we must consider what sort of silence we keep.

I almost titled this post “A Phenomenology of Free Speech,” a close analysis of what constitutes free speech and how we experience it. I chose instead “taking apart,” thinking that a careless ear might elide it as “taking away,” as we say rights or privileges can be taken away. “Taking apart” also suggests the hermeneutic method of deconstruction, the sulky younger brother of phenomenology. Deconstruction better captures what I am attempting with respect to free speech: not to delimit its essence but to destabilize it, and to show that it has no true referent. All the articles this week treat free speech as a positive entity, an act that can be performed and that is more readily available in the United States. This was apparent in the Atlantic article, “Why Google Quit China,” which casts the United States as the nation with some of the most permissive freedom-of-speech laws. Murong Xuecun expresses this mentality, that freedom of speech is a positive, identifiable condition, most clearly with his observation on China: “Only a small number of people sense what they are lacking.” He sees free information as a positive quality that can be lacked, something that is manifestly withheld by a carefully administered firewall. Although, yes, there are manifest restrictions to information flow, there are also restrictions around our own discourse that elude us. Notice in the BBC article on Facebook’s methods to control terrorist content that the one term they completely leave out is censorship. By definition, censorship is what Facebook needs to accomplish, to remove rhetoric they deem unsafe or toxic, but censorship is only applied in opposition to the positivity of free speech. The objectionable rhetoric for our society does not assume the ontological status of free speech, and so we do not say it is censored. We speak of free speech like property, as a positive quality guaranteed by law.
As I tried to show through a reflection on silence and the need to respond, language is not an entity that can be completely grasped but is always concomitant with absence and indeterminacy.

Language is not just a capability, either impeded or free, but our way of engaging with the world. Language itself is a kind of technology and can only be experienced through a certain set of implicit restrictions. Amid all the debates about information technology, and how those structures can restrict speech, we have not sufficiently considered the ontology of language. Speech in itself is a kind of restriction, a set of predetermined conditions that are not fully controlled by the speaking subject but in a way speak through us.

http://www.bbc.com/news/technology-40290258

https://www.theatlantic.com/technology/archive/2016/01/why-google-quit-china-and-why-its-heading-back/424482/

Overcoding and the Act of Naming (Reading 08)

In hackathons I enjoy formulating ideas the most. I can throw out possible goals for the next day or two without an eye towards minor practical concerns, and the group can stray wherever our imagination takes us. For one hackathon in Dublin, however, I chose to join because they had an idea I never thought achievable. It was a commendable objective: an app that could assist international asylum seekers with their cases. The app could mine a user’s Facebook data to determine whether they were homosexual, and therefore deserving of state-granted asylum. Their Facebook account would not actually hold this data, for that would open them up to even greater persecution. The proposed app could mine the user’s various page likes and activity for patterns that generally indicate homosexuality. With machine learning, the app could actually reach an understanding not accessible to the average viewer.

Incredible as that project was, I had to wonder how we would arrive at the baseline statistics to train our model. Sure, we could use the activity of anyone who had openly identified as homosexual as training data, but how many of those users considered sexuality a spectrum? In person, they would probably have more to say about their love and aspirations than Facebook’s standard “Interested in _____”. Before we even arrived at our problem, there was some arbitrary, even impossible, decision that set apart some particular group as the marginalized, targeted by a law, in this case altruistic, as a population deserving help.
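For what it’s worth, the statistical machinery the team had in mind is simple to sketch. Below is a minimal, purely illustrative Naive Bayes classifier over a “bag of likes”; every page name, user, and label here is invented, and this is not the hackathon’s actual code. Any real system would still face the consent and labeling problems I just described.

```python
# Toy sketch: Naive Bayes over "bag of likes" features.
# All page names and labels are invented for illustration.
from collections import Counter
import math

# Training data: each user's page likes plus a self-reported 0/1 label.
# Real labels could come only from consenting, openly identified users,
# with all the spectrum caveats noted above.
train = [
    ("indie_films hiking_club board_games", 1),
    ("indie_films poetry_society board_games", 1),
    ("football_fans grill_masters truck_talk", 0),
    ("football_fans truck_talk fishing_tips", 0),
]

# Count how often each like-token appears under each label.
counts = {0: Counter(), 1: Counter()}
totals = {0: 0, 1: 0}
for likes, label in train:
    tokens = likes.split()
    counts[label].update(tokens)
    totals[label] += len(tokens)

vocab = set(counts[0]) | set(counts[1])

def predict(likes: str) -> int:
    """Return the more likely label, using add-one smoothing."""
    scores = {}
    for label in (0, 1):
        score = math.log(0.5)  # equal class priors in this toy set
        for token in likes.split():
            p = (counts[label][token] + 1) / (totals[label] + len(vocab))
            score += math.log(p)
        scores[label] = score
    return max(scores, key=scores.get)

print(predict("poetry_society hiking_club"))  # tokens seen only under label 1
```

The arbitrary decision the paragraph worries about is visible right in the training list: someone has to decide in advance what the 0/1 labels mean, before any learning happens.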

Samuel Beckett once called this problem with categorization the “inherent barbarism of the name,” confronted in his reading of Proust. There is a primary reduction in the very act of naming, reducing pre-subjective experience to a single temporal occasion. It is comparable to what Deleuze and Guattari call overcoding, where material flows are deterritorialized from tribal inscriptions and reterritorialized onto the body of the despot, a kind of origin story of the signifier, where the divine father, the despot, takes credit for all means of production. I could not help but smirk at one of the topics of discussion for this week’s reading, “corporate personhood,” and think back to the Oedipalization D&G see at work in capitalism. Capitalism deterritorializes flows of capital to extract surplus and reterritorializes that gain onto persons, the figures of the Oedipal triangle. A similar process is at work as massive corporations defend their right to campaign and fund particular candidates, drawing upon First Amendment rights explicitly granted to individuals. The stance is that corporations form a collective interest that should be defended as free speech, even if that speech is an allocation of funds. Facing decentralization through the flow of capital, corporations uphold a façade of subjective intent, directing funds towards some symbolic purpose on the stage of politics.

What I fear most out of this pantomime is that the political stage will turn around and overcode the flows of production that underlie it. The political stage will start to order the information, not just extract it. With the threat of the Muslim registry, the risk has to do not with privacy invasion but with an exertion of despotic, codifying force that congeals the flow of information into a list of persons, ostensibly falling under a certain denomination, ideology, sexuality, or identity (itself a kind of construction). I was startled to see that an entire book had been written on the connection between IBM and the Holocaust. The method of encoding through punch cards was essential to the systematic genocide committed by the Nazis. It makes me wonder whether Heidegger, whom I wrote about last post, had witnessed the dependence of the Nazi regime (of which he was, disturbingly, a complicit member) on such computational technologies. Because his thought on technology is so valuable for realizing the impact and perils of computers, I imagine he was at least marginally aware of the expanding application of information technologies.

As far as the final question, about moral responsibility under corporate personhood, I would say that it is not so much corporate personhood but personhood in general that is untenable. There is no primary stage over which freedom of speech can be exercised. There are just innumerable, interconnecting flows of desire and capital, like the monetary funds that our law dubiously defends as free speech.

Sensors for Everything (Reading 07)

Microsoft puts forth some laudable goals in their story, “In the Cloud We Trust.” I respect that they prioritize maintaining privacy in their systems and complying with government regulations, even as they defend the rights of individual users when governments overstep their bounds in the unmarked territory that software opens up. Still, one proclamation surprised me: “we need to move technology forward, but we need to do so in a way that ensures timeless values will endure.” Timeless values, I suppose, include human rights such as privacy, protection from harm, and an opportunity to achieve. Whatever they are, the article includes them like an addendum to the most certain principle we have today, “we need to move technology forward.” What exactly does moving technology forward look like? Microsoft is a technology company, so their express purpose and sole way of proceeding is to advance technology in some way. How they are meant to progress is the perennial problem of these technology companies, and many careers are spent just thinking about what the next step in technology should be (R&D). Now that computers have become so versatile and reliable, the next area of development has been to put computers in more and more places: the Internet of Things.

What led to this idea? I sure never thought I would need computational power to use everyday items, but I can see how it appeals to engineers testing household products in a lab. With all the effort that goes into user testing and validation, it would be great if we had all this data just from the millions of users living with these products. But as Heidegger might say, that doesn’t show us the essence of the Internet of Things, which issues from the essence of modern technology. I was just reading his essay, “The Question Concerning Technology,” and one quote seems particularly applicable to this phenomenon of IoT: “physics…will never be able to renounce this one thing: that nature report itself in some way or other that is identifiable through calculation and that it remain orderable as a system of information. This system is then determined by a causality that has changed once again…It seems as though causality is shrinking into a reporting–a reporting challenged forth–of standing-reserves that must be guaranteed either simultaneously or in a sequence” (Basic Writings, 328). He has in mind the Heisenberg uncertainty principle in this passage, where modern physics has resigned itself to an inscrutability of representation. That resignation, to a kind of science that cannot be visualized or sensibly represented, occurs because technology demands a quantification and holding in reserve of resources, not letting things be as they are. Briefly, Heidegger’s project was to establish a fundamental ontology, an understanding of Being. He approached this lofty goal through a genealogy of being, inspired by Nietzsche’s genealogy of morals. So he shows how our understanding of being has changed historically rather than remaining a single, fixed principle. One way being changes, or reveals itself in different ways, is through technology. Technology leads us to take things as things to be harnessed, to collect them as standing-reserve.
When he describes causality, once the formal or efficient causes in nature, shrinking into a kind of reporting in which things present themselves as sources of information, I couldn’t help but think of our push to put sensors in just about everything we interact with. The way to move technology forward is to gather more information. Then our machines can somehow adapt to our lifestyle and create value. After taking rivers and mountains as sources for energy, something else to be placed in reserve, we are now harnessing information from our everyday life. I’ve heard it said that data is the new oil. Now companies don’t just need to sell you the product; they need the information of how you use it. Consumers have become the resources, so even after the horrifying exposé about Jeep’s vulnerability to outside hackers, I imagine companies will do anything to make us feel secure rather than remove those internet systems.

Surveillance and Confession (Reading 06)

“If you have nothing to hide you have nothing to fear.” That phrase carries me back to elementary school, anxiously tracing the patterned floor with my eyes and counting the students who made the slow loop out of the confessional booth and up towards the altar. I was next: with difficulty I lifted my feet out of the kneeler and shuffled over to the secluded corner where Father Gary sat. When I sat down I had solidified the words in my mind to such an extent that they tumbled out of me, like I had just turned on a recording. I never felt like I covered everything, but I had mustered up something to let me go home free.

I never quite knew if I had something to hide. If I did, how did I stop hiding it? I confessed sins that were unintentional but revealed later as wrong. For instance, I remarked to my dad as we drove home from church how strange it was that we drank Jesus’ blood, and he said sternly, “That’s disrespectful.” I sat in silence and contemplated how my statement constituted disrespect, and later confessed what I felt I had done.

Whenever I thought about my sins, I was considering how my words or actions had been interpreted, what message they had carried. Perhaps my actions were malicious, but when reflecting on them I had this lingering sense that they were a misunderstanding I couldn’t set straight. So when I hear this phrase, “nothing to hide: nothing to fear,” I am reminded of my anxious elementary school days trying to situate myself amongst voices of authority, to ensure I was not hiding anything. It seems to me that these debates about surveillance center on a similar concern: how our correspondences are used, what status they have in this stage of international politics and telecommunications.

The Ars Technica article, “American Spies,” touched on the ambiguous political terminology around surveillance practices. A collection of data that made use of a search term, for instance, would no longer be called “bulk,” which refers to the massive, indiscriminate collection of randomized data. We don’t really have legalistic terms with sufficient granularity to capture the many ways we handle and manipulate data. I noticed this in the article about Obama’s talk at SXSW, where, though admitting that he was no software engineer, he kept wanting to apply metaphors of keys and locked boxes. Cryptography and codebreaking have propelled each other to the point that P and NP must be brought in to form a complete picture of the dilemma, and common metaphors won’t carry much weight. Microsoft has to map their software in the reverse direction, fitting its technical potential into existing legal models. They invoked freedom of speech to allow notifications for users when a government intercepts their information. We never think of software features as an instance of free speech, but why should this private company be restricted from alerting users to information as it comes under their grasp? All these controversies remind me of what Baudrillard called a reality principle in distress: we are yearning to establish some stable ground for the drift of the signifier, to know how our messages will be used and what implications they carry.