Object-Oriented Futures

Digital Rights in an Era of High Technology
By: Jacob Taylor
For GS401

This is the paper which, ideally, is the culmination of my education at this institution. I hope I've done it some justice.

A slide deck accompanied the presentation I gave of this paper.

Technology is changing the world quickly. Moore's Law, the 1965 observation that the number of transistors that fit on a chip doubles roughly every two years, has held ever since[1]. That means our devices keep getting smaller in general, and where a component stays the same size (like the camera in a smartphone), it instead gains functionality rapidly: top-shelf smartphone cameras now have optical image stabilization, a feature once reserved for dedicated digital cameras, and before that for professional SLRs. As this shrinking continues, the capacity to embed a computer in an object increases. Even now we have smartwatches capable of operating without a cell phone nearby[2], and in a short time they will be fully fledged computers occupying only the space of a wristwatch.

The devices (they are devices now, not merely objects) around us are increasing in number and sophistication every day. Some of them can communicate with each other, while others cannot. With the advent of the Bitcoin blockchain[3], these devices will be able to communicate with each other by passing messages back and forth (even if they aren't specifically integrated by the companies producing them), and will also have a shared record of that communication. A blockchain is like a pile of papers, where the only actions that can be taken are to read the pile in order, or to add a single new sheet to the top. The pile is shared between all the people and computers using that particular pile to amass knowledge (or other information). It costs money, or time, to add a sheet of paper to the pile; this creates a financial incentive to guard the integrity of the papers, and to ensure that when the papers are duplicated, the duplication is faithful. For people, this becomes a way to store, exchange, and codify their world-views in software. For devices, it will not be much different. Imagine a person's memory being stored page by page in a gigantic shared binder, where everyone who opts in can add their memories to the top of the collection, and anyone with access to the binder stores it, replicates it to others, and contributes to it. Every memory is archived and shared among many other people, who also store and share all their memories. This, combined with other advances in technology, will soon be a very powerful force on what the future turns out to be.
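The pile-of-papers analogy can be made concrete in a few lines of code. What follows is a minimal, hypothetical sketch (not Bitcoin's actual implementation, which adds proof-of-work, networking, and much more): each "sheet" carries a hash of the one beneath it, so tampering with any earlier sheet is detectable by anyone holding a copy of the pile.

```python
import hashlib
import json

def block_hash(contents):
    # Hash the block's contents, including the previous block's hash,
    # so every block commits to the entire history beneath it.
    payload = json.dumps(contents, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append(chain, data):
    # Adding a "sheet" to the top is the only write operation allowed.
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"data": data, "prev": prev}
    block["hash"] = block_hash({"data": data, "prev": prev})
    chain.append(block)

def verify(chain):
    # Read the pile in order, checking that each sheet still matches
    # the hash recorded by the sheet above it.
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i > 0 else "0" * 64
        if block["prev"] != expected_prev:
            return False
        if block["hash"] != block_hash({"data": block["data"], "prev": block["prev"]}):
            return False
    return True
```

Changing any earlier block changes its hash and breaks every link above it, which is what makes faithful duplication cheap for everyone to check.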

The part of this change that I want to talk about is the way it may force humans (humanity, in the largest sense) to reconsider how we view the world. Right now, many people (at least in these United States, and certainly in places outside them) consider Nature a resource for industry, something humans can exploit. More politically or ecologically conscious people think about the sustainability of this exploitation, and so suggest that industry should be careful not to use up those resources, and should plan to maintain them in some way (loggers planting new forests to replace those cut down, for example). This paper suggests that the underlying view, that nature is exploitable, is contested. Contested not in the sense that some people argue we should not exploit the resources, but in the sense that this entire manner of thinking about the world may be gravely wrong. This paper will argue that an object-oriented ontology is a better way to view the world, that it will give us a much clearer view of the future, and much greater utility in thinking about that future.

First, let's start off with where we are now – which will include current technologies, likely advances, and a philosophy section to describe what an object-oriented ontology is – and then a section on what the implications of all of this may be, both within the current nation-state system and within the broader context of "what will this do to humanity?".

Where We're At


Current technologies are developing rapidly. Moore's Law is still intact, and by some claims it has been surpassed by makers of computer processors (the part of a computing device that functions as its brain, doing the thinking but not storing any information). The current and near-future technologies that will help shape this future are various; improved battery technology, 3D transistors, and blockchains are among the most important. 3D transistors (the small parts of a processor that switch on and off to do the work of calculation) enable two things: much more powerful computing devices (if the size of current devices is maintained), and much lower power-consumption devices (if the advances are used to shrink them). As usual, both will happen: a balance will be struck between reducing power usage and reducing size. With better batteries, those devices will last longer without a charge – we already have small sensors that can be placed on things to report data, though their batteries die quickly and their functionality is limited at present[4]. Blockchains are one software technology (there are, of course, others) that may allow unrelated devices from unrelated manufacturers to share knowledge, communicate, and coordinate actions[5]. This combination alone will be powerful, and these are only three advances among many.

The best example of how things are right now is the marketing term "the Internet of Things" (IoT). The IoT is what corporate marketing departments call all of the objects with integrated sensors that are, and increasingly will be, a presence in our environment. For clarity: this sensor data is often sent to "the cloud", collections of computers run by corporations, whose resources other people and companies pay to share. The Internet of Things is all the devices that are (and soon will be, in increasing numbers) contributing data for analysis by these shared cloud computing systems. Right now, the IoT includes things as varied as thermostats, building sensors, home security cameras, light bulbs, traffic signals, and even industrial control systems. It also includes what are termed "smart buildings"[6], which manage their own resource consumption (electricity, water, lights, air conditioning and heating, and so on) via sensors, building-management software, and control systems. Smart buildings are already cognizant of their inhabitants and those inhabitants' schedules: a building can coordinate heating, cooling, and other resources to ensure a comfortable environment for the humans within it, noticing their comings and goings and adjusting its systems automatically. This cognizance and flexibility will only improve with time.

One further important note (before the philosophy section) is on the idea of object cognizance[7]. Objects (take a smartphone as an example) can, in a limited sense, understand the world around them through sensors. That sensor data can be combined with other computing devices (cloud computers, for example) to assist in accurate processing (understanding) of what is happening. Just as we have eyes, ears, and nerve endings to perceive the world, our digital objects have various sensors through which they perceive it. The "improved location accuracy" offered in most smartphones lets them use the list of, and distance to, nearby WiFi networks to improve GPS accuracy. Current, well-tested sensor technologies include accelerometers, GPS, WiFi, Bluetooth, temperature and humidity sensors, LiDAR[8], microphones, gyroscopes, RFID[9], NFC[10], and more. Most smartphones already have a large number of these built in. These sensors, with dedicated processors and assistance from other "cloud" computing devices, allow objects to consider the world around them. The applications can appear simplistic, such as giving people location-relevant dining suggestions[11], but anyone with a smartphone now walks around with a collection of sensors constantly scanning the world, attempting to understand where the phone is and what is happening around it. As processing power and batteries improve (supporting longer-lasting devices, more computing power, or a bit of both), devices will become still more cognizant of their environment. This is what I mean by object cognizance: the ability of our objects (a smartphone, in this example) to take notice of the world around them.


The non-technological advance that should be noted here is what's called an object-oriented ontology (OOO). In philosophy, ontology is the study of existence – of how things "be". The philosophic study of ontology looks at how things exist, how they interact with their environment, and how they perceive their existence in that environment. It does not focus on how things know (that is epistemology), but on how everything perceives everything else (everything on earth, and off it). In an OOO, the ontology is flat, meaning that no superiority, precedence, or privilege is given to any particular thing's style of existence (being). A chair, a human, the color purple, existentialism, and climate change are all objects. It's a bit funny to think about, but in an OOO each of these things – some ideas, some "objects", some people – has its own unique way of existing, of being in and observing the world around it, and of interacting with it. In an OOO, humans do not get to determine what "counts" as existing for everything else, because we're just objects too! As part of this change, we begin to talk about how objects (smartphones, Buddha statues, etc.) have autonomy (can make their own decisions, with little or no external influence). Nature ceases to be something which can be exploited – because it is not something we own, manage, or derive rights from. Nature and its constituent parts all exist equal to us, each having its own autonomy.

This shift in thinking breaks what is currently called the "subject-object" relationship. The subject is the eyes through which the world is seen and acted upon. The subject is granted human characteristics, full autonomy, and so on. The object is subject to those actions and that vision – meaning we do not consider the object to be fully human, or to have autonomy, or an opinion worth considering. We currently think that way about chairs, because we do not expect a chair to respond to our questions. This may change, soon. Put another way, the subject's existence is privileged above the object's. We humans do this every day, privileging our observations, thoughts, understandings, and ways of being over everything else on earth. We treat Nature as something external to us, an exploitable resource. Perhaps we choose to "manage" that resource in a sustainable fashion, but the relationship hasn't changed: in how we speak and think of Nature, we still frame a power relationship with it. Nature has never had a say in how we relate to it, outside of those few who advocate directly on its behalf within our societies. We look at nature and we determine the problems, the solutions, and the actions to take, either within nature or on its supposed behalf.

The reason this philosophy will be important in the future goes back to the proliferation of devices which sense the world around them, which have a sort of cognizance of their environment. As these technologies improve, it will be increasingly untenable to hold human observation up as the gold standard by which we judge reality. The sensors in objects may soon be collectively better than certain human senses – and they will have much more computing power at their disposal, able to pick up on patterns or events that are imperceptible to humans. This directly challenges our notion of observation, as we will not be nearly as effective as these devices at taking notice of things in the world. This matters because, right now, it is still the notice a human gives to something that grants it meaning, or puts it into existence. The old thought experiment "if a tree falls in the forest and nobody is around to hear it, does it really make a sound?" brings this out quickly – if nothing observed the event, perhaps it did not actually exist. Put another way, something must be observed by a human for it to exist (a stance close to the correlationism discussed in the literature review). While philosophy has in some quarters moved on from this question and this conclusion, a large number of people evidently still think this way (judging purely from the policies, actions, and conjectures people make).

Timothy Morton proposed one extension to the concept of an "object" – the hyperobject – an object so large that it transcends space and time. The best contemporary example of a hyperobject is global warming. It is a process that affects the entire earth, is not limited to any particular weather event, and is only barely observable by a human over the course of an entire lifetime. It is, however, easily spotted by networks of sensors (computing devices' "eyes"). Humans observe global warming over human lifetimes by modeling the data collected by these weather sensors, not by direct observation. It is feasible for a human to understand the changing climate at one specific place by looking at a historical data set for that place, but it is infeasible to understand the process across the entire earth that way (except within a computer model).

What an Object-Oriented Ontology does is erect a system of pure democracy around the ways things exist, and the ways they observe that existence. Because no way of existing or observing can be privileged over any other (such privileging, in human societies, is called hegemony), we step into a democracy of objects. What we do with this democracy is up to us – if all the modes of existence are drawn upon, many new interpretations of the world (world-views) will be recognized as valid. In some senses, this would be an even purer form of democracy than we have ever had. Democracy, as an ideal, is just that: a (structurally) flat way of dealing with the many people in a region (town, state, nation), where everyone's voice is roughly equal and decisions tend to be made by consensus of the governed.

A Democracy of Objects?

The internet is, in its purest form, an actual utopia. Any thing is free to connect and to send whatever data it likes, wherever it likes. Communication is information, and information is not advocacy; the internet is a neutral substrate for transmitting it. Intermediate devices can choose to pass data on towards its destination, or not. Outside of particular websites like Facebook or Google, this is the rule. The internet was designed this way, and most people building new internet systems since have followed those maxims. The purity of this vision is complicated by very real jurisdictional issues – what some call the political layer of the internet – where companies running internet services do not want to work with one another, or are discouraged or prevented by law from doing so, or from passing on certain communications. YouTube, for example, is blocked in around a dozen countries at any one time (the list changes with the political environment). But here I mean to differentiate between the internet and the web. The web is what you can access through a web browser – think of various social networking or banking sites. The internet is all of everything: stock trading systems, industrial control systems, nuclear power plants, solar panels, luggage[12], thermostats[13], high-volume air-conditioning systems, weather stations, clocks, and yes, Facebook and Google (and all their myriad servers), all of which can connect and exchange data with other systems.

What makes the internet utopian is that it is perfect in some ways and deeply flawed in others. Any thing can communicate with any other internet-connected thing, as much or as little, as rarely or as frequently, as it (or its programmers and operators) desires. Communicate too much or too often and you have what's called a denial-of-service attack, where a device floods another with so much communication that one or both can no longer speak. But this communication, just as with humans, is how decisions get made between objects. It should be feasible in the near future to build a democracy between our objects, for resource allocation and other sorts of decision-making. The software for such a thing is already being prepared: IBM is working on a system called "ADEPT" which, even in its first version, will allow a washing machine to coordinate the purchase of new detergent just before it runs out. So even if it is not "democracy" as we usually think of it, the building blocks are being put in place for distributed decision-making between objects.

The Political Layer of the Internet

There is, within the technology and civil rights communities, an ongoing debate about whether or not computer code counts as speech. I'm sure that whatever the final conclusion of that debate turns out to be, it will be more nuanced than "Yes" or "No", but this is a part of our future. Writing code may at some point be widely considered a political act (just as some other acts of speech are). Some organizations already do this, writing code as political speech[14].

The fact that cybersecurity is a burgeoning industry[15], attracting the support of various governments, indicates that governments are thinking about the future of technology – that they are currently trying to decide what security (to them) really looks like online. They are, rightly, concerned with the safety and security of their populace. The US Department of Homeland Security[16] recently added one company's product to a special list: if you (as a company) run that product, you cannot be found liable in court for "cyber terrorism"[17]. This is what it looks like when business and government try to figure out how these new realities will be handled within the systems of ideas we already have.

These old ideas become even weirder when applied to things that democratic states traditionally do – like administering elections. You can jump directly to questions like "What does it mean when a rival nation attacks a neighbor nation's online voting system during a vote?"[18], which is difficult to answer on its face. It is not physical violence or destruction; we already know how to handle those (militaries and diplomats are skilled at such responses).

These small examples lay out the political layer of this transition to a digital world, including how we grapple with objects having a cognizance of their world. When politicians speak of cybersecurity, part of that conversation is about who has access to the data our devices emit, beyond whoever owns and operates the device. Or, to put it another way: who actually owns the data is undetermined. Thanks to Edward Snowden, we know that the US National Security Agency lays claim to a duplicate of most data on the internet, for intelligence purposes. This represents a transition to a mode of shared ownership which is not obvious to the data's primary owners.

Where We Might Go

I argue in favor of a sort of digital citizenship. This falls within the nation-state, as a transition technology (something that works within the currently broken system to make the change to a less-broken system a little less of a smack in the face). It seems at this point inevitable that Silicon Valley specifically, and technologists generally, will figure out ways to embed computing devices in most everyday objects, and to have those objects communicate (in some sense) via the internet. A concrete suggestion that falls (somewhat awkwardly) under the purview of current (hegemonic) world systems would be using your Gmail account to identify yourself. Envision a future where, passing through security at the airport, you present your Google or Yahoo! account credentials – not the password, but a mathematical assurance that you are you and that the account is yours – to board the plane. All the airport needs to know is that it can trust the assurance (and it can: the math is good, and so is the company as a third-party arbiter). The airport would not need to know who you are, where you live, or what sort of car you drive. It can match the assurance from when you purchased the ticket (using the same account) against the one presented at security, verify that they match, and let you board. Such a system would be somewhat difficult to implement, but only because of politics; the technology is not insurmountable if we (humanity) decided we wanted such a system.
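To make "a mathematical assurance that you are you" concrete: such proofs are typically built on digital signatures, where a secret is never revealed, only demonstrated. Below is a toy sketch using a Lamport one-time signature, chosen only because it needs nothing but a hash function; real identity providers use standard schemes such as RSA or Ed25519, and the message contents here are purely illustrative.

```python
import hashlib
import secrets

BITS = 256  # we sign the SHA-256 digest of a message, bit by bit

def keygen():
    # Private key: two random secrets per message bit.
    # Public key: the hashes of those secrets, safe to publish anywhere.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(BITS)]
    pk = [(hashlib.sha256(a).digest(), hashlib.sha256(b).digest()) for a, b in sk]
    return sk, pk

def _bits(message):
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big")
    return [(digest >> i) & 1 for i in range(BITS)]

def sign(message, sk):
    # Reveal exactly one secret per bit of the digest;
    # the rest of the private material stays hidden.
    return [pair[bit] for pair, bit in zip(sk, _bits(message))]

def verify(message, signature, pk):
    # Anyone holding the public key can check the claim without
    # learning anything that would let them forge a different one.
    return all(hashlib.sha256(s).digest() == pair[bit]
               for s, pair, bit in zip(signature, pk, _bits(message)))
```

The airport plays the role of `verify`: it never sees a password, only a claim that either checks out against a published key or does not.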

Additionally, because this would be legible to the state, it is a system which could easily be used to identify objects (computing ones, at least) to the state – the first step in granting any sort of rights or recognition to anything or anyone. What is even more interesting is that this could be deployed using the blockchain technology mentioned earlier, so that these assurances can be both stored and verified by disinterested third parties (whose interest is in the validity of the stored information).

A centralized version of this already exists, on a smaller scale, in Estonia[19]. There, citizens identify themselves to the government online via a "smartcard", which provides the government a similar mathematical proof that the person is who they claim to be. In Estonia, these cards and proofs are used to file taxes, to vote, and for other government ↔ citizen interactions. They are also centralized: a government agency is responsible for keeping track of all the people and cards, and for running the infrastructure that facilitates this. The useful observation is that a decentralized version could reasonably be created using a blockchain instead of a government agency, disintermediating the government from one of its own responsibilities (identifying its people). With a blockchain, the people would be their own identity authority. It also allows agility between systems, because blockchains aren't singular (the blockchain is the technology underlying Bitcoin, and many different "alt-coins", or alternative bitcoins, exist).

It would be a great way to add convenience to our lives, while extending to the billions currently joining the internet the protections we have refined over the past forty years of building and running it. Just as many developing nations are skipping desktop computers and physical banks (jumping straight to smartphones and online money systems), so too could we facilitate this sort of jump in identification technology. It would go a long way towards resolving the major issues in tech[20] around adding the new billions coming online right now – not least that many people have multiple names, multiple notations for those names, or names that are unpronounceable or unwritable in the systems we build.

A system which covers part of this is named Ethereum[21]. It targets "smart contracts": digital agreements between two parties (human or machine) that can be automatically fulfilled and marked as completed (services delivered, or what have you) by disinterested third parties. No thing, or person, party to the agreement need have any role in verifying the fulfillment of the contract. This technology is related to blockchains (it spawned from them, historically). One of the first applications of this concept is a social network named Synereo[22], which implements an economic model based on collective attention (and where you can actually cash out that attention).
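As a sketch of the idea only (this is not Ethereum's actual contract machinery, which runs compiled contract code on a shared virtual machine): a smart contract is just code that holds value and releases it when agreed conditions are confirmed by parties other than the buyer and seller. The names and quorum rule below are hypothetical.

```python
class EscrowContract:
    """Toy smart contract: funds release only once enough
    disinterested verifiers attest that the service was delivered."""

    def __init__(self, buyer, seller, amount, verifiers, quorum=2):
        self.buyer = buyer
        self.seller = seller
        self.amount = amount            # value locked in the contract
        self.verifiers = set(verifiers) # pre-agreed third parties
        self.quorum = quorum
        self.attestations = set()
        self.paid = False

    def attest_delivery(self, verifier):
        # Only authorized third parties may attest; neither the buyer
        # nor the seller has any role in verifying fulfillment.
        if verifier not in self.verifiers:
            raise ValueError("not an authorized verifier")
        self.attestations.add(verifier)
        if len(self.attestations) >= self.quorum and not self.paid:
            self.paid = True
            return f"released {self.amount} to {self.seller}"
        return "awaiting quorum"
```

Once deployed on a blockchain, neither party can quietly rewrite these rules; the network enforces them.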


This change would be rather large, and is only one step among the many necessary to shift our world-views to match the realities of our technology. Perhaps the point of primary importance is the one already noted: this transition is legible to the state. Identity would become as malleable in computing and statehood as it is in reality, and would be represented properly as such. The work this transition would take would also be a great first step towards considering nature worthy of rights (though some nation-states already grant limited rights to nature). But more important than either of those points, we would re-level (as in flatten) much of our political and social discourse by default. The new way of thinking about the world and our place in it does that on its own. We can use the flatness of the internet to flatten the power relations of anything as it comes online, as we bring "legacy" objects into the digital world.

The adaptations states would make to deal with this new identity system would eventually render them unrecognizable from the present vantage. We could have true digital democracy and easy identity, with less (or no) politics around who can be identified (see: immigrants and their lack of rights). Part of this is happening now: the Ars Technica article referenced above about Estonia's ID system notes that Estonia has launched a program to issue Estonian IDs to anyone, anywhere. You don't get citizenship, but you (or your business) get the mathematical identity assurances present in the system they've designed. This is a first step in deterritorializing the state while disintermediating its primary functions to society. That's not to say that states will go away – nothing is further from what I'm suggesting – but they will change significantly.

Some of the other interesting effects on contemporary ideas are the changes this would make to human-rights discourses. Human rights are essentially a harm-reduction system for the violence of the state, and in actuality almost nobody who is not rich, propertied, or famous has them. This transition may also dislodge the staid discussion of natural rights – rights historically held only by landed or monied white men. This transition is no panacea, but it would reduce much of the harm, in many areas, that comes from the state.

One direct change in how we govern ourselves could come from something like IBM's ADEPT (whose code has not been released publicly, so this is based only on IBM's public documentation), adapted to form a decision-making layer for the internet. This very quickly begins to look like an object-oriented governance system. Because it would be based on a blockchain (or a future similar technology), there would not necessarily be a state to determine the validity of any decision – in a blockchain, the whole network of computers comes to consensus about a decision's validity.
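A minimal sketch of that consensus step (hypothetical, and far simpler than any production protocol such as Bitcoin's proof-of-work): each node independently checks a proposed decision against its own copy of the rules, and the network accepts the decision only if a sufficient supermajority of nodes agrees. The validator rules and proposal fields below are invented for illustration.

```python
def reach_consensus(nodes, proposal, threshold=0.75):
    """Each node votes by applying its own validation rule to the
    proposal; the decision stands only if the approving share of
    nodes exceeds the threshold. No single authority decides."""
    votes = [node(proposal) for node in nodes]
    approvals = sum(1 for v in votes if v)
    return approvals / len(nodes) > threshold

# Example: appliances deciding whether a detergent purchase is valid.
# Each validator is a rule, not a ruler.
validators = [
    lambda p: p["amount"] <= p["budget"],              # within budget?
    lambda p: p["item"] == "detergent",                # expected item?
    lambda p: p["requested_by"] == "washing-machine",  # right device?
]
```

Replacing "the state says so" with "enough independent checks passed" is the whole design move: validity becomes a property the network computes, not one an authority declares.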

One additional, final side effect of this philosophic shift is that we would make great strides towards no longer organizing our societies, politics, and governance structures around those who have access and those who do not. There are socioeconomic, racial, ethnic, and technological facets to this currently, and they all end in people being included or excluded based on accidents of birth or location. This is not to say that none of that would persist, but with a change in our frame of reference, I posit that such cases would indeed be "bad apples", no longer the backbone of the western world.


This is the start of a discussion, not the end of one, but we must begin thinking about what the future will entail. It behooves us to consider which technologies we want to develop, and why. If we don't, the current mainstream path – where the military does much of the R&D for technology – will end in robotic enforcers and flying robot assassins, looking rather like the dystopias outlined in science fiction. These technologies will change everything in the world, and a few things outside it as well. I implore everyone to think about how they think about this future.

Literature Review

Beginning in 2006 with Quentin Meillassoux's After Finitude, Speculative Realism as a philosophy was born. Translated into English by Ray Brassier in 2008 (and examined at length in Graham Harman's Quentin Meillassoux: Philosophy in the Making), this work set into motion a concerted attempt to reject a core part of the philosophies embedded in western society today. Most specifically, it rejects the prominence and privileging of human experience and knowledge over any other object's experience or knowledge. It also rejects the idea, traced to Kant and known as correlationism, that we only ever have access to the correlation between thinking and being, never to either in isolation. The result is a so-called flat ontology of objects, where all things are truly and ontologically (existentially) equal. Chairs, iguanas, and the color purple exist independent of the human capacity to observe them. It is a categorical rejection of the subject-object relationship which plagues human societies today[23].

The philosophic underpinnings for an object-oriented governance are being developed as you read this. Anthropologists will soon throw out the concept of scale[24], because it is flawed in the same way the concepts of Nature[25] and Matter are flawed: they are anthropocentric, untenably defined, framed in relation to a subject(ive human) existence.

I fully expect the concept of governance to be significantly changed by this process, but I will keep using the term. A politics based on an object-oriented ontology remains to be developed, because objects will always relate to each other in some fashion: they must come together to create societies, to make decisions, and to share resources. The significant part of this is to redefine governance such that it no longer implies a top-down system of social, political, or economic control.

Transition technologies are already in place. Much of the literature written before Speculative Realism was developed starts in the 1980s, focusing on corporate, organizational, and "good" governance, modeled around networked cybernetics[26]. Much of it used and worked with ideas present in Latour's developments surrounding what is now called Actor-Network Theory. These ideas are extremely good for analyzing networks (and networked societies) from the current correlationist frame of reference, but they lend only analysis tools, not building blocks, for an object-oriented future.

They are indeed stopgap transition technologies, though, and as the literature moves into the 1990s, the focus progresses to networked organizational structures and their relations to industry and government[27]. Rhodes specifically covers governance in the context of a "socio-cybernetic" system, an understanding which informs the language used to describe these systems later. This model of "governance" is also the one used to govern the internet: there is a centralized policy statement, but public and private groups come together to negotiate the policy, and more-local organizations and individuals are responsible for implementing it (best effort). It is not compulsory in the sense of force being used to ensure compliance; it is a voluntary, representative panel of people who command respect but hold no outright authority, at least in any classic sense.

By the 2000s and more recently, the literature has synchronized with research into complexity theory and better-understood models of networks (neural and otherwise). Cybernetics research, previously sidelined as interesting but fruitless, has been given new credence[28]. The body of research on cybernetic governance conducted before 2008 (and by researchers who had not read any of the aforementioned speculative realist work) provides a useful base of theory and information to reconceptualize using an object-oriented ontology. The intent is to strip the anthropocentric parts from this material and recontextualize what remains within a universe composed solely of objects. The result will be a system of governance whose structure is flat like that of the internet, and therefore suitable for governing the internet. But the internet here is not merely an analogy: the internet is itself an object-oriented ontology, because it literally is one. As it merges more completely with every part of our lives, governance of that creeping reality will become crucial. This work sets out a framework with which to do so.
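The shared, append-only record that underpins these transition technologies can be sketched in miniature. The snippet below is a toy illustration only, not any real blockchain implementation: the class name, entry texts, and "genesis" seed are invented for the example, and simple hash chaining stands in for the financial cost (proof-of-work) that actually guards the integrity of the "pile of papers" described earlier.

```python
import hashlib

class SharedLedger:
    """A toy append-only record: entries can be read in order or
    appended, never edited, mirroring the 'pile of papers' analogy."""

    def __init__(self):
        self.entries = []  # list of (data, hash) pairs

    def append(self, data: str) -> str:
        # Each entry's hash covers the previous entry's hash, so tampering
        # with an earlier "sheet" invalidates every sheet after it.
        prev_hash = self.entries[-1][1] if self.entries else "genesis"
        entry_hash = hashlib.sha256((prev_hash + data).encode()).hexdigest()
        self.entries.append((data, entry_hash))
        return entry_hash

    def verify(self) -> bool:
        # Any peer holding a replica can check that the copy is faithful.
        prev_hash = "genesis"
        for data, entry_hash in self.entries:
            if hashlib.sha256((prev_hash + data).encode()).hexdigest() != entry_hash:
                return False
            prev_hash = entry_hash
        return True

# Two hypothetical devices append their "memories" to the shared record.
ledger = SharedLedger()
ledger.append("thermostat: set to 20C")
ledger.append("door lock: secured")
print(ledger.verify())  # True
```

Because every replica can run the same verification, no central authority is needed to vouch for the record; integrity is a property any participating object can check for itself, which is exactly the flat, non-compulsory structure the governance model above requires.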


Birnbaum, Robert. "The Cybernetic Institution: Toward an Integration of Governance Theories." Higher Education 18.2 (1989): 239-53. Web. [link]

Bogost, Ian. Alien Phenomenology, Or, What It's like to Be a Thing. Minneapolis: U of Minnesota, 2012. Print. [link]

Bryant, Levi R. The Democracy of Objects. Ann Arbor: Open Humanities, 2011. Print. [link]

Davies, Jim, Tomasz Janowski, Adegboyega Ojo, and Aadya Shukla. "Technological Foundations of Electronic Governance." Proceedings of the 1st International Conference on Theory and Practice of Electronic Governance (2007): 5-11. Web. [link]

Marston, Sallie A., John Paul Jones, and Keith Woodward. "Human Geography without Scale." Transactions of the Institute of British Geographers 30.4 (2005): 416-32. Web. [link]

Morton, Timothy. "Here Comes Everything: The Promise of Object-Oriented Ontology." Qui Parle 19.2 (2011): 163-90. Web. [link]

Rhodes, R. A. W. "The New Governance: Governing without Government." Political Studies 44.4 (1996): 652-67. Web. [link]

Shaw, Ian G R, and Katharine Meehan. "Force-full: Power, Politics and Object-oriented Philosophy." Area 45.2 (2013): 216-22. Web. [link]

Stokes, Paul A. "Organizational Cybernetics – The next Phase." Journal of Organisational Transformation & Social Change 8.1 (2011): 7-18. Web. [link]


1: link - Cramming more components onto integrated circuits

2: link - Android Wear's biggest update ever takes aim at the Apple Watch

3: link - IBM Reveals Proof of Concept for Blockchain-Powered Internet of Things

4: link - Parrot - Flower Power

5: link - IBM Reveals Proof of Concept for Blockchain-Powered Internet of Things

6: link - BuildingOS

7: I eschew the debate within artificial intelligence circles about the definition of this word. I mean it in the dictionary sense – awareness or notice of the environment around oneself.

8: link - Lidar

9: link - RFID

10: link - NFC

11: link - Foursquare

12: link - Samsung and Samsonite Join Forces to Develop Luggage of the Future

13: link - Nest

14: link - Tor Project

15: link - Gartner Says Worldwide Information Security Spending Will Grow Almost 8 Percent in 2014 as Organizations Become More Threat-Aware

16: link - FireEye First Cyber Security Company Awarded SAFETY Act Certifications by Department of Homeland Security

17: Cyber terrorism, at the time of writing, is not a defined term. It's a neologism attempting to combine terrorism, as we already understand it, with the internet – but using the internet as the attack platform instead of direct physical violence.

18: link - Denial-of-Service: The Estonian Cyberwar and Its Implications for U.S. National Security

19: link - Estonia Wants to Give Us All Digital ID Cards, Make Us "e-residents"

20: link - Falsehoods Programmers Believe About Names

21: link - ethereum

22: link - Synereo

23: link - Bryant, The Democracy of Objects, 2011.

24: link - Marston, Jones, Woodward, Human Geography Without Scale, 2005.

25: link - Morton, Here Comes Everything: The Promise of Object-Oriented Ontology, 2011.

26: link - Birnbaum, The Cybernetic Institution: Toward an Integration of Governance Theories, 1989.

27: link - Rhodes, The New Governance: Governing Without Government, 1996.

28: link - Stokes, Organizational Cybernetics – The next phase, 2011.