The Future (a conversation)

It all started so innocently:

Uh-oh, the Rise of the Machines Is for Realsies | VICE (www.vice.com): Rich Terrile, the NASA scientist who told us we were all Sims in a video game, fucked our brain folds again by explaining how human beings are outdated and will soon merge with machines to become a ro…

“Sam”: Interesting read.

“Frank”: you’ll probably like [this] then too:

https://www.youtube.com/watch?v=KQ35zNlyG-o

“Will”: I have conflicting feelings after watching this

“Frank”: do you want to elaborate?

“Will”: Though I thought the talk interesting and the speaker agreeable, my reaction was one of frustration at the apparent but unidentified fatalism and amorality embedded in his conclusions. Why should we all thoughtlessly accept the ongoing exponential march of technological advance? Especially when we see it so often backed and wielded by capitalist, authoritarian forces that implement it not to the advantage of humanity and the natural world, but to their harm and destruction. And when we can speculate on what use advanced, super-human artificial intelligence would be to those behind the levers of power for maximizing profit, maintaining control of the populace, and exploiting the environment (all different sides of the same effort), should we not be incredibly wary of developing such intelligence? Should such a thought not sound repugnant to anyone aware of the catastrophes that endanger our world and our species?

In part, I see in this talk a manifestation of a fallacy common to those enamored with technology, and sometimes with science more generally, which is to divorce it a priori from moral consideration, thinking its advances to be good in themselves regardless of their impact on conscious creatures. The absurdity of this view hardly needs to be argued for, I feel, but if it did, we should have to go back to some working definition of morality, which must be tied to the quality of being for those creatures that experience their being.

This talk also brings to mind a certain “everything is so fucked up and beyond repair that only the solutions that arrive as a result of exponential technological advance will stand a chance of successfully addressing the complexity and severity of the modern world’s problems” argument. But this assumes that that advance will be directed towards morally preferable solutions, and there is very little reason to believe as much when scientists refuse to recognize the moral relevance of their work, and when their research and experiments are often funded, and their innovations implemented, by corporate and/or government forces that have much more of an interest in profit, power, and production than in well-being, resilience, truth, or beauty.

“Frank”: Here’s another one that I think has good context:

I think the gap left in the talk by the A.I. researcher is for philosophers to fill. There is at least one A.I. specialist who thinks wars more devastating than any we’ve ever seen will be fought over the power of “Artilects.” Bill Joy has that famous article about how the future doesn’t need us, and Ted Kaczynski expressed your precise concern about putting the technologies of the coming era in the hands of deeply stratified, unstable, and increasingly authoritarian societies like the U.S.

I echoed some of these concerns here:

https://thinkahol.wordpress.com/2011/09/28/there-is-an-alternative/

https://thinkahol.wordpress.com/2011/09/28/why-were-all-going-to-die/

It increasingly seems that, without radical political change in the countries that dominate the world, especially the U.S., millions of people will probably die. Reminds me of Sut Jhally:

I also think that communication technologies are the most powerful; they are fundamental and implicitly democratizing. Gutenberg’s press presaged the nation-state, just as the internet has set the stage for systems more resilient, efficient, and rational than current governing structures. In addition, the end of centralized electrical power and food production will further undermine centralized political authority.

Steven Pinker describes the long-term historical development of what we would/could call moral behavior. Statistically, you and I are less likely to die at the hands of someone else than ever before in history.

We know more about what humans are than ever before in human history.

www.youtube.com/watch?v=mthDxnFXs9k

We know more about economics:

I could go on and on. We have so many solutions waiting in the wings. This seems inevitable:

http://www.ourfuture.org/blog-entry/when-change-not-enough-seven-steps-revolution

I agree with your working definition of morality and think Sam Harris gives a good summary:

www.youtube.com/watch?v=Hj9oB4zpHww

I’m not sure what you mean when you talk about the fallacy of divorcing technological development from moral concerns. I think it’s a mistake that’s not really being made. Technological development is happening, and it is a set of tools used by humans who have values. If anything, these tools are becoming more widely accessible more quickly, accelerating democratization.

I’m just so ready for Us Now, Open Source Democracy, The Desktop Regulatory State etc.

http://watch.usnowfilm.com/

http://desktopregulatorystate.wordpress.com/

I don’t know. All I know is that we need to be having these conversations with more people more often. I’ve been trying anyway:

https://thinkahol.wordpress.com/2012/04/05/open-source-all-the-things-free-energy-and-free-information-for-a-free-people/

“Will”: First of all, thank you for taking the time to compile these sources. Unfortunately, my limited internet access prevents me from doing as much research as I would like, especially in video form, as connections here are hit-and-miss and cannot always support YouTube streaming.

You say, “Technological development is happening, and it is a set of tools used by humans who have values.” That is, you seem to take technological development as a given, a necessary condition to which moral decisions apply only after the fact, depending on how its products are used by valuing human beings. I see no fundamental reason to accept that development as an already given or necessary condition. Technological development is a broad and dynamic process, but still one composed of individual human actions and decisions, which are thus themselves subject to moral evaluation and judgment. A researcher, and the politician or corporate administrator or university board member who funds her research, IN ADDITION to those who use the fruits of that research, are all making choices, setting into motion phenomena that will significantly impact the well-being of others; by our working definition, these are morally relevant choices. The chemists who developed Roundup are not immune to moral criticism by virtue of their being chemists, or because they were only doing their jobs, just as Monsanto is not immune to moral criticism for pursuing Roundup’s development even though, in doing so, it was only executing its legal corporate mandate to maximize its shareholders’ profits.

Perhaps what I’m trying to get at is that in today’s capitalist, globalized, and industrial society, I have personally borne witness to, and read far more about, the destructive and alienating effects of our technological prowess, and relatively (though not insignificantly) little about its potentially restorative, nourishing, or democratizing properties. To specify a little, I have become increasingly convinced that technological development does not exist as a more or less independent broad contour of modern society but as one intimately contextualized by the socio-politico-economic imperatives of pursuing profit, power, and material production and consumption. Unless these imperatives can be radically changed or abandoned, I see little reason to believe in the potential of technological development to deliver us from their resulting pathologies. On the contrary, it has a long history of expediting them.

All that being said, I have no doubt that you are better informed than I am about the potentially positive, especially democratizing, capacities of more recent and advanced technologies, and I would urge you to share your findings in this area with me. Unfortunately, I cannot simply link the works of those who have most influenced my ideas about all this, because those works are full-length books. The two that especially come to mind, though, are “The Unsettling of America: Culture and Agriculture” by Wendell Berry and “The Culture of Make Believe” by Derrick Jensen. I think you would find both immensely interesting and relevant to our present and ongoing inquiries.

“Frank”: Thank you for the book recommendations. I’ve been wanting to read both of them; glad to have good places to start.

I can’t go to websites like this without being amazed:

http://www.kurzweilai.net/

I think you’re largely spot on. But you say, “To specify a little, I have become increasingly convinced that technological development does not exist as a more or less independent broad contour of modern society but as one intimately contextualized by the socio-politico-economic imperatives of pursuing profit, power, and material production and consumption.” Technology is certainly situated in larger contexts; we just don’t have the same understanding of that context. I basically agree with everything you say, but I have a little more hope about how technology will change the power of individuals vis-à-vis the use of technology by larger institutional structures.

Whether or not the internet remains open could be a determining factor. Does propaganda remain effective enough, or does increased connectivity lead to an awake society, one no longer under hypnosis? Do you think science in general makes society better or worse? What’s the relationship between science and technology?

I do think that technological development is inevitable. And I don’t deny that individual actors can be held morally responsible. The range of possible societal structures is a function of information-processing power. Depending on how you want to define your terms, I think this makes technology as fundamental as the societal contexts that flesh it out; there’s an interdependence, I think. What will accelerating information-processing power do? Which effect is stronger: strengthening the ability of the 1% to control us, or strengthening our ability to collaborate and throw off parasitism? My bet has been that the benefits accrue to the former first and to the latter over the long term. Will it be too late? Who knows.

I was just reminded in the past few weeks that I could be safely called a Singularitarian. That discussion is part of an even larger context.