TIMES, TIME, AND HALF A TIME. A HISTORY OF THE NEW MILLENNIUM.

Comments on a cultural reality between past and future.

This blog describes Metatime in the Posthuman experience, drawn from Sir Isaac Newton's secret work on the future end of times, a tract in which he described Histories of Things to Come. His hidden papers on the occult were auctioned to two private buyers in 1936 at Sotheby's, but were not available for public research until the 1990s.



Showing posts with label Positivism.

Tuesday, October 22, 2019

What's Left Over? The Materialist Algorithm for Cognition


Image Source: Morten Tolboll.

This post continues my investigation of materialism and anti-materialism as competing responses to technology. My central argument is that right-wing and left-wing descriptions of politics and the economy are misleading and obsolete. Politics and economics are evolving to mirror tech-oriented materialism and anti-materialism. The worst of the former is leading to tyrannical political oppression. The worst of the latter is leading to an alienation from the mainstream consensus about reality.

Thus far in this blog series, I have focussed on materialism as a way of seeing the world, which is grounded in empiricism, scientific exploration, rationalism, secularism, and the associated economic mode of capitalist consumption. All of these aspects concentrate on humankind's five senses and how they can measure and experience the physical realm.

More radical forms of materialism reveal where this stance is headed. In 2013, Stephen J. Cowley and Frédéric Vallée-Tourangeau edited a collection of scholarly essays entitled Cognition beyond the Brain: Computation, Interactivity and Human Artifice. The editors explain that the notion that we are free to think inside our heads is a fairy tale. They argue that the thoughts we have inside our own heads as private expressions of personal existence, and as ego-controlled responses to the outside world, constitute a bedtime story we tell ourselves about our independence as individual beings.

'Thinking,' for these academics, is not an internalized activity expressing the cognitive power and freedom of a single, rational creature. Rather, as I suggested in my post, Who Writes Your Reality?, 'thinking' is a culturally-modulated experience. It is even possibly constructed from the outside in. That is, from this materialist standpoint, your brain is like a tabula rasa, upon which the outside world may write its programs as it wishes. The editors of Cognition beyond the Brain call old-fashioned notions of subjective 'thinking' a 'folk concept,' a culturally-shaped story we tell ourselves about what we are doing:
"Like all folk concepts, ‘thinking’ is a second-order construct used to ‘explain’ observations or, specifically, how action is—and should be—integrated with perception."
This is a radical departure from the earlier Postmodern deification of the subjective mind, wherein social objectivities were demolished and everyone's personal truth was considered sacrosanct.

Thursday, June 6, 2019

What's Left Over? The Rationalist-Materialists


A quotation from the 2014 collection The Blooming of Madness 51, by Florida poet Christopher Poindexter. Image Source: pinterest.

A simple way to understand the philosophical crisis raised by technology is to ask yourself the question: 'What's left over?' This is a shorthand I devised, and partly borrowed from the sci-fi writer, Philip K. Dick (1928-1982).

Dick predicted the impact of simulated realities on our consciousness. Aware that simulations would soon be indistinguishable from organic beings and authentic objects, he kept trying to hit bedrock, and finally concluded in a 1978 essay, later collected in I Hope I Shall Arrive Soon: "Reality is that which, when you stop believing in it, doesn't go away." This can be a maxim for testing your own views and those of others regarding the mind and its relationship to reality, especially when it comes to the meaning of creation (whether one sees God as Creator or humans as creators) and created objects like technology.

My previous post introduced the hypothesis that how people view technology may be grounded in rationalist-materialism, in materialism, or in anti-materialism. Today, I will start with the rationalist-materialists; two subsequent posts will discuss the materialists and the anti-materialists.

To define what I mean by those systems of thought, I asked 'What's left over?' after one removes complex narratives and beliefs about reality in each case. That is, what core attitudes are we talking about when everything else is stripped away? My answers vastly oversimplify different philosophies about the mind and matter, and avoid formal academic definitions; nevertheless, I hope they will clarify our current conundrum as technological simulacra become harder to control.

Tuesday, June 4, 2019

Believe Them or Not: The Red-Pilled Anti-Materialists


Notre-Dame Cathedral burned in Paris on 15 April 2019. Image Source: Ian Langsdon, EPA-EFE via MSN.

There are people on the Internet who are so red-pilled that they will shock today's greatest skeptics. They are more red-pilled than your average Black Pill. They are profoundly alienated from the mainstream, and most people would call them crazy and paranoid. Yet these rogues are significant because they could develop a future mainstream paradigm, if it isn't suppressed first. Red Pill leaders herald an online movement which will change the way we understand everything.

Red Pills are sometimes associated with right wing politics, but are not mainstream conservatives. In 2015, The Telegraph dismissed red-pilled rumours that late Tory PM of the UK, Edward Heath, was a child abuser. Image Source: The Telegraph.

What do they stand for? The mainstream condemns Red Pills as anti-social and anti-political: hackers, bloggers, vloggers, anarchists, libertarians, conservatives, the deplorables, the downtrodden, the impoverished, the religious, the deluded, the uneducated, the unintelligent. But are these anti-rationalists really anti-rational? Are they stupid, uncultured consumers of 'Fake News' and Deep Fakes, who will ruin democracy with their racist populist fascism? I would argue that scaremongering and stereotyping obscure the trend. What is actually unfolding is a battle between materialism (the mainstream) and anti-materialism (the red-pilled Underground). 

It is odd that Oxford, a hub in the world's power structure, is also producing leaders of the Underground. Two such figures are Dr. Joseph P. Farrell and Dr. Katherine Horton. Highly intelligent, they produce nuanced conspiracy theories that are as deeply researched as they are terrifying. If you look at them as advanced anti-materialists rather than as conspiracy theorists, their Oxford background makes more sense.

That said, their views are so incredible that they could be paid agents of disinformation, dispatched by the establishment to discredit Internet dissenters. And if not Farrell and Horton, then others with similar fringe opinions may be controlled opposition.

My post here offers only opening remarks on the shift which Horton and Farrell represent, to be analyzed in future posts. My aim in discussing these matters is not to support or deride Red Pills or their opponents. Rather, I mean to analyze the mainstream as a materialist movement and the Underground in terms of anti-materialism, without being sidetracked by inflammatory language which distorts how we usually consider these questions.

Stop 007 - Who I am (7 December 2016). Video Source: Youtube.

Wednesday, January 4, 2017

Big Data's Strategic Inflection Point


Image Source: RNZ.

Collection, surveillance, analysis, prediction: there are reasons why the battle between freedom and slavery will take place on the Internet. Only in the past decade did big data enter the headlines, because the necessary hardware and storage capacity became affordable for corporations. In addition, governmental and corporate data crunching capability improved to enable what panelists at Financier Worldwide call, "curation ... of enormous data sets" and "the ability to predict when a certain business-contextual event is about to happen, and then to adjust accordingly in an automated fashion."
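
To make the panelists' 'predict and adjust' idea concrete, here is a minimal sketch of such a pipeline, assuming a generic scikit-learn workflow; the features, the churn 'event,' the threshold, and the adjust_pricing() stub are all hypothetical illustrations, not any particular company's system.

```python
# Minimal, hypothetical "predict, then adjust" loop of the kind described above.
# Features, labels, threshold, and adjust_pricing() are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical records: [orders_last_week, support_tickets, days_since_login]
X_history = np.array([[120, 3, 1], [40, 9, 14], [95, 2, 2], [10, 15, 30]])
y_history = np.array([0, 1, 0, 1])  # 1 = the business-contextual event (e.g. churn)

model = LogisticRegression().fit(X_history, y_history)

def adjust_pricing(customer_id: int) -> None:
    """Placeholder for the automated business response."""
    print(f"Offering retention discount to customer {customer_id}")

# Score a new customer and respond automatically if the event looks likely.
new_customer = np.array([[30, 11, 21]])
event_probability = model.predict_proba(new_customer)[0, 1]
if event_probability > 0.5:
    adjust_pricing(customer_id=42)
```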



Few people read the fine print when they sign up for social media accounts, so they do not understand how others now own their personal identities and seek to decide their fates. Nor do they understand how the Internet of Things forms a network of physical objects around them to glean and mobilize information. From Radio New Zealand:
"I was on Facebook recently and I realised they were showing me a photo that wasn't already on my newsfeed and that I wasn't even tagged in, that had come from my camera roll."
In 2016, Edward Snowden stated that surveillance was about "social control," not terrorism. Certainly, companies such as Oracle (cloud database management), LexisNexis (legal and business risk management services), and Micron (semiconductor solutions) confirm Snowden's narrative (see my earlier posts on this topic here, here, here and here).

Image Source: Sputnik International.

A counter-surveillance movement arose to combat government and corporate intrusion. A talk from the 2016 hackers' Chaos Communication Congress in Hamburg, Germany, describes the problem:
"Today virtually everything we do is monitored in some way. The collection, analysis and utilization of digital information about our clicks, swipes, likes, purchases, movements, behaviors and interests have become part of everyday life. While individuals become increasingly transparent, companies take control of the recorded data."
Mozilla, developer of the Firefox browser, created Lightbeam so that you can see who is tracking you while you browse. Privacy Lab has made available online a 2016 book by Wolfie Christl and Sarah Spiekermann: Networks of Control: A Report on Corporate Surveillance, Digital Tracking, Big Data and Privacy (Hat tip and thanks: Janine Römer). The book explains how social control through big data actually works, and it is far more evil, insidious and Darwinian than one would imagine, because algorithms target individuals' socio-economic performance in life to create new kinds of discrimination. When you state what you are doing or thinking on Facebook or Twitter, when you surf the Web, when you buy things, travel, or read certain news stories, you are letting the world know how successful you are or are not, by other people's mechanized standards (a toy sketch after the excerpt and video below illustrates the mechanism):
"Today, a vast landscape of partially interlinked databases has emerged which serve to characterize each one of us. Whenever we use our smartphone, a laptop, an ATM or credit card, or our ‘smart’ TV sets detailed information is transmitted about our behaviors and movements to servers, which might be located at the other end of the world. A rapidly growing number of our interactions is monitored, analyzed and assessed by a network of machines and software algorithms that are operated by companies we have rarely ever heard of. Without our knowledge and hardly with our effectively informed consent, our individual strengths and weaknesses, interests, preferences, miseries, fortunes, illnesses, successes, secrets and – most importantly – purchasing power are surveyed. If we don’t score well, we are not treated as equal to our better peers. We are categorized, excluded and sometimes invisibly observed by an obscure network of machines for potential misconduct and without having any control over such practices.

While the media and special interest groups are aware of these developments for a while now, we believe that the full degree and scale of personal data collection, use and – in particular – abuse has not been scrutinized closely enough. This is the gap we want to close with the study presented in this book."
Corporate surveillance, digital tracking, big data and privacy: How thousands of companies are profiling, categorizing, rating and affecting the lives of billions. Talk by Wolfie Christl at CCC Congress (30 December 2016). Video Source: CCC-TV. Hat tip and thanks: Janine Römer.
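
To make concrete how the scoring described in Networks of Control can turn behavioural traces into discrimination, here is a deliberately crude sketch; every weight, category, and threshold is an invention for illustration, since real data-broker models are proprietary.

```python
# A crude illustration of behavioural scoring and invisible sorting.
# All weights and categories are invented; real broker models are proprietary.
def consumer_score(profile: dict) -> float:
    """Collapse a person's digital traces into a single 'worth' number."""
    score = 0.0
    score += 0.5 * profile.get("average_purchase_eur", 0)
    score -= 2.0 * profile.get("payday_loan_searches", 0)
    score += 10.0 if profile.get("postcode_affluent") else -10.0
    return score

def treatment(score: float) -> str:
    """The invisible sorting: who sees which offers, prices, or scrutiny."""
    if score > 50:
        return "premium offers, fast-track customer service"
    if score > 0:
        return "standard offers"
    return "higher prices, extra fraud checks, no credit offers"

alice = {"average_purchase_eur": 150, "payday_loan_searches": 0, "postcode_affluent": True}
bob = {"average_purchase_eur": 20, "payday_loan_searches": 4, "postcode_affluent": False}
for name, person in [("alice", alice), ("bob", bob)]:
    print(name, "->", treatment(consumer_score(person)))
```

The point of the toy example is not the arithmetic but the opacity: neither 'alice' nor 'bob' ever sees the weights, and neither consented to the sorting.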

Thus, the debate around big data focuses on post-2013, post-Snowden ideas: privacy or anonymity; predictive marketing; social control; totalitarianism. Yet Utopia or Dystopia recognizes that big data are so superhuman in quantity that they blur reality:
"Big Data; does it actually provide us with a useful map of reality, or instead drown us in mostly useless information? ... [D]oes Big Data actually make us safer? ... [H]ow is the truth to survive in a world where seemingly any organization or person can create their own version of reality. Doesn’t the lack of transparency by corporations or the government give rise to all sorts of conspiracy theories in such an atmosphere, and isn’t it ultimately futile ... for corporations and governments to try to shape all these newly enabled voices to its liking through spin and propaganda?"
Rather than merely feeding fears of exploitation and totalitarianism, this concern, that big data blur reality itself, revives far older contests between rationality and the unknowable.


Bodies of big data are so big that they become a kind of big mind, a combined collective consciousness and collective unconscious. To account for virtual reality by known means is impossible. Academic history as we knew it 15 years ago cannot now be written according to traditional methods; new methods must be developed. The body of data is: (a) too vast to be processed by a human; (b) unfixed, potentially subject to infinite alteration; and (c) stored in languages and on devices which rapidly become obsolete.

The same goes for the social sciences. Try to analyze the online kekkism in the recent American election and be prepared to confront something akin to magic, which defies current theories. The great modern experiment to rationalize the world breaks down in the face of anti-rationality, hacking, and Underground cryptics, whether through anonymity and encryption, or through mysterious forms of communication, behaviour and awareness which surpass knowledge and understanding. Big data erode reality, and this is why the ISIS publicity bureau and magazine can promote an apocalyptic eschatology unironically in this day and age. When you are operating in an environment where X zillion bits of data are being created every second, an apocalypse seems appropriate to some, and makes a kind of sense.

Digital Book World recently weighed the pros and cons of big data. Mathematician Cathy O'Neil - who joked that her New Year's resolutions included the plan to gain 10 pounds and start smoking, and who wrote the 2016 book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy - warned Digital Book World that algorithms are not the rational tools they seem to be. Instead, algorithms are human artifacts, whose technical workings are tuned against human measures such as sales numbers and social media traction. As algorithms manipulate big data to locate desired human results, they become, in O'Neil's estimation, new kinds of laws (a toy example after the excerpt shows how such a 'law' entrenches itself):
"When it comes to human activities, algorithms are expected to be models of objectivity, owing to their basis in mathematical formulae and reliance on enormous quantities of measured facts about a given general population, whether students or teachers, job applicants or criminal defendants. Cathy O’Neil makes the case that real-world mathematical models are anything but objective. ... [S]he asserts that big data WMDs are opaque, unaccountable and destructive and that they essentially act as unwritten and unpublished secret laws."
Despite these warnings, on 22 August 2016, Digital Book World remained optimistic about what the Panama Papers can tell us about deep learning. The lesson is not about offshore accounts, corruption, and a meshed network of legitimate and illegitimate interests spanning the globe. The Panama Papers show, according to DBW, that big data are a gold mine for profit, right at something called big data's strategic inflection point:
"[The Panama Papers] should ... serve as a stark reminder of the hidden value sitting locked in large amounts of unstructured data, such as notes, documents and emails.

In recent years, we’ve seen businesses in many industries solve the puzzle of big data and begin to extract the insights that can accelerate innovation and grow revenue. Healthcare, finance and retail are three that immediately come to mind that are at the forefront of using big data. But that is only the beginning.

Consider this: 90 percent of the world’s data only came into existence in the last two years. With more of our lives moving online and into the cloud, this remarkable growth of data will only accelerate, offering enormous possibilities to the businesses that can navigate these massive data collections.

The Panama Papers are a roadmap. It is now possible to collect and analyze data faster than ever before through the use of unparalleled computing power and machine learning methods, such as deep learning. Unstructured data, such as the text in the posts and messages of social media that most of the world uses, emails that were leaked or subpoenaed, laboratory notes or technical documentation, represent a massive opportunity for businesses that can harness it. ...

Andy Grove, retired CEO of Intel Corp., calls this moment in potential growth a 'strategic inflection point' — the point at which two major pathways temporarily coincide — between doing business as usual, or embracing and adapting to the new."
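
A minimal sketch of what 'extracting insight from unstructured data' means in practice, assuming a standard scikit-learn text pipeline; the four toy documents stand in for the millions of leaked or archived files an investigation or business would actually process.

```python
# Turn free text into structure a machine can search and cluster.
# The documents are invented; a real pipeline would ingest millions of files.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

documents = [
    "Transfer approved via shell company in nominee account",
    "Quarterly lab notes: enzyme assay repeated, results stable",
    "Please route the payment through the offshore holding entity",
    "Lab notebook entry: calibration of the assay equipment",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(documents)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, doc in zip(labels, documents):
    print(label, "-", doc)
```

Deep learning replaces the bag-of-words step with learned representations, but the workflow DBW describes is the same: unstructured text in, searchable structure out.
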
Digital Marketing Transit Map (25 June 2013). Click to enlarge. Image Source: Gartner.

Wednesday, May 4, 2016

Time and Politics 19: Predicting the Future, A Tricky Business


Professor Bruce Bueno de Mesquita spoke at LSE (Sheikh Zayed Theatre, New Academic Building) on 21 October 2009; the chair of the discussion was Professor Richard Steinberg. Video Source: Youtube.

Caption for the above video: "Bruce Bueno de Mesquita has been shaking the world of political science to its foundations with his predictions of world events. His systems based on game theory have an astonishing 90%+ ratio of accuracy and are frequently used to shape US foreign-policy decisions on issues such as the terrorist threat to America to the peace process in Northern Ireland. Considered by many to be the most important foreign-policy analyst there is, it is no surprise that he is regularly consulted by the CIA and US Department of Defence. In this lecture Professor Bueno de Mesquita will look at what is needed to reliably anticipate and even alter events in any situation involving negotiation in the shadow of the threat of coercion. He will demonstrate how to bring science to decision making in any situation from personal to professional."

In an earlier post, I discussed the work of NYU professor Bruce Bueno de Mesquita, who uses game theory to predict the future in international relations and advises businesses and the American government on major strategic policy concerns. In the 2009 talk above from the London School of Economics (at 53:00), he explained how he was hired to develop a fraud prediction model applied to banking regulations. At 55:32, he remarked:
"One of the best early warning indicators of fraud is that, relative to growth in market cap, compensation for senior management is under expectation for the size of the organization of the firm, not over. ... They're husbanding whatever resources they can to try to save the company."
Bueno de Mesquita has become widely known for his predictions on war, the economy and politics, although he remains dogged by popular fringe elements who compare his work to that of mystical prognosticators and fortune-tellers of the past, such as the Renaissance apothecary Nostradamus (1503-1566). In this lecture, he maintained a serious academic attitude while promoting his book, The Predictioneer's Game: Using the Logic of Brazen Self-Interest to See and Shape the Future. He has since published a book on how leaders exercise power (2012), and he issued a new edition of his book on war and peace in international politics (2013). He need not worry about becoming too popular; one Youtuber was unimpressed:
"The fiction that human beings are 'rational actors' has been totally discredited. Establishment academics with tenure have a hard time accepting real [world] facts that most people on the street intuitively understand without ... study. Anyone who claims 90% prediction accuracy in working with any complex system - who has not made themselves a billionaire with such gifts - is a con man."
Two questions put to Bueno de Mesquita at the end of his talk suggested that the Youtuber's remark had some weight. Random events, as well as anti-rational or irrational impulses, fall outside the professor's model. This is the 10 per cent range of human behaviour extending into the future, where game theory meets chaos theory meets randomness.

At 56:40, a member of the audience asked about Nassim Taleb's The Black Swan: The Impact of the Highly Improbable (2007; 2nd ed. 2010). To paraphrase, the person asked if random, chance occurrences could drastically affect statistical models (Bueno de Mesquita corrected him: game theory models) due to unique expressions of human nature.
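
The Black Swan question can be put in miniature with a simulation: a forecast that is wrong only 10 per cent of the time behaves very differently depending on whether that 10 per cent is mild noise or heavy-tailed shocks. The distributions and scales below are illustrative assumptions, not a model of any real forecaster.

```python
# Thin-tailed versus heavy-tailed "surprises" in a forecast that is 90% routine.
# Distributions and scales are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

surprise = rng.random(n) < 0.10            # the 10% of cases outside the model
mild = rng.normal(0, 1, n)                 # routine forecast error
wild = rng.pareto(1.5, n) * 10             # heavy-tailed, Black Swan-like error

error_thin = np.where(surprise, mild * 3, mild)
error_fat = np.where(surprise, wild, mild)

for name, err in [("thin-tailed surprises", error_thin), ("heavy-tailed surprises", error_fat)]:
    print(f"{name}: mean abs error {np.abs(err).mean():.2f}, worst case {np.abs(err).max():.1f}")
```

In the thin-tailed world a 90 per cent hit rate is reassuring; in the heavy-tailed world the rare misses dominate everything else, which is Taleb's point.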

At 1:10:00, another member of the audience asked about Frederic Vester's (1925-2003) sensitivity model, which applied game theory to biology and behavioural ecology, and produced results similar to Bueno de Mesquita's application of game theory to economics and politics. This similarity implies that organic systems mirror human systems in their predictability and unpredictability.

Both questions relate to the nature and impact of the Internet, especially as it is redesigned to endure and become a lasting edifice. Is the Web a techno-organic entity which reflects the rational and irrational impulses of its users, and to what degree? Can we describe peer-to-peer technological environments as 'ecosystems,' and if they are organic, to what degree are they chaotic and unpredictable? Or are the computer systems and technical designs of the Web and other peer-to-peer technologies, including cryptocurrencies, shaping the way we behave and think inside virtual realities? Are we driving the car or is the car driving us? This is a concern as Big Data analysts flock to predict, manipulate and control consumers' behaviours and voters' choices. In future posts, I will consider how these theories of predictability relate to decentralized behavioural psychology and the psychodynamics of peer-to-peer technologies.

See all my posts on Time and Politics.
See all my posts on Cryptocurrencies.
See all my posts on the Permanent Web.

Tuesday, April 5, 2016

Awaken the Amnesiacs 6: Mona Lisa's Trump Card


The famous Mona Lisa or La Gioconda (1503-1506), 'the light-hearted one,' a play on her married name, Giocondo; Lisa sits between two columns, with only their bases barely visible. Image Source: Wiki.

In an earlier post, I argued that scientists and technologists ironically inspire the primal and anti-rational because they are transforming life, breaching boundaries, and not always weighing long term consequences of their innovations. To understand that process, one must analyze it with ideas from the arts and humanities. With regard to the impact of the Internet, part of the answer comes from visual artists, who are preoccupied with how we see the world and how the world sees us. In my previous post in this series, I discussed Gerhard Richter's mirror paintings and their resemblance to computers as mirrors.

Perhaps the most famous symbolic depiction of the mirror looking at us is Leonardo da Vinci's Mona Lisa (1503-1506). Mona Lisa is smiling so mysteriously because the painting may not be about its enigmatic subject, Lisa del Giocondo (née Gherardini), at all. The symbolism in the Mona Lisa indicates that the portrait represents an archetypal mirror, which is actively watching you. Understand the Mona Lisa, and you start to understand our present circumstances on the Internet. The next few posts in this series describe how the symbolism of the Mona Lisa provides clues to our Millennial mentality. Given the uproar over Donald Trump's presidential candidacy, it is fitting that today's post also explains the meaning of the word 'trump' in Renaissance card games, and discusses why the Mona Lisa depicts a trump card and concept.

For a taste of medieval walled town life from Leonardo da Vinci's time, this is Pérouges, France, built in the 14th and 15th centuries around wine and weaving industries in the Ain River Valley, near Geneva; it is a seven hour drive by car to Florence, Italy. Video Source: Youtube.

The medieval town of Gradara is known for a castle which was finished in the 15th century, and would have been new in Leonardo da Vinci's youth. The castle features in the fifth canto of Dante's Divine Comedy, at the climax of the adulterous love story between Francesca da Rimini and Paolo Malatesta. Video Source: Youtube.

Florence in da Vinci's time, in a 1493 woodcut from Hartmann Schedel's Nuremberg Chronicle. Image Source: Wiki.

Interiors of Palazzo Davanzati, a restored medieval-Renaissance Florentine palace, built in the late 14th century. The palace reveals a claustrophobic, walls-within-walls mentality, with everything enclosed: towns, compounds, houses, inner houses, locked rooms, hidden chambers, and secret passages. Inhabitants sought ever greater security from outside conflicts, and their defences became more elaborate and complex. Image Sources: Walks Inside Italy and Sailko/Wiki and Museums in Florence.

Da Vinci painted the Mona Lisa at the turn of the 15th-to-16th centuries during the transition from the Middle Ages to the Renaissance. The Mona Lisa contains triumphal allegorical symbolism which was very popular at the time. These allegories were everywhere. They were a cultural shorthand for a whole range of accepted ideas about the way the world worked. At this time, noble families and guilds presided over life inside walled towns. Constantly in conflict to amass power and consolidate control, they revived the old Roman tradition of triumphal processions to celebrate victories in battles. The Renaissance, according to Joseph Manca, was "the age of the trionfo." Parades took on symbolic qualities to enable noble families to assert their historical continuity with the greatness of imperial Rome.

Triumphal Victory Parades

Thus, 'triumphs' were parades, which became associated in the late Middle Ages and early Renaissance with spiritual allegories. This transition from a real military victory march to a symbolic parade to celebrate certain social values is evident in Francesco Petrarch's poem, I Trionfi (1356-1374), written mainly at the Visconti Court in Milan. Petrarch's love poems describe his unrequited love for Laura de Noves (1310-1348). In I Trionfi, Petrarch (1304-1374) claims his love for Laura made him face ever more demanding physical, emotional, philosophical, and spiritual challenges. At each stage, a higher virtue or stronger allegorical figure triumphed and held a victory march. Peter Sadlon:
"In the first triumph, Love as Cupid conquers the gods and men (including Petrarch). In the second triumph, Chastity defeats Love, reflecting Laura's ladylike rejection of Petrarch's advances. In the third triumph, Death defeats Chastity (Laura was a victim of the Black Death). In the fourth, Fame defeats Death (her reputation lives after her). In the fifth triumph, Time defeats Fame, and finally (sixth), Eternity conquers Time (with the promise that Petrarch and the object of his love will be united at last in the afterlife)."
The poem, in Italian and English, is here. The victories of ever-higher allegorical figures are depicted in the illustrations below.

The triumph of Love.

The victory of Chastity.

The march of Death.

The parade of Fame.

The procession of Time.

The victory march of Eternity. Images Source: Peter Sadlon. ("The images shown here are from Bernard Quaritch's edition of Works of The Italian Engravers of the Fifteenth Century, with introduction by G. W. Reid. Reid denies the credit for the Petrarch prints to Nicoletto da Modena and supports the authorship of Fra Filippo Lippi.")

Tarot Card Trumps

Triumphant allegorical figures, or 'trumps,' were then included in the invention of the tarot deck, a kind of Game of Thrones card game for nobles. The earliest tarot cards look a lot like medieval illuminated manuscripts, but were adapted to woodblock printing, introduced in the 15th century. The Visconti di Modrone deck of tarot cards, which is officially dated around 1466, but may date from the 1440s, is one of the most prized possessions of Yale University's library.

The first established tarot card decks were created between the early 1400s and the early 16th century. The Florentine Minchiate deck of 97 cards, developed in the early 1500s when Leonardo da Vinci lived in the city, was used to play a game with a catalogue of hermetic archetypes. 'Minchiate' means 'nonsense' or 'bullshit'; so this was a 'fool's game,' a bit like chess, and a bit like early poker, with some allegorical lessons, astronomical archetypes, and fabulistic morals thrown in for good measure. The World of Playing Cards:
"The game, like other Tarot games, is a trick taking game in which points are scored by capturing certain cards and sets of cards. However, the deck has also been popular with card readers who see it as a variant of the esoteric tarot because of the allegorical and symbolical content. The Cavaliers [knights or jacks] are man/beast creatures. The Valets (or Pages) are male for clubs and swords, and female for cups and coins. Further features include the replacement of the Papess, Empress and Pope by the Western Emperor, the Eastern Emperor and the addition of the Grand Duke. Some scholars believe that these cards may have served as teaching aids, because several trump allegories (Virtues, Elements, Zodiac signs) belong to categories upon which classical learning was based at that time."

This is a 1995 Lo Scarabeo limited 'Etruria' edition reproduction of a 1725 version of the Florentine Minchiate tarot deck. There was also a 1996 mass-produced deck and a 2011 reprint. According to Tarot Heritage, the first mention of a 'tarot' deck comes from a 1440 Florentine diary. Video Source: Youtube.

Friday, February 26, 2016

Awaken the Amnesiacs 4: The New Millennium's Gothic Moment


BBC Four's show, The Art of Gothic: Britain's Midnight Hour (6 November 2014) explained how the 18th and 19th century explosion of science and industry inspired a Gothic counter-movement, a critical moral debate on the implications of unbridled rationalism. The BBC show highlighted the English painting, An Experiment on a Bird in the Air Pump (1768) by Joseph Wright of Derby (1734-1797), which portrayed the Gothic fear of scientists' experiments. Rationalists' destruction of spiritual concerns created horror. In the painting, the scientist is slowly pumping air out of a bell jar, in which a bird (symbolizing the Holy Spirit) is trapped. The scientist is suffocating the bird to demonstrate its dependence on oxygen. Image Source: Wiki.

The Awaken the Amnesiacs series on this blog explains why and how the human interaction with high technology is taking on spiritual dimensions. In today's post, I discuss the Gothic moment at which undue rationalism carries within itself the seeds of its own undoing. The rational, when overindulged, becomes anti-rational.

Any undertaking done in the name of 'cutting-edge change' will involve a confident, progressive agent. It is easy to criticize our forebears for their blind spots, and more difficult to see our own. In an earlier post, The Night of First Ages, I quoted an adaptation of Joseph Conrad's Heart of Darkness (1899) in the 2005 King Kong screenplay. The characters in King Kong are on a voyage to make a movie on a remote island. On the way, Jimmy, the ship's boy, reads Heart of Darkness, narrated by Conrad's protagonist, Charles Marlow. Marlow is on a journey to find an ivory trader, Kurtz, on the Congo River. Jimmy asks: "Why does Marlow keep going up the river? Why doesn't he turn back?"

The Heart of Darkness scene from King Kong (2005) © Universal Pictures depicts the wall between ego and id, or between the conscious-rational and unconscious-anti-rational parts of the human mind. Reproduced under Fair Use. Video Source: Youtube.

The ship's first mate remarks that Marlow keeps searching for Kurtz, without realizing how deep he is getting into the dark side of human nature, because Marlow believes he is civilized. 'Civilized' characters like Marlow and Kurtz are amnesiacs, who think their own savagery is no longer a threat, something from a long lost, bygone era of sticks and stones. In their hubris, they unconsciously become more savage as they push forward as self-appointed bearers of 'progress': "We could not understand because we were too far ... and could not remember ... because we were traveling in the night of first ages ... of those ages that are gone ... leaving hardly a sign, and no memories. We are accustomed to look ... upon the shackled form of a conquered monster ... but there ... there you could look at a thing monstrous and free."

Jimmy realizes, "It's not an adventure story ... is it Mr. Hayes?" To which the first mate responds, "No Jimmy, it's not." The novel-within-a-film metafiction in King Kong should be a message to its audience; as is the metahistorical fact that Heart of Darkness was based on a true story and the character Kurtz was based on a real person. The metafiction and metahistory of Heart of Darkness, embedded inside King Kong, reveal our amnesia. In blindly pursuing the singularity, why don't we turn back? Why don't we see that the history of the new Millennium is not an adventure story? It is because we expect the monster inside ourselves to be shackled. On the Internet and in research labs, the monster is not shackled.

Scientists and technologists have reached a Gothic moment because there is a gap between their practice and the way they are perceived in mass media as progressive actors. When they work with the scientific method, they live with uncertainty. They test hypotheses which, if supported by the evidence, are accepted until falsified or refined. At the same time, we live in a period when a cult of secular rationalism has supplanted mass religions to furnish the prevailing story of global civilization. Scientific method and rationality are equated with humanism, enlightenment, advanced education, and hyper-progress. Scientists and technologists occupy exalted social positions as perceived experts. In this capacity, they are less cautious. They are scarcely aware that when they become public gurus or market their findings with mythical labels, they tap into that part of secular rationalism which functions like a religion, rather than as a considered quantification of reality.

Despite recent triumphs and headlines, there are signs of amnesia among today's scientists, technologists, and technophiles. They press ahead as experts and progressive actors, even when their impact on society starts to become surreal, or when their followers become cultish. They do not stop to reconsider their position, even when, as I put it in this post, "a nearly-unstoppable faith in, and optimism about, rampant technology" gives rise to "a heart-tearing soul-sickness which emerges from that intermingling of the virtual and the real."

Scientists are frank about how much they do not and cannot know. The Guardian: "It is perhaps a sign of the health of modern science that the harbingers of so much doubt have met with such acclaim." The current situation is serious: physicists have reached the analytical limits of scientific inquiry for two reasons. They discovered that they can only observe and measure the tiny part of the universe which emits or absorbs light. When they do measure that tiny portion, they have confirmed that they change it at the sub-atomic level. We can only see a tiny portion of reality, and we change that reality when we look at it. Together, these issues trap us in a self-referential bubble of perception.

When physicists determined that 96 per cent of the universe is unobservable and exists in the forms of dark matter and dark energy, scientists at CERN and other labs set out to breach those limits. Particle physicists, who deal with measurable knowns, stand at the edge of the methodological line, with a high point being their 4 July 2012 discovery of the Higgs Boson or 'God particle.' In 2012, Russia Today interviewed Aleksey Filippenko, an astrophysicist and Professor of Astronomy at the University of California, Berkeley, who admitted that the 'God particle' raised more questions than answers:
"Let me start by saying that I am going to discuss the universe only from the perspective of a scientist, from an intellectual perspective. I am not going to be talking about whether there is spiritual God or a personal God or a purpose to the universe – these are questions that scientists can’t address. My own belief is that once you have the laws of physics the universe just keeps going on its own. And it could even be that the laws of physics are all that you need in order to get the universe to start from the very beginning – the “Big Bang”. ...

The Higgs boson helps to complete what is called the Standard Model of particle physics. There is a way we have to try to understand – electrons and quarks and neutrino and other kinds of particles. And Higgs boson was kind of a missing piece of the puzzle. Which, if it were not there, would mean that we would have to kind of start over. But the fact that it appears to have been found completes our picture of the Standard Model of particle physics. That is not to say that we understand everything. We don’t yet understand how gravity fits in with particle physics. Other than the fact that gravity pulls particles together. We also do not understand things like dark energy. The universe seems to be filled with a dark energy that is expanding the universe faster and faster – I helped to discover that. And the 2011 Nobel Prize in physics was given to the team leaders last year for that discovery.

So, we don’t understand the dark energy. There is also something called dark matter. It may or may not be some kind of fundamental particles that could be part of the Standard Model – we don’t yet understand. The Higgs boson is a very important discovery. But it does not solve all the questions that remain in physics. But it is a very important discovery. In a sense, it would have been more exciting as a scientist to me if it were not there because it would mean that we were not correct in our view of the universe. The surprises are more fun than the expected discoveries. ...

I don’t think scientists will ever truly understand creation because I don’t think we will know where the laws of physics came from. But given a universe, given a universe can arise I think some day we may well understand dark energy and dark matter and the other constituents of the universe. We only discovered dark energy 14 years ago – the accelerating expansion of the universe. So it is no surprise that we don’t yet fully understand dark energy. Dark matter was only conceived a few decades ago. So again, we don’t yet fully know what dark matter is. But we have not been investigating it for very long. I mean, in hundreds of years who knows what we will know. We might have a full inventory of what is in the universe and how everything behaves. So we will know a lot. But we won’t quite know why it all happened and why there is something other than nothing.

Why are there any mathematical laws of physics rather than just nothing at all? I don’t know whether we will ever understand that. Scientists are only well-aware of 4 per cent of the universe – that is, we understand pretty well the nature of 4 per cent of the universe. The stuff that is made of atoms. Ninety-six per cent of the universe is made out of dark matter and dark energy. And although we know they are present we don’t know what their detailed properties are or why they are there. Or what exactly is going on."
On the other side of the line stand theoretical physicists, who deal with unmeasurable unknowns using mathematics. Astrophysicists stand, somewhat unhappily, on both sides of the line. A 2011 book by Richard Panek, The 4 Per Cent Universe, emphasized that scientific measurements begin to break down at dark energy and dark matter. The conventional wisdom is that as discoveries, knowledge, and tools improve, the scientific method will expand and continue. But this underestimates the problem for scientific method, and for researchers in all disciplines who rely on it. It is not just a question of having insufficient tools to measure and quantify reality. It is a question of not being able to comprehend the findings. The Smithsonian: "'We have a complete inventory of the universe,' Sean Carroll, a California Institute of Technology cosmologist, has said, 'and it makes no sense.'"

The faux found-footage movie Apollo 18 (2011) explained why 'we've never gone back to the moon.' The film was a box office success. The real reasons for cancelled Apollo missions were political, technical and funding challenges. Image Source: Movie Blogger.

Just as physicists hit a wall, big science stumbled elsewhere as well. In one generation, the space age promised and failed to produce space station cities, moon pod villages, and colonists on Mars. Lunar settlements remain technical concepts, and China's 2013 Chang'e 3 lander, carrying the Yutu rover, made the first soft landing on the moon since 1976. On the Internet, lunar exploration has become the dismal stuff of conspiracy theory and cinematic legend. Nor did the atomic age solve the energy crisis, or bring us cold fusion. Instead, it vomited up the radioactive fallout of nuclear disasters and inexplicable dark matter. Geneticists were supposed to cure cancer and the common cold, not produce human-animal hybrid chimeras which scare the public. These generalizations do not account for the realities of research and funding; but they explain why mass sympathy and confidence in big science waned over the past generation.

Another day at Boston Dynamics. Image Source: RAND Corporation.

Where big science stumbled, big tech was supposed to bail us out. In the public mind, if not in reality, the torch passed in the 1990s from big science to big technology. Over the past fifteen years, interest shifted from space exploration and cosmology to computers, gadgets and the Internet. Technologists promised transhumanism, posthumanism, artificial intelligence, and the Singularity. This was why 'singularity' became the evangelical buzzword of technophiles between 2003 and 2012, and it remains fashionable, with its own cluster of personalities. Silicon Valley became one of the most powerful places on earth. High tech would launch us exponentially toward a gnostic, mind-opening, theophanic moment of transcendence.

Enter the computer programmers, designers and engineers. We would remake ourselves on the clock, rework our societies and the whole world, and finally manage resources efficiently. The World Wide Web, conceived by scientists at CERN, was rationalistic in its construction. Unfortunately, it is anti-rational in its execution; it exploits users' unconscious impulses and forms a giant collective mind. We did not get a robot-supported Valhalla. Instead, we got 9-million-hit Roomba cat videos, cyber-bullies, social-media-supported home invasions, remote-controlled brain-to-brain interfaces, and Boston Dynamics cheerfully preparing its Second Variety military hardware for World War III. The technological revolution began to give way to the surveillance revolution.

Thursday, July 16, 2015

Photo of the Day: Bathroom Subjectivity


Photo "found inside a stall at a gas station in Virginia." Image Source: Facebook.

You can read Bertrand Russell's The Problems of Philosophy (1912), chapter two, for free online here. According to Russell, the original problem from Descartes' "I think, therefore I am" confronts the distinction between subjective and objective, rather fitting for a bathroom wall in Virginia, or anywhere else for that matter:
"'I think, therefore I am,' he said (Cogito, ergo sum); and on the basis of this certainty he set to work to build up again the world of knowledge which his doubt had laid in ruins. By inventing the method of doubt, and by showing that subjective things are the most certain, Descartes performed a great service to philosophy, and one which makes him still useful to all students of the subject.

But some care is needed in using Descartes' argument. 'I think, therefore I am' says rather more than is strictly certain. It might seem as though we were quite sure of being the same person to-day as we were yesterday, and this is no doubt true in some sense. But the real Self is as hard to arrive at as the real table, and does not seem to have that absolute, convincing certainty that belongs to particular experiences. When I look at my table and see a certain brown colour, what is quite certain at once is not 'I am seeing a brown colour', but rather, 'a brown colour is being seen'. This of course involves something (or somebody) which (or who) sees the brown colour; but it does not of itself involve that more or less permanent person whom we call 'I'. So far as immediate certainty goes, it might be that the something which sees the brown colour is quite momentary, and not the same as the something which has some different experience the next moment.

Thus it is our particular thoughts and feelings that have primitive certainty. And this applies to dreams and hallucinations as well as to normal perceptions: when we dream or see a ghost, we certainly do have the sensations we think we have, but for various reasons it is held that no physical object corresponds to these sensations. Thus the certainty of our knowledge of our own experiences does not have to be limited in any way to allow for exceptional cases. Here, therefore, we have, for what it is worth, a solid basis from which to begin our pursuit of knowledge. The problem we have to consider is this: Granted that we are certain of our own sense-data, have we any reason for regarding them as signs of the existence of something else, which we can call the physical object?"
Philosophical discussions on how to verify truth and reality have never been more relevant than they are now, with malleable truth dominating the Internet and a growing confusion about the distinction between virtual reality and everyday 'real' reality.

Friday, February 20, 2015

The Lunar New Year's Conformity and Gnostic Alienation


Gnostic symbols. Image Source: MISTÉRIOS DE RENES-LE-CHÂTEAU.

How are the astrological symbols and messages of Asia's Lunar New Year relevant to Millennial life? The Year of the Goat or Sheep encourages conformity and pacifism, conciliation and acceptance of authority, but allows for creativity and a healing of past wrongs. These values oppose the confrontational and competitive alienation of the Millennial mind, which calls for leaders, not followers.

What is the origin of this confrontational alienation? This spring, Pacifica Graduate Institute in California is pondering the central myth of our time in a debate on Carl Jung's Red Book. I would argue that the Millennial myth derives from the Enlightenment era's hyper-rationalism, and an associated arrogance inflated by mechanistic advances in industry, science and technology.

Tuesday, March 18, 2014

Smartphone Brain Scanners


Image Source: emotiv.

In the never-ending quest to quantify reality and generate dubious data sets, we come to the invention of smartphone brain scanners. A real-time brain-mapping app built around the Emotiv EEG headset was created in 2011 by researchers at the Technical University of Denmark. Billed as "holding your brain in the palm of your hand," this app demonstrates how Millennial technology puts the cart before the horse, curiously reversing the normal understanding of how we function. Computers are creating the illusion that everything can be understood by being measured and instrumentalized before we consider any other factors: the hand drives the mind, rather than the other way around. This makes us unreflective puppets of concepts such as 'usability.' Make something or do something because we can; build apps around that capability; worry later about what it all means or what it will do to us.

The current applications of this technology relate to medical research. But the tech's inventors are confident that the app will be widely applied for other reasons. From Science Nordic:
Initially, the researchers will use the mobile system for research purposes, but one day this type of small brain scanner will perhaps be something that everyone has.
“There’s a trend at the moment to measure oneself more and more,” says Larsen. “An everyday example is the Runkeeper app for the iPhone, which measures the user’s running and walking trips. There will be more and more of this type of sensor, which you can wear on your body and connect to your mobile phone, so you can see the data that’s collected.”
Jakob Eg Larsen suggests where a mobile brain scanner can be useful: “If you’re about to doze off, you can actually see this from an EEG signal. If you’re driving a car or if you’re a long-distance lorry driver, then you could have this mobile equipment with you and you could have a system that warns you if you’re about to fall asleep.”
The tech is open source. The system originally ran on the Nokia N900; a subsequent variation of the design was made for the Samsung Galaxy Note 2. An iPhone app, Mynd, uses similar technology. Think of the potential for marketing! Below the jump, a demo video shows that several headsets can be worn in social situations, so that people can observe each other's brains as they interact with one another.
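
A rough sketch of the drowsiness idea Larsen describes: watch the EEG for a rising share of slow activity and warn the driver. The sampling rate, frequency bands, and threshold below are generic assumptions for a demo, not the DTU app's actual implementation.

```python
# Toy drowsiness monitor: ratio of slow (theta/alpha) to fast (beta) EEG power.
# Sampling rate, bands, and threshold are generic assumptions for illustration.
import numpy as np
from scipy.signal import welch

FS = 128  # Hz, a typical rate for consumer EEG headsets

def band_power(freqs, psd, lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return float(np.sum(psd[mask]))

def drowsiness_index(eeg_window: np.ndarray) -> float:
    """Slow-wave (4-12 Hz) power divided by fast (13-30 Hz) power."""
    freqs, psd = welch(eeg_window, fs=FS, nperseg=FS * 2)
    return band_power(freqs, psd, 4, 12) / band_power(freqs, psd, 13, 30)

# Simulated 4-second window; a real app would stream this from the headset.
t = np.arange(0, 4, 1 / FS)
eeg = 10 * np.sin(2 * np.pi * 10 * t) + np.random.randn(t.size)  # alpha-heavy signal

if drowsiness_index(eeg) > 3.0:  # threshold chosen arbitrarily for the demo
    print("Warning: driver may be about to doze off")
```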

Sunday, December 8, 2013

Detroit and Technology: Race by Any Other Name


Music, the soul of Detroit. The Supremes: Mary Wilson, Florence Ballard and Diana Ross (1964).  Image Source: Gilles Petard / Getty Images via HuffPo.

When Canadian philosopher and communications theorist Marshall McLuhan coined the phrase, "the medium is the message," he provided an elegant shorthand for our present and future realities.

Technology has revolutionized how we communicate. But does the change in how we communicate with technology really leave its stamp on what we communicate? Does the way we are using technology as a medium transform how we understand eternal questions, such as those of race, class, gender, religion, love, government or politics?

"Music is the missing link in Detroit's recovery." Eminem on the cover of Rolling Stone (no. 962, November 25, 2004). Image Source: allposters.

As an example, take race and Detroit. I could have picked any topic in relation to tech communications: class in London, language in Quebec, religion in Saudi Arabia, human rights in China, the economy in Singapore. But given that today is a day of prayer to honour Nelson Mandela in South Africa, I chose race. In 2012, John K. Bennett wrote for HuffPo:
It's no secret that southeastern Michigan for many decades stood as one of the most segregated and racially polarized communities in America.
There is no way here to explain how huge race is as an issue in this bankrupting city and how it has related to other factors which contributed to Detroit's collapse: the economy, deindustrialization, globalization, class, corruption, drugs, crime, political and institutional breakdown, policing and education.

But the focus of this post is not to get into all of that; instead, it asks whether technology, used as a communications medium, changes the understanding of that picture.

Tuesday, November 26, 2013

Crunching the Dataset of Immortality


The smaller Google barge. Image Source: Portland Tugboat LLC via Business Insider.

Google's expansions show how different aspects of the tech revolution and globalization can combine with radical results. In this case, an investment opportunity arises from a mash-up of social networking, genetics and data processing. The MSM focussed recently on Google barges off the coasts of California and Maine. These are supposedly luxury Google Glass showrooms made out of shipping containers.

But Google is preoccupied with more than gadgets. It has recently set up camp at the crossroads of data-crunching and life itself with its new venture, Calico. Researchers at Calico aim to extend the normal human lifespan by 20 to 100 years. An innovation like that would be worth a lot of money to the company which successfully develops it. Time comments that in 2012, "the regenerative medicine industry was estimated at $1.6 billion. Scientia Advisers, a life sciences consulting firm, estimates that the industry could reach $15 billion to $20 billion over the next 15 years."

With Calico, Google will combine all the bits of private information accumulated from a person's life with, according to Mashable
each person's genome — a partial genome can be mapped today for $99 via 23andMe (another Google investment), but many are hoping a full genome will cost as much in the next few years. Daniel Kraft, medicine and neuroscience chair of Singularity University, affirms that this will require people to relinquish some privacy, in hopes of helping others and themselves, but predicts it to be something many will do. "Lot of folks will be happy to share elements of health history," he says.
From Techland:
Calico ... will be run by Arthur Levinson, former CEO of biotech pioneer Genentech, who will also be an investor. Levinson, who began his career as a scientist and has a Ph.D. in biochemistry, plans to remain in his current roles as the chairman of the board of directors for both Genentech and Apple, a position he took over after its co-founder Steve Jobs died in 2011. In other words, the company behind YouTube and Google+ is gearing up to seriously attempt to extend the human life span. ...

Apple may have set the standard for surprise unveilings, but excepting a major new product every few years, these mostly qualify as short term. Google’s modus operandi, in comparison, is gonzo airdrops into deep “Wait, really?” territory. Last week Apple announced a new iPhone; what did you do this week, Google? Oh, we founded a company that might one day defeat death itself. ...
It’s a lot easier to take Google’s venture seriously if you live under the invisible dome over Silicon Valley, home to a worldview whereby, broadly speaking, there is no problem that can’t be addressed by the application of liberal amounts of technology and everything is solvable if you reduce it to data and then throw enough processing power at it.

The twist is that the technophiles are right, at least up to a point. Medicine is well on its way to becoming an information science: doctors and researchers are now able to harvest and mine massive quantities of data from patients. And Google is very, very good with large data sets.
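
A toy sketch of the data combination the Mashable and Techland excerpts describe: joining lifestyle traces with genetic records and looking for correlates of lifespan. Every column, value, and the single 'longevity marker' is invented for illustration; nothing here reflects Calico's actual methods.

```python
# Toy join of behavioural data with genetic records; all values are invented.
import pandas as pd

lifestyle = pd.DataFrame({
    "person_id": [1, 2, 3, 4],
    "daily_steps": [9000, 3000, 12000, 4000],
    "health_searches": [12, 2, 30, 1],      # e.g. counts mined from web activity
})

genomes = pd.DataFrame({
    "person_id": [1, 2, 3, 4],
    "longevity_marker": [1, 0, 1, 0],       # hypothetical variant flag
    "age_at_death": [91, 72, 88, 69],
})

combined = pd.merge(lifestyle, genomes, on="person_id")

# Naive first pass: which columns correlate with age at death?
print(combined.corr()["age_at_death"].sort_values(ascending=False))
```
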
Like China's one-child policy, this project will undoubtedly suffer from unintended and unforeseen consequences.