tbyfield on Thu, 28 Mar 2019 16:42:47 +0100 (CET)
Re: <nettime> rage against the machine
You wrote:
> In the case of the plane crash, it's just out in the open, like in the
> case of a massive stock market crash. The difference is only that in the
> case of the plane crash, the investigation is also out in the open,
> while in virtually all other cases, the investigation remains closed to
> outsiders, to the degree that there is even one.
Yes and no. In theory, plane crashes happen out in the open compared to other algorithmic catastrophes. In practice, the subsequent investigations have a very 'public secret' quality: vast expanses are cordoned off to be combed for every fragment, however minuscule; the wreckage is meticulously reconstructed in immense closed spaces; and forensic regimes — which tests are applied to what objects and why — are very opaque. And, last but not least, there is the holy grail of every plane crash, the flight recorder. Its pop name is itself a testament to the point I made earlier in this thread about how deeply cybernetics and aviation are intertwingled: the proverbial 'black box' of cybernetics became the actual *black box* of aviation. But, if anything, its logic was inverted: in cybernetics the phrase meant a system that can be understood only through its externally observable behavior, whereas in aviation it's the system that observes and records the plane's behavior.
Black boxes are needed because, unlike in car crashes, when planes crash it's best to assume that the operators won't survive. That's where the 'complexity' of your sweeping history comes in.
Goofy dreams of flying cars have been a staple of pop futurism since the 1950s at least, but until very recently those dreams were built on the basis of automobiles — and carried a lot of cultural freight associated with them, as if it were merely a matter of adding a third dimension to their mobility. But that dimension coincides with the axis of gravity: what goes up must come down. The idea that flying cars would be sold, owned, and maintained on an individual basis, like cars, implies that we'd soon start seeing the aerial equivalent of beat-up pickups flying around — another staple of sci-fi since the mid-'70s. It won't happen quite like that.
When cars crash the risks are mainly limited to the operators; when planes crash the risks are much more widespread — tons of debris scattered randomly and *literally* out of the blue. That kind of danger to the public would justify banning them, but of course that won't happen. Instead, the risks will be managed in ways you describe well: "massive computation to cope, not just to handle 'hardware flaws', but to make the world inhabitable, or to keep it inhabitable, for civilization to continue."
The various forms of 'autonomization' of driving we're seeing now are the beginnings of that transformation. It'll require fundamentally different relations between operators and vehicles in order to achieve what really matters: new relations between *vehicles*. So, for example, we're seeing semi-cooperative procedures and standards (like Zipcar), mass choreographic coordination (like Waymo), the precaritizing dissolution of 'ownership' (like Uber), GPS-based wayfinding and remora-sensors (everywhere), and the growing specter of remote control by police forces. None of these things is entirely new, but their computational integration is. And as these threads converge, we can begin to see a more likely future in which few if any of us own, maintain, or even 'drive' a car — we just summon one, tell it our destination, and 'the cloud' does the rest. Not because this realizes some naive dream of a 'frictionless' future, but because the risks of *real* friction anywhere above 50 meters off the ground are too serious. And, in exchange for giving up the autonomy associated with automobiles, we'll get to live.
That's why criticisms of the 'complexity' of increasingly automated and autonomized vehicles are a dead end, or at least limited to two dimensions. I liked it very much when you wrote that "the rise in complexity in itself is not a bad thing"; and, similarly, giving up autonomy is not in itself a *bad* thing. The question is where and how we draw the lines around autonomy. The fact that some cars will fly doesn't mean that every 'personal mobility device' — say, bicycles — needs to be micromanaged by a faceless computational state run amok. Yet that kind of massive, hysterical, categorical drive to control has been a central feature of the rising computational state for decades.
> The system that has worked for the last 40 years is reaching the limits
> of the complexity it can handle. The externalities produced by the
> radical reduction of the lived experience to price signals are coming
> back to haunt the system, which has no way of coping with it. The
> attempts to put a price on "bads", say in the case of cap'n'trade, have
> failed. And similarly, the attempts to save the climate are failing.
If a category muddles the difference between something that happens on the ground and 15,000 meters in the air, or blurs basic distinctions between a person and a multinational corporation, it's useless. Complexity does that, which is a good reason to start disassembling it. In *every* case, doing so will require historical 'excursions' into densely technical fields.
Cheers, Ted

#  distributed via <nettime>: no commercial use without permission
#  <nettime> is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: http://mx.kein.org/mailman/listinfo/nettime-l
#  archive: http://www.nettime.org contact: nettime@kein.org
#  @nettime_bot tweets mail w/ sender unless #ANON is in Subject: