Monday, April 6, 2020

Man a Machine

The post title is the English translation of La Mettrie's 1747 work L'Homme Machine, an amazingly prescient production from the midst of the Enlightenment. That period was followed, of course, by a Romantic reaction, and we see some of its Gothic obsessions in Mary Shelley's Frankenstein, a book that also "boldly concluded" that matter and energy are enough to produce human life, albeit with tragic consequences. And then came the robots, which reversed the equation -- not man is a machine, but a machine is a man -- which is, oddly, not quite the same. Karel Čapek's R.U.R. was surprisingly sympathetic to the reversal; Fritz Lang's Metropolis (a machine is a woman), not so much. More recently, taking a dim view of the human-machine interaction, there are the Terminator series and The Matrix; a more benign, or perhaps mixed, view can be seen in movies like Her and Ex Machina, or in the compelling TV series Westworld.

After La Mettrie, these examples are all fictional, of course, and are meant only to illustrate the shifting public interest in the relationship between machines and humans. In non-fictional matters, though, the non-mechanistic view of human nature is being steadily encroached upon from two directions: neurophysiology, which keeps finding more of the actual mechanisms behind mental phenomena of all kinds, and AI research, which keeps duplicating in machines more of the phenomena we take to be characteristic of minds. And this process seems of a piece with the long-standing retreat of the notion that human beings occupy a special place in the world, a retreat running from Copernicus through Darwin and beyond.

The interesting aspect of it all isn't that machines can look like humans, nor that they can be "artificially intelligent" -- it's that it is becoming increasingly plausible that they could be artificially conscious, if you like, or at least have "artificial" agency, the capacity for autonomous behavior. This is the development that makes the issue clear and stark, and it does so in two forms. First, if such machines are "just like us", don't they have a moral status too, just as we do, rather than being merely complicated mechanical tools? And second, if we're "just like a machine", what happens to our own sense of agency, our sense that we -- whatever "we" means -- are able to determine our own behavior?

From those two questions, let's extract two provisional ideas:
  • One, if we can accept attributing moral agency to machines, as it seems we can, then such agency must be compatible with a mechanistic view of the world. (As a side note, such a mechanistic view can include so-called "stochastic" mechanisms, such as biological cells, that exhibit random as well as determined behavior.)
  • And two, if that's the case, then human nature can be as much a part of a natural, physical, material world as any other nature, including that of machines of any sort, and can still have moral agency and responsibility as an essential characteristic.

How to make sense of such compatibility? Not by multiplying the number of distinct worlds, but by multiplying the number of distinct orientations to the world -- on the one hand, a practical, empirical orientation centered on the idea of cause, and on the other hand, a moral orientation centered on the idea of will or agency, the latter being pertinent to the requirements of a cultural social system.

