Kuruvilla Pandikattu
Artificial intelligence has been fuel for science fiction since at least 1920, when the Czech writer Karel Čapek published his play, R.U.R., about a mutiny led by a throng of robots. Speculation about the future of intelligent machines has been pervasive, but recently has taken a more critical turn.
Artificial intelligence is already pervasive. It is embedded in Apple’s Siri and Amazon’s Alexa, assistants designed to answer questions. It powers the code that translates Facebook posts into multiple languages. It is part of the algorithm that allows Amazon to suggest products to specific users. The AI enmeshed in current technology is task-based, or “weak AI”: code written to help humans do specific jobs, using a machine as an intermediary. It is intelligent because it can improve how it performs its tasks by collecting data on its interactions, writes Ellen Duffer on the “Politics and Religion” website.
The sensationalizing of AI is not a response to weak AI. It stems, instead, from fear of “strong AI,” or what AI could someday become: artificial intelligence that is not task-based, but that replicates human intelligence in a machine.
This strong AI has not yet been achieved, but its arrival would require a rethinking of most of the qualities we associate with uniquely human life: consciousness, purpose, intelligence, the soul; in short, personhood. If a machine were to possess the ability to think like a human, or were able to make decisions autonomously, should it be considered a person, Duffer asks.
Religious communities have a significant stake in these questions. Various faiths hold strong convictions regarding creation and the soul. And what about issues like freedom, human dignity and the uniqueness of human beings?
“The worst-case scenario is that we have two worlds: the technological world and the religious world.” So says Stephen Garner, author of an article on religion and technology, “Image Bearing Cyborgs?” Discouraging discourse between the two communities, he says, would prevent religion from contributing a necessary perspective to technological development—one that, if included, would augment human life and ultimately benefit religion. “If we created artificial intelligence and in doing so we somehow diminished personhood or community or our essential humanity in doing it, then I would say that’s a bad thing.” But, he says, if we can create artificial intelligence in such a way that it allows people to live life more fully, it could bring them closer to God.
Currently, artificial intelligence is simply a tool for improving human experience. It can help us build cars, diagnose illnesses, and make financial decisions. It is easy to imagine a world in which our technology slowly becomes more and more intelligent, more and more self-aware. Strong AI, however, is by definition human-like in intelligence and ability. Its development, says McGrath, would force humans to reconsider how to interact appropriately with this technology: what rights, for example, should machines be afforded if their intelligence earns them a designation beyond that of mere tools? “Do we risk enslaving a sentient, self-aware entity, or do we say, ‘We’re going to do whatever it takes to make sure that that doesn’t happen even by accident’?” he asks.
Religion, together with ethics, has to be an important dialogue partner in fashioning strong AI.
kuru@jdv.edu.in