If Animals Have Rights, Should Robots?

We can think of ourselves as an animal’s peer—or its protector. What will robots decide about us?

By Nathan Heller | November 20, 2016

Harambe, a gorilla, was described as “smart,” “curious,” “courageous,” “magnificent.” But it wasn’t until last spring that Harambe became famous, too. On May 28th, a human boy, also curious and courageous, slipped through a fence at the Cincinnati Zoo and landed in the moat along the habitat that Harambe shared with two other gorillas. People at the fence above made whoops and cries and other noises of alarm. Harambe stood over the boy, as if to shield him from the hubbub, and then, grabbing one of his ankles, dragged him through the water like a doll across a playroom floor. For a moment, he took the child delicately by the waist and propped him on his legs, in a correct human stance. Then, as the whooping continued, he knocked the boy forward again, and dragged him halfway through the moat.

Harambe was a seventeen-year-old silverback, an animal of terrific strength. When zookeepers failed to lure him from the boy, a member of their Dangerous Animal Response Team shot the gorilla dead. The child was hospitalized briefly and released, declared to have no severe injuries.

Harambe, in Swahili, means “pulling together.” Yet the days following the death seemed to pull people apart. “We did not take shooting Harambe lightly, but that child’s life was in danger,” the zoo’s director, Thane Maynard, explained. Primatologists largely agreed, but some spectators were distraught. A Facebook group called Honoring Harambe appeared, featuring fan portraits, exchanges with the hashtag #JusticeforHarambe, and a meditation, “May We Always Remember Harambe’s Sacrifice. . . . R.I.P. Hero.” The post was backed with music.

As the details of the gorilla’s story gathered in the press, he was often depicted in a stylish wire-service shot, crouched with an arm over his right knee, brooding at the camera like Sean Connery in his virile years. “This beautiful gorilla lost his life because the boy’s parents did not keep a closer watch on the child,” a petition calling for a criminal investigation said. It received half a million signatures—several hundred thousand more, CNN noted, than a petition calling for the indictment of Tamir Rice’s shooters. People projected thoughts into Harambe’s mind. “Our tendency is to see our actions through human lenses,” a neuroscientist named Kurt Gray told the network as the frenzy peaked. “We can’t imagine what it’s like to actually be a gorilla. We can only imagine what it’s like to be us being a gorilla.”

This simple fact is responsible for centuries of ethical dispute. One Harambe activist might believe that killing a gorilla as a safeguard against losing human life is unjust due to our cognitive similarity: the way gorillas think is a lot like the way we think, so they merit a similar moral standing. Another might believe that gorillas get their standing from a cognitive dissimilarity: because of our advanced powers of reason, we are called to rise above the cat-eat-mouse game, to be special protectors of animals, from chickens to chimpanzees. (Both views also support untroubled omnivorism: we kill animals because we are but animals, or because our exceptionalism means that human interests win.) These beliefs, obviously opposed, mark our uncertainty about whether we’re rightful peers or masters among other entities with brains. “One does not meet oneself until one catches the reflection from an eye other than human,” the anthropologist and naturalist Loren Eiseley wrote. In confronting similarity and difference, we are forced to set the limits of our species’ moral reach.

Today, however, reckonings of that sort may come with a twist. In an automated world, the gaze that meets our own might not be organic at all. There’s a growing chance that it will belong to a robot: a new and ever more pervasive kind of independent mind. Traditionally, the serial abuse of Siri or violence toward driverless cars hasn’t stirred up Harambe-like alarm. But, if like-mindedness or mastery is our moral standard, why should artificial life with advanced brains and human guardianships be exempt? Until we can pinpoint animals’ claims on us, we won’t be clear about what we owe robots—or what they owe us.

A simple case may untangle some of these wires. Consider fish. Do they merit what D. H. Lawrence called humans’ “passionate, implicit morality”? Many people have a passionate, implicit response: No way, fillet. Jesus liked eating fish, it would seem; following his resurrection, he ate some, broiled. Few weekenders consider fly-fishing an expression of rage and depravity (quite the opposite), and sushi diners ordering kuromaguro are apt to feel pangs from their pocketbooks more than from their souls. It is not easy to love the life of a fish, in part because fish don’t seem very enamored of life themselves. What moral interest could they hold for us?

“What a Fish Knows: The Inner Lives of Our Underwater Cousins” (Scientific American/Farrar, Straus & Giroux) is Jonathan Balcombe’s exhaustively researched and elegantly written argument for the moral claims of ichthyofauna, and, to cut to the chase, he thinks that we owe them a lot. “When a fish takes notice of us, we enter the conscious world of another being,” Balcombe, the Humane Society’s director for animal sentience, writes. “Evidence indicates a range of emotions in at least some fishes, including fear, stress, playfulness, joy, and curiosity.” Balcombe’s wish for joy to the fishes (a plural he prefers to “fish,” the better to mark them as individuals) may seem eccentric to readers who look into the eyes of a sea bass and see nothing. But he suggests that such indifference reflects bias, because the experience of fish—and, by implication, the experience of many lower-order creatures—is nearer to ours than we might think.

Take fish pain. Several studies have suggested that it isn’t just a reflexive response, the way your hand pulls back involuntarily from a hot stove, but a version of the ouch! that hits you in your conscious brain. For this reason and others, Balcombe thinks that fish behavior is richer in intent than previously suspected. He touts the frillfin goby,
which memorizes the topography of its area as it swims around, and then, when the tide is low, uses that mental map to leap from one pool to the next. Tuskfish are adept at using tools (they carry clams around, for smashing on well-chosen rocks), while cleaner wrasses outperform chimpanzees on certain inductive-learning tests. Some fish even go against the herd. Not all salmon swim upstream, spawn, and die, we learn. A few turn around, swim back, and do it all again.

From there, it is a short dive to the possibility of fish psychology. Some stressed-out fish enjoy a massage, flocking to objects that rub their flanks until their cortisol levels drop. Male pufferfish show off by fanning elaborate geometric mandalas in the sand and decorating them, according to their taste, with shells. Balcombe reports that the female brown trout fakes the trout equivalent of orgasm. Nobody, probably least of all the male trout, is sure what this means.

Balcombe thinks the idea that fish are nothing like us arises out of prejudice: we can empathize with a hamster, which blinks and holds food in its little paws, but the fingerless, unblinking fish seems too “other.” Although fish brains are small, to assume that this means they are stupid is, as somebody picturesquely tells him, “like arguing that balloons cannot fly because they don’t have wings.” Balcombe overcompensates a bit, and his book is peppered with weird, anthropomorphizing anecdotes about people sharing special moments with their googly-eyed friends. But his point stands. If we count fish as our cognitive peers, they ought to be included in our circle of moral duty.

Quarrels come at boundary points. Should we consider it immoral to swat a mosquito? If these insects don’t deserve moral consideration, what’s the crucial quality they lack? A worthwhile new book by the Cornell law professors Sherry F. Colb and Michael C. Dorf, “Beating Hearts: Abortion and Animal Rights” (Columbia), explores the challenges of such border-marking. The authors point out that, oddly, there is little overlap between animal-rights supporters and pro-life supporters. Shouldn’t the rationale for not ending the lives of neurologically simpler animals, such as fish, share grounds with the rationale for not terminating embryos? Colb and Dorf are pro-choice vegans (“Our own journey to veganism began with the experience of sharing our lives with our dogs”), so, although they note the paradox, they do not think a double standard is in play.

The big difference, they argue, is “sentience.” Many animals have it; zygotes and embryos don’t. Colb and Dorf define sentience as “the ability to have subjective experiences,” which is a little tricky, because animal subjectivity is what’s hard for us to pin down. A famous paper called “What Is It Like to Be a Bat?,” by the philosopher Thomas Nagel, points out that even if humans were to start flying, eating bugs, and getting around by sonar they would not have a bat’s full experience, or the batty subjectivity that the creature had developed from birth. Colb and Dorf sometimes fall into such a trap. In one passage, they suggest that it doesn’t matter whether animals are aware of pain, because “the most searing pains render one incapable of understanding pain or anything else”—a very human read on the experience.

Animals, though, obviously interact with the world differently from the way that plants and random objects do. The grass hut does not care whether it is burned to ash or left intact. But the heretic on the pyre would really rather not be set aflame, and so, perhaps, would the pig on the spit. Colb and Dorf refer to this as having “interests,” a term that—not entirely to their satisfaction—often carries overtones of utilitarianism, the ethical school of thought based on the pursuit of the greatest good over all. Jeremy Bentham, its founder, mentioned animals in a resonant footnote to his “An Introduction to the Principles of Morals and Legislation” (1789):

The day may come, when the rest of the animal creation may acquire those rights which never could have been withholden from them but by the hand of tyranny. . . . The question is not, Can they reason? nor, Can they talk? but, Can they suffer?

If animals suffer, the philosopher Peter Singer noted in “Animal Liberation” (1975), shouldn’t we include them in the calculus of minimizing pain? Such an approach to peership has advantages: it establishes the moral claims of animals without projecting human motivations onto them. But it introduces other problems. Bludgeoning your neighbor is clearly worse than poisoning a rat. How can we say so, though, if the entity’s suffering matters most?

Singer’s answer would be the utilitarian one: it’s not about the creature; it’s about the system as a whole. The murder of your neighbor will distribute more pain than the death of a rat. Yet the situations in which we have to choose between animal life and human life are rare, and minimizing suffering for animals is often easy. We can stop herding cows into butchery machines. We can barbecue squares of tofu instead of chicken thighs. Most people, asked to drown a kitten, would feel a pang of moral anguish, which suggests that, at some level, we know suffering matters. The wrinkle is that our antennae for pain are notably unreliable. We also feel that pang regarding objects—for example, robots—that do not suffer at all.

Last summer, a group of Canadian roboticists set an outlandish invention loose on the streets of the United States. They called it hitchBOT, not because it was a heavy-smoking contrarian with a taste for Johnnie Walker Black—the universe is not that generous—but because it was programmed to hitchhike. Clad in rain boots, with a goofy, pixellated smile on its “face” screen, hitchBOT was meant to travel from Salem, Massachusetts, to San Francisco, by means of an outstretched thumb and a supposedly endearing voice-prompt personality. Previous journeys, across Canada and around Europe, had been encouraging: the robot always reached its destination. For two weeks, hitchBOT toured the Northeast, saying inviting things such as “Would you like to have a conversation? . . . I have an interest in the humanities.” Then it disappeared. On August 1st, it was found next to a brick wall in Philadelphia, beat up and decapitated. Its arms had been torn off.

Response was swift. “I can’t lie. I’m still devastated by the death of hitchBOT,” a reporter tweeted. “The destruction of hitchBOT is yet another reminder that our society has a long way to go,” a blogger wrote.
Humans’ capacity to develop warm and fuzzy feelings toward robots is the basis for a blockbuster movie genre that includes “WALL-E” and “A.I.,” and that peaks in the “Star Wars” universe, a multigenerational purgatory of interesting robots and tedious people. But the sentiment applies in functional realms, too. At one point, a roboticist at the Los Alamos National Laboratory built an unlovable, centipede-like robot designed to clear land mines by crawling forward until all its legs were blown off. During a test run, in Arizona, an Army colonel ordered the exercise stopped, because, according to the Washington Post, he found the violence to the robot “inhumane.”

By Singer’s standard, this is nonsense. Robots are not living, and we know for sure that they don’t suffer. Why do even hardened colonels, then, feel shades of ethical responsibility toward such systems? A researcher named Kate Darling, with affiliations at M.I.T., Harvard, and Yale, has recently been trying to understand what is at stake in robo bonds of this kind. In a paper, she names three factors: physicality (the object exists in our space, not onscreen), perceived autonomous movement (the object travels as if with a mind of its own), and social behavior (the robot is programmed to mimic human-type cues). In an experiment that Darling and her colleagues ran, participants were given Pleos—small baby Camarasaurus robots—and were instructed to interact with them. Then they were told to tie up the Pleos and beat them to death. Some refused. Some shielded the Pleos from the blows of others. One woman removed her robot’s battery to “spare it the pain.” In the end, the participants were persuaded to “sacrifice” one whimpering Pleo, sparing the others from their fate.

Darling, trying to account for this behavior, suggests that our aversion to abusing lifelike machines comes from “societal values.” While the rational part of our mind knows that a Pleo is nothing but circuits, gears, and software—a machine that can be switched off, like a coffeemaker—our sympathetic impulses are fooled, and, because they’re fooled, to beat the robot is to train them toward misconduct. (This is the principle of HBO’s popular new show “Westworld,” on which the abuse of advanced robots is emblematic of human perfidy.) “There is concern that mistreating an object that reacts in a lifelike way could impact the general feeling of empathy we experience when interacting with other entities,” Darling writes. The problem with torturing a robot, in other words, has nothing to do with what a robot is, and everything to do with what we fear most in ourselves.

Such concerns, like heartland rivers flowing toward the Mississippi, approach a big historical divide in ethics. On one bank are the people, such as Bentham, who believe that morality is determined by results. (It’s morally O.K. to lie about your cubicle-mate’s demented-looking haircut, because telling the truth will just bring unhappiness to everyone involved.) On the other bank are those who think that morality rests on rights and rules. (A moral person can’t be a squeamish liar, even about haircuts.) Animal ethics has tended to favor the first group: people were urged to consider their actions’ effects on living things. But research like Darling’s makes us wonder whether the way forward rests with the second—an accounting of rights and obligations, rather than a calculation of consequences.

Consider the logic, or illogic, of animal-cruelty laws. New York State forbids inflicting pain on pets but allows fox trapping; prohibits the electrocution of “fur-bearing” animals, such as the muskrat, but not furry animals, such as the rat; and bans decorative tattoos on your dog but not on your cow. As Darling puts it, “Our apparent desire to protect those animals to which we more easily relate indicates that we may care more about our own emotional state than any objective biological criteria.” She looks to Kant, who saw animal ethics as serving people. “If a man has his dog shot . . . he thereby damages the kindly and humane qualities in himself,” he wrote in “Lectures on Ethics.” “A person who already displays such cruelty to animals is also no less hardened toward men.”

This isn’t peership morality. It looks like a kind of passive guardianship, with the humans striving to realize their exalted humanness, and the animals—or the robots—benefitting from the trickle-down effects of that endeavor. Darling suggests that it suits an era when people, animals, and robots are increasingly swirled together. Do we expect our sixteen-month-old children to understand why it’s cruel to pull the tail of the cat but morally acceptable to chase the Roomba? Don’t we want to raise a child without the impulse to terrorize any lifelike thing, regardless of its putative ontology? To the generation now in diapers, carrying on a conversation with an artificial intelligence like Siri is the natural state of the world. We talk to our friends, we talk to our devices, we pet our dogs, we caress our lovers. In the flow of modern life, Kant’s insistence that rights obtain only in human-on-human action seems unhelpfully restrictive.

Here a hopeful ethicist might be moved, like Gaston Modot in “L’Age d’Or,” to kick a small dog in exasperation. Addressing other entities as moral peers seems a nonstarter: it’s unclear where the boundary of peership begins, and efforts to figure it out snag on our biases and misperceptions. But acting as principled guardians confines other creatures to a lower plane—and the idea that humans are the special masters of the universe, charged with administration of who lives and thrives and dies, seems outdated. Benthamites like Peter Singer can get stuck at odd extremes, too. If avoiding suffering is the goal, there is little principled basis to object to the painless extinction of a whole species.

Where to turn? Some years ago, Christine M. Korsgaard, a Harvard philosopher and Kant scholar, started working on a Kantian case for animal rights (one based on principles of individual freedom rather than case-by-case suffering, like Singer’s). Her first obstacle was Kant himself. Kant thought that rights arose from rational will, clearing a space where each person could act as he or she reasoned to be good without the tyranny of others’ thinking. (My property rights keep you from installing a giant hot tub on my front lawn, even if you deem it good. This frees me to use my lawn in whatever non-hot-tub-involving way I deem good.) Animals can’t reason their way to choices, Kant noted, so the freedom of rights would be lost on them. If the nectar-drinking hummingbird were asked to exercise her will to the highest rational standard, she’d keep flying from flower to flower.
Korsgaard argued that hanging everything on rational choice was a red herring, however, because humans, even for Kant, are not solely rational beings. They also act on impulse. The basic motivation for action, she thought, arises instead from an ability to experience stuff as good or bad, which is a trait that animals share. If we, as humans, were to claim rights to a dog’s mind and body in the way we claim rights to our yard, we would be exercising arbitrary power, and arbitrary power is what Kant seeks to avoid. So, by his principles, animals must have freedom—that is, rights—over their bodies.

This view doesn’t require animals to weigh in for abstract qualities such as intelligence, consciousness, or sentience. Strictly, it doesn’t even command us never to eat poached eggs or venison. It extends Enlightenment values—a right to choice in life, to individual freedom over tyranny—to creatures that may be in our custody. Let those chickens range! it says. Give salmon a chance to outsmart the net in the open ocean, instead of living an aquacultural-chattel life. We cannot be sure whether the chickens and the fish will care, but for us, the humans, these standards are key to avoiding tyrannical behavior.

Robots seem to fall beyond such humanism, since they lack bodily freedom. (Your self-driving car can’t decide on its own to take off for the beach.) But leaps in machine learning, by which artificial intelligences are programmed to teach themselves, have started pushing at that premise. Will the robots ever be due rights? John Markoff, a Times technology reporter, raises this question in “Machines of Loving Grace” (Ecco). The matter is charged, in part because robots’ minds, unlike animals’, are made in the human image; they have a potential to challenge and to beat us at our game. Markoff elaborates a common fear that robots will smother the middle class: “Technology will not be a fount of economic growth, but will instead pose a risk to all routinized and skill-based jobs that require the ability to perform diverse kinds of ‘cognitive’ labor.” Don’t just worry about the robots obviating your job on the assembly line, in other words; worry about them surpassing your expertise at the examination table or on the brokerage floor. No wall will guard U.S. jobs from the big encroachment of the coming years. Robots are the fruit of American ingenuity, and they are at large, learning everything we know.

That future urges us to get our moral goals in order now. A robot insurgency is unlikely to take place as a battle of truehearted humans against hordes of evil machines. It will probably happen in a manner already begun: by a symbiosis with cheap, empowering intelligences that we welcome into daily life. Phones today augment our memories; integrated chatbots spare us customer-service on-hold music; apps let us chase Pokémon across the earth. Cyborg experience is here, and it hurts us not by being cruel but by making us take note of limits in ourselves.

The classic problem in the programming of self-driving cars concerns accident avoidance. What should a vehicle do if it must choose between swerving into a crowd of ten people or slamming into a wall, killing its owner? The quandary is not just ethical but commercial (would you buy a car programmed to kill you under certain circumstances?), and it holds a mirror to the harsh decisions we, as humans, make but like to overlook. The horrifying edge of A.I. is not really HAL 9000, the rogue machine that doesn’t wish to be turned off. It is the ethically calculating car, the military drone: robots that do precisely what we want, and mechanize our moral behavior as a result.

A fashionable approach in the academic humanities right now is “posthumanism,” which seeks to avoid the premise—popular during the Enlightenment, but questionable based on present knowledge—that there’s something magical in humanness. A few posthumanists, such as N. Katherine Hayles, have turned to cyborgs, whether mainstream (think wearable tech) or extreme, to challenge old ideas about mind and body being the full package of you. Others, such as Cary Wolfe, have pointed out that prosthesis, adopting what’s external, can be a part of animal life, too. Posthumanism ends its route, inevitably, in a place that much resembles humanism, or at least the humane. As people, we realize our full selves through appropriation; like most animals and robots, we approach maturity by taking on the habits of the world around us, and by wielding tools. The risks of that project are real. Harambe, born within a zoo, inhabited a world of human invention, and he died as a result. That this still haunts us is our species’ finest feature. That we honor ghosts more than the living is our worst. ♦
Artificial Consciousness: How To Give A Robot A Soul

We can’t program a conscious robot with a soul if we can’t agree on what that means.

Dan Robitzski | June 25, 2018

The Terminator was written to frighten us; WALL-E was written to make us cry. Robots can’t do the terrifying or heartbreaking things we see in movies, but still the question lingers: What if they could?

Granted, the technology we have today isn’t anywhere near sophisticated enough to do any of that. But people keep asking. At the heart of those discussions lies the question: can machines become conscious? Could they even develop — or be programmed to contain — a soul? At the very least, could an algorithm contain something resembling a soul?

The answers to these questions depend entirely on how you define these things. So far, we haven’t found satisfactory definitions in the 70 years since artificial intelligence first emerged as an academic pursuit.

Take, for example, an article recently published on BBC, which tried to grapple with the idea of artificial intelligence with a soul. The authors defined what it means to have an immortal soul in a way that steered the conversation almost immediately away from the realm of theology. That is, of course, just fine, since it seems unlikely that an old robed man in the sky reached down to breathe life into Cortana. But it doesn’t answer the central question — could artificial intelligence ever be more than a mindless tool?

That BBC article set out the terms — that whether an AI system acts as though it has a soul will be determined by the beholder. For the religious and spiritual among us, a sufficiently advanced algorithm may seem to present a soul. Those people may treat it as such, since they will view the AI system’s intelligence, emotional expression, behavior, and perhaps even a belief in a god as signs of an internal something that could be defined as a soul. As a result, machines containing some sort of artificial intelligence could simultaneously be seen as an entity or a research tool, depending on who you ask. Like with so many things, much of the debate over what would make a machine conscious comes down to what of ourselves we project onto the algorithms.

“I’m less interested in programming computers than in nurturing little proto-entities,” Nancy Fulda, a computer scientist at Brigham Young University, told Futurism. “It’s the discovery of patterns, the emergence of unique behaviors, that first drew me to computer science. And it’s the reason I’m still here.”

Fulda has trained AI algorithms to understand contextual language and is working to build a robotic theory of mind, a version of the principle in human (and some animal) psychology that lets us recognize others as beings with their own thoughts and intentions. But, you know, for robots.

“As to whether a computer could ever harbor a divinely created soul: I wouldn’t dare to speculate,” added Fulda.

There are two main problems that need resolving. The first is one of semantics: it is very hard to define what it truly means to be conscious or sentient, or what it might mean to have a soul or soul-function, as that BBC article describes it.

The second problem is one of technological advancement. Compared to the technology that would be required to create artificial sentience — whatever it may look like or however we may choose to define it — even our most advanced engineers are still huddled in caves, rubbing sticks together to make a fire and cook some woolly mammoth steaks.

At a panel last year, biologist and engineer Christof Koch squared off with David Chalmers, a cognitive scientist, over what it means to be conscious. The conversation bounced between speculative thought experiments regarding machines and zombies (defined as those who act indistinguishably from people but lack an internal mind). It frequently veered away from things that can be conclusively proven with scientific evidence. Chalmers argued that a machine, one more advanced than we have today, could become conscious, but Koch disagreed, based on the current state of neuroscience and artificial intelligence technology.

Neuroscience literature considers consciousness a narrative constructed by our brains that incorporates our senses, how we perceive the world, and our actions. But even within that definition, neuroscientists struggle to define why we are conscious and how best to define it in terms of neural activity. And for the religious, is this consciousness the same as that which would be granted by having a soul? And this doesn’t even approach the subject of technology.

“AI people are routinely confusing soul with mind or, more specifically, with the capacity to produce complicated patterns of behavior,” Ondřej Beran, a philosopher and ethicist at University of Pardubice, told Futurism.

“The role that the concept of soul plays in our culture is intertwined with contexts in which we say that someone’s soul is noble or depraved,” Beran added — that is, it comes with a value judgment. “[In] my opinion what is needed is not a breakthrough in AI science or engineering, but rather a general conceptual shift. A shift in the sensitivities and the imagination with which people use their language in relating to each other.”

Beran gave the example of works of art generated by artificial intelligence. Often, these works are presented for fun. But when we call something that an algorithm creates “art,” we often fail to consider whether the algorithm has merely generated a sort of image or melody or created something that is meaningful — not just to an audience, but to itself. Of course, human-created art often fails to reach that second group as well. “It is very unclear what it would mean at all that something has significance for an artificial intelligence,” Beran added.
So would a machine achieve sentience when it is able to internally ponder rather than mindlessly churn inputs and outputs? Or would it truly need that internal something before we as a society consider machines to be conscious? Again, the answer is muddled by the way we choose to approach the question and the specific definitions at which we arrive.

“I believe that a soul is not something like a substance,” Vladimir Havlík, a philosopher at the Czech Academy of Sciences who has sought to define AI from an evolutionary perspective, told Futurism. “We can say that it is something like a coherent identity, which is constituted permanently during the flow of time and what represents a man,” he added.

Havlík suggested that rather than worrying about the theological aspect of a soul, we could define a soul as a sort of internal character that stands the test of time. And in that sense, he sees no reason why a machine or artificial intelligence system couldn’t develop a character — it just depends on the algorithm itself. In Havlík’s view, character emerges from consciousness, so the AI systems that develop such a character would need to be based on sufficiently advanced technology that they can make and reflect on decisions in a way that compares past outcomes with future expectations, much like how humans learn about the world.

But the question of whether we can build a souled or conscious machine only matters to those who consider such distinctions important. At its core, artificial intelligence is a tool. Even more sophisticated algorithms that may skirt the line and present as conscious entities are recreations of conscious beings, not a new species of thinking, self-aware creatures.

“My approach to AI is essentially pragmatic,” Peter Vamplew, an AI researcher at Federation University, told Futurism. “To me it doesn’t matter whether an AI system has real intelligence, or real emotions and empathy. All that matters is that it behaves in a manner that makes it beneficial to human society.”

To Vamplew, the question of whether a machine can have a soul or not is only meaningful when you believe in souls as a concept. He does not, so it is not. He feels that machines may someday be able to recreate convincing emotional responses and act as though they are human but sees no reason to introduce theology into the mix.

And he’s not the only one who feels true consciousness is impossible in machines. “I am very critical of the idea of artificial consciousness,” Bernardo Kastrup, a philosopher and AI researcher, told Futurism. “I think it’s nonsense. Artificial intelligence, on the other hand, is the future.”

Kastrup recently wrote an article for Scientific American in which he lays out his argument that consciousness is a fundamental aspect of the natural universe, and that people tap into dissociated fragments of consciousness to become distinct individuals. He clarified that he believes that even a general AI — the name given to the sort of all-encompassing AI that we see in science fiction — may someday come to be, but that even such an AI system could never have private, conscious inner thoughts as humans do.

“Sophia, unfortunately, is ridiculous at best. And, what’s more important, we still relate to her as such,” said Beran, referring to Hanson Robotics’ partially-intelligent robot.

Even more unfortunate, there’s a growing suspicion that our approach to developing advanced artificial intelligence could soon hit a wall. An article published last week in The New York Times cited multiple engineers who are growing increasingly skeptical that our machine learning, even deep learning, technologies will continue to grow as they have in recent years.

I hate to be a stick in the mud. I truly do. But even if we solve the semantic debate over what it means to be conscious, to be sentient, to have a soul, we may forever lack the technology that would bring an algorithm to that point.

But when artificial intelligence first started, no one could have predicted the things it can do today. Sure, people imagined robot helpers à la the Jetsons or advanced transportation à la Epcot, but they didn’t know the tangible steps that would get us there. And today, we don’t know the tangible steps that will get us to machines that are emotionally intelligent, sensitive, thoughtful, and genuinely introspective.

By no means does that render the task impossible — we just don’t know how to get there yet. And the fact that we haven’t settled the debate over where to actually place the finish line makes it all the more difficult.

“We still have a long way to go,” says Fulda. She suggests that the answer won’t be piecing together algorithms, as we often do to solve complex problems with artificial intelligence.

“You can’t solve one piece of humanity at a time,” Fulda says. “It’s a gestalt experience.” For example, she argues that we can’t understand cognition without understanding perception and locomotion. We can’t accurately model speech without knowing how to model empathy and social awareness. Trying to put these pieces together in a machine one at a time, Fulda says, is like recreating the Mona Lisa “by dumping the right amounts of paint into a can.”

Whether or not the masterpiece is out there, waiting to be painted, remains to be determined. But if it is, researchers like Fulda are vying to be the one to brush the strokes. Technology will march onward, so long as we continue to seek answers to questions like these. But as we compose new code that will make machines do things tomorrow that we couldn’t imagine yesterday, we still need to sort out where we want it all to lead.

Will we be da Vinci, painting a self-amused woman who will be admired for centuries, or will we be Uranus, creating gods who will overthrow us? Right now, AI will do exactly what we tell AI to do, for better or worse. But if we move towards algorithms that begin to, at the very least, present as sentient, we must figure out what that means.
