Robots have begun to play an increasing role in life-and-death scenarios, from rescue missions to complex surgical procedures. But the question of trust has largely been a one-way street.
Should we trust robots with our lives? A Tufts University lab is working to turn that notion on its head, asking the perhaps equally important inverse: should robots trust us?

The walls of the lab are white and bare, for reasons, the researchers explain, of optimizing robotic vision. It all feels a touch makeshift, trading solid walls for shower curtains strung from the ceiling with wire. The demo is equally minimalist in its presentation. One of the lab's robots has been told to walk forward, but it stands still: its directive to obey its operator has been overridden by the knowledge that it cannot proceed.
Its computer vision has spotted an obstacle in the way; it knows enough not to walk into walls. The robot has been equipped with the vision needed to detect a wall and the sense to avoid it. The operator explains that the wall is not solid, and the robot walks forward with newfound confidence, feet clomping and gears buzzing as it makes short work of the hollow obstacle. This very simplistic idea of trust serves as another source of information for the robot; here it is not a thing that can be gained or lost, but a setting built into the machine.
Trusting a human counterpart in this case can help the robot adapt to real-world settings for which its programmers may not have accounted. Here, the operator is a trusted source, so Dempster, named, fittingly, along with its counterpart Shafer, for a theory of reasoning with uncertainty, acts on that information, walking straight through the cardboard wall.
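The robots' namesake, Dempster-Shafer theory, offers a way to combine evidence from multiple sources, each assigning belief mass to sets of hypotheses. A minimal sketch of Dempster's rule of combination applied to the wall demo (the mass values and hypothesis names are invented for illustration, not taken from the lab's software):

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    Each mass function maps a frozenset of hypotheses to a belief mass
    summing to 1. Mass landing on conflicting (disjoint) hypothesis sets
    is discarded and the remainder renormalized.
    """
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    if conflict >= 1.0:
        raise ValueError("total conflict: the sources are irreconcilable")
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

# The robot's vision strongly suggests a solid wall; the trusted
# operator says the wall is hollow. (Masses are made up.)
vision = {frozenset({"solid"}): 0.8, frozenset({"solid", "hollow"}): 0.2}
operator = {frozenset({"hollow"}): 0.9, frozenset({"solid", "hollow"}): 0.1}

belief = combine(vision, operator)
# After combination, "hollow" carries the largest belief mass,
# so walking forward becomes the reasonable action.
```

The operator's testimony does not overwrite the camera's evidence; the two are fused, which is what lets a trusted human fill in gaps the programmers never anticipated.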
Trust is an important aspect of the burgeoning world of human-robot relationships, and the Human Robot Interaction Laboratory trades in these questions. As with people, part of that adaptation comes through knowing whom to trust. Scheutz offers a pair of simple examples to illustrate the point. In one, a domestic robot goes shopping for its owner. When a stranger tells it to get into their car, the robot will not simply comply, because the person is not a trusted source.
People can be malicious, whether for the sake of self-preservation or, in the case of, say, Tay, the Twitter chatbot Microsoft launched last year, entertainment. It took all of 16 hours for the company to abandon that experiment after it devolved into a torrent of sex talk and hate speech. A key factor of trust is knowing when to be guarded. MIT, famously, has been running an ongoing open-source field study called the Moral Machine, aimed at asking some of the big moral questions autonomous vehicles will ultimately be tasked with answering in a matter of milliseconds.
Those questions, modern-day spin-offs of the trolley problem, are a good distillation of some of the philosophical questions at play. The idea of trust is one layer of many in building that relationship.
In the case of Dempster, trust is still an idea programmed directly into the code of the robot, rather than something gained and lost over time. For Dempster, trust is coded, not earned, and until programmed otherwise, it will remain a glutton for punishment.
Among the myriad projects the Tufts team is working on is natural language interaction: spoken and visual commands that can teach a robot to execute a task without entering a complex line of code.
An operator asks one of the robots to do a squat. It does not yet know how, so the operator walks it through the steps, and the information is stored in its memory bank. Now Dempster can do a squat. You want to be able to tell it, and maybe show it, and you want it to know how to do it. The Tufts lab takes it a step further by networking the robots, so a skill taught to one can be shared by all: a sort of robot Wikipedia. Of course, such a massively connected robotics network once again raises questions around trust, after so many decades of tales of robocalypse.
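One way to picture that shared skill memory, purely as an illustration (the `Robot` class, its `teach` and `perform` methods, and the registry here are hypothetical, not the lab's actual system):

```python
class Robot:
    """Toy model of a robot that learns a skill from demonstrated
    steps and shares it through a common registry: the 'robot
    Wikipedia' of networked knowledge."""

    def __init__(self, name, registry):
        self.name = name
        self.skills = {}          # local memory bank: skill -> steps
        self.registry = registry  # shared, networked skill store

    def teach(self, skill, steps):
        """An operator walks the robot through a new skill, step by
        step; the skill is also published to the shared registry."""
        self.skills[skill] = list(steps)
        self.registry[skill] = list(steps)

    def perform(self, skill):
        """Run a skill from local memory, falling back to the network."""
        steps = self.skills.get(skill) or self.registry.get(skill)
        if steps is None:
            raise KeyError(f"{self.name} does not know how to {skill}")
        return [f"{self.name}: {step}" for step in steps]

registry = {}
dempster = Robot("Dempster", registry)
shafer = Robot("Shafer", registry)

# The operator teaches Dempster to squat, one step at a time.
dempster.teach("squat", ["bend knees", "lower torso", "hold", "stand up"])

# Shafer was never taught directly, but pulls the skill from the network.
actions = shafer.perform("squat")
```

The fallback in `perform` is the whole point, and also the whole risk: any robot on the network inherits whatever any other robot was taught, which is exactly why the lab's trust questions resurface here.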
All the more reason to work out these notions of trust and morality at this early stage of the game.
Asimov's Three Laws of Robotics hover over all of this:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.