
Special home robot for people with disabilities hits the market

Beyond the roles robots already play in our lives, engineers are extending their benefits to assisting people with disabilities. According to a report, Henry Evans, a California man who participated in a Georgia Tech study, used a PR2 robot to shave, wipe his face, and scratch his head (photo: Henry Clever and Phillip Grice/Georgia Tech).

Henry uses PR2 to shave

The benefits humans stand to gain from robots are enormous, notwithstanding the speculated fear of job losses. The fact is that no one wants to be enslaved, irrespective of how humble and gentle we may appear. Robots seem to be the only thing that can bridge the gap between being a boss and having someone polish your shoes, wash your dishes, and do other jobs that would ordinarily not be the wish of the person doing them if not for the stipend paid as salary or wages.


Robots generally offer people opportunities to live safely and comfortably at home, at work, and on the street. Imagine a world without modern technologies and artificial intelligence: a world without voicemail and automated phone responses, automated traffic lights, or automatic doors.

In the near future, robots will be able to help us by carrying out most of our chores at home and in the office: cooking, cleaning, door and gate security, office attendance, and public advertising, among others. Some robots already do such work, but the future promises that they will do it autonomously.

Most robots today are operated through human remote-control mechanisms, while few operate autonomously. The autonomous ones run small work sequences carefully programmed for repetitive, continuous operation, whereas those with complex work sequences and irregular operations have to be controlled remotely.


Human-controlled robots, with their ability to handle complex tasks, can be harnessed to benefit those with disabilities. A good example is the PR2, recently used for shaving among other tasks. From the report, the PR2 can be controlled from a computer or tablet screen: scrolling and clicking on the surface issues directional commands to the robot. With arms that can be equipped with almost anything within their capacity, the PR2 can shave, cut hair, and pick up an item and drop it in another location, to mention a few. Its operation will be very helpful to those with mobility issues or limited use of their arms.

Ideally, the people who need things done would be the ones in the loop telling the robot what to do, but that can be particularly challenging for those whose disabilities limit their mobility. For example, someone who cannot move his or her arms or hands may find it difficult to control such a robot. To address this, a group of roboticists at Georgia Tech led by Charlie Kemp is developing new interfaces that enable the control of complex robots through a single-button mouse and nothing else.
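As a rough sketch of how one button can be enough, consider a scanning-style controller in Python (the names and modes here are hypothetical, not the actual Georgia Tech interface): a click on a mode widget cycles through control modes, while a click anywhere else issues a command in the active mode.

```python
import itertools

# Hypothetical command set; the real interface exposes many more modes.
MODES = ["look", "drive", "left arm", "right arm", "gripper"]

class SingleButtonController:
    """Sketch of control with one pointer and one button: the user moves
    the cursor (e.g. with an eye tracker) and a single click either
    cycles the active mode or issues a command in that mode."""

    def __init__(self):
        self._modes = itertools.cycle(MODES)
        self.mode = next(self._modes)

    def click(self, x, y, on_mode_button):
        if on_mode_button:
            # A click on the mode widget advances to the next mode.
            self.mode = next(self._modes)
            return f"mode -> {self.mode}"
        # Otherwise the click is a command target in the current mode.
        return f"{self.mode}: move toward screen point ({x}, {y})"

ctrl = SingleButtonController()
print(ctrl.click(0, 0, on_mode_button=True))      # mode -> drive
print(ctrl.click(320, 240, on_mode_button=False)) # drive: move toward ...
```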

One of the users involved in the Georgia Tech research is Henry Evans, who has been working with a PR2 and other robotic systems for many years through the Robots for Humanity project. Henry suffered a brain-stem stroke several years ago that left him almost entirely paralyzed and unable to speak. He can move his eyes and click a button with his thumb, which allows him to use an eye-tracking mouse. With just this simple input device, he is able to control the PR2, a two-armed mobile manipulator, to do some things for himself, including scratching itches.

The PR2 is a very complicated robot, with an intimidating 20+ degrees of freedom; even for people with two hands on a game controller and plenty of experience, it is not easy to remotely steer the robot through manipulation tasks. Users face even more difficulty when restricted to controlling a very 3D robot through a very 2D computer screen. The key is a carefully designed web interface that relies on multiple interface modes and augmented reality for intuitive control of even complex robots.


The approach is to provide an augmented-reality (AR) interface running in a standard web browser, paired with only low-level robot autonomy. Many commercially available assistive input devices, such as head trackers, eye-gaze trackers, and voice controls, can provide single-button mouse-type input to a web browser, so users with profound motor deficits can control the robot with the same methods they already use to access the internet. The AR interface uses state-of-the-art visualization to present the robot’s sensor information and control options in a way that such users have found easy to use.
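As a minimal sketch of the browser-to-robot plumbing, assuming a hypothetical JSON click event posted by the page (the real interface is built on dedicated web-robotics tooling, not this stub), a standard-library Python server could receive and dispatch clicks like this:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class ClickHandler(BaseHTTPRequestHandler):
    """Receives single-click events posted by the browser interface and
    maps them onto robot commands (stubbed out here with a print)."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length))
        # The event carries the pixel the user clicked and the active
        # mode, e.g. {"mode": "right arm", "u": 310, "v": 225}.
        print(f"dispatch: {event['mode']} -> pixel ({event['u']}, {event['v']})")
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), ClickHandler).serve_forever()
```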

With the PR2’s autonomy limited to low-level operations, such as tactile-sensor-driven grasping and moving an arm via inverse kinematics to achieve end-effector poses, the robot performs consistently across diverse situations, allowing the user to attempt to use it in diverse and novel ways.
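To make the inverse-kinematics step concrete, here is the textbook analytic solution for a simplified planar two-link arm; the link lengths are illustrative, not PR2 parameters, and the real robot solves IK over many more joints:

```python
import math

def two_link_ik(x, y, l1=0.4, l2=0.4):
    """Analytic inverse kinematics for a planar two-link arm.

    Given a target end-effector position (x, y) and link lengths
    l1, l2 (metres, illustrative values), return the joint angles
    (theta1, theta2) in radians, or None if the target is unreachable.
    """
    d2 = x * x + y * y
    d = math.sqrt(d2)
    # The target must lie within the annulus the arm can reach.
    if not (abs(l1 - l2) <= d <= l1 + l2):
        return None
    # The law of cosines gives the elbow angle.
    cos_t2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    theta2 = math.acos(max(-1.0, min(1.0, cos_t2)))  # elbow-down solution
    # Shoulder angle: direction to target minus the offset the elbow adds.
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

# Example: move the end effector to a point the user clicked.
print(two_link_ik(0.5, 0.3))
```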

The browser window below shows the view through the PR2’s cameras of the environment around the robot, with superimposed augmented-reality elements. Clicking the yellow disc allows users to control the position of the arm.

Screen display motion interface for PR2

The interface is based around a first-person perspective, with a video feed streaming from the robot’s head camera. Augmented-reality markers show 3D space controls, provide visual estimates of how the robot will move when commands are executed, and give feedback from non-visual sensors, such as tactile sensors and obstacle detection. One of the biggest challenges is adequately representing the robot’s 3D workspace on a 2D screen; a 3D-peek feature addresses this by overlaying a Kinect-based low-resolution 3D model of the environment around the robot’s gripper and then simulating a camera rotation. To keep the interface accessible to users with only a mouse and single-click control, there are many different operation modes that can be selected.
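A minimal sketch of the geometry behind the 3D peek, assuming a point cloud already in the camera frame and made-up pinhole intrinsics (the real feature uses the Kinect’s own calibration): the cloud is rotated about a pivot near the gripper, which is equivalent to orbiting a virtual camera, then re-projected to 2D.

```python
import numpy as np

def peek_projection(points, pivot, yaw_deg, f=525.0, cx=320.0, cy=240.0):
    """Simulate a camera rotation about a pivot (e.g. the gripper) and
    project the rotated point cloud back onto a 2D image plane.

    points : (N, 3) array of XYZ points in the camera frame (Z forward).
    pivot  : (3,) point to rotate the view around.
    yaw_deg: virtual camera rotation angle in degrees.
    f, cx, cy: illustrative pinhole intrinsics (focal length, principal point).
    """
    a = np.radians(yaw_deg)
    # Rotation about the vertical (Y) axis of the camera frame.
    r = np.array([[np.cos(a), 0.0, np.sin(a)],
                  [0.0,       1.0, 0.0],
                  [-np.sin(a), 0.0, np.cos(a)]])
    # Rotating the scene one way is equivalent to orbiting the camera
    # the other way around the pivot.
    p = (points - pivot) @ r.T + pivot
    # Keep only points in front of the virtual camera.
    p = p[p[:, 2] > 0.1]
    u = f * p[:, 0] / p[:, 2] + cx
    v = f * p[:, 1] / p[:, 2] + cy
    return np.stack([u, v], axis=1)

# Example: a toy cloud around a gripper at (0, 0, 1) m, peeked at 30 degrees.
cloud = np.random.default_rng(0).normal([0.0, 0.0, 1.0], 0.05, size=(500, 3))
print(peek_projection(cloud, pivot=np.array([0.0, 0.0, 1.0]), yaw_deg=30.0)[:3])
```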

Originally posted 2019-04-19 10:18:17.

