Unlike what is already known to us concerning robots, engineers are extending the beneficial fields of robotics to include assisting people with disabilities. According to a report by Henry Clever and Phillip Grice of Georgia Tech, Henry Evans, a California man who participated in Georgia Tech’s study, used a robot to shave, wipe his face, and scratch his head.
The benefits humans stand to gain from robots are enormous, notwithstanding the speculated fear of job losses. The fact is that no one wants to be enslaved, however humble and gentle we may appear. Robots seem to be the only thing that can bridge the gap between being a boss and having a servant polishing your shoes, washing your dishes, and doing other jobs that ordinarily may not be the wish of the person doing them, were it not for the stipend paid as salary or wages.
Robots generally offer opportunities for people to live safely and comfortably at home, at their place of work, and on the street. Imagine a world without robots and automation: a world without mobile phones’ voicemail and auto-response messages, or automated traffic lights on the roads, among others.
Robots in the near future will be able to help us by carrying out most of our chores both at home and in the office. Examples of such chores include cooking, cleaning, door/gate security, office attendance, and public advertising, among others. Some robots are already doing this today, but they still have to improve to deliver what the future has promised.
Most robots currently operate under human remote control, while a few operate autonomously. Those in autonomous operation have short sequences carefully programmed for repetitive, continuous work, while those with complex sequences and irregular operations have to be controlled remotely.
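The pre-programmed, repetitive kind of operation described above can be sketched in a few lines. This is a hypothetical illustration, not a real robot API; the `Waypoint` type and `run_cycles` function are invented for the example.

```python
# Hypothetical sketch: an industrial-style robot repeating a short,
# pre-programmed sequence of waypoints, as described above.
from dataclasses import dataclass

@dataclass
class Waypoint:
    x: float
    y: float
    z: float

# A small, fixed sequence programmed once and repeated continuously.
PICK_AND_PLACE = [
    Waypoint(0.0, 0.0, 0.5),   # home
    Waypoint(0.3, 0.1, 0.1),   # above part
    Waypoint(0.3, 0.1, 0.0),   # grasp
    Waypoint(0.6, -0.2, 0.0),  # drop-off
]

def run_cycles(sequence, cycles):
    """Replay the same waypoint list for a given number of cycles."""
    visited = []
    for _ in range(cycles):
        for wp in sequence:
            visited.append(wp)  # a real robot would move to wp here
    return visited

trace = run_cycles(PICK_AND_PLACE, cycles=3)
print(len(trace))  # 12 waypoints: 4 per cycle, 3 cycles
```

Irregular, one-off tasks like shaving a particular person's face cannot be captured by a fixed list like this, which is why such robots fall back on remote human control.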
Robots under human control, given their ability to handle complex tasks, can be harnessed to benefit those with disabilities. A good example is the recently deployed PR2, used for shaving among other tasks. From the report, it was gathered that the PR2 can be controlled from a computer or tablet screen by clicking on the display to issue directional commands to the robot. With arms that can be equipped with almost anything within its capacity, the PR2 can shave a face, cut hair, or pick up an item and drop it in another location, to mention a few. Its operation will be very helpful to those with mobility issues or arm problems.
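Issuing directional commands from a screen can be pictured as mapping each on-screen button to a small Cartesian nudge of the gripper. The mapping and the step size below are assumptions for illustration, not the PR2's actual control scheme.

```python
# Hypothetical sketch of directional commands from simple on-screen
# buttons, in the spirit of the screen-based PR2 control described above.
STEP = 0.02  # metres moved per click (assumed value)

# Each on-screen button maps to a small Cartesian displacement.
DIRECTIONS = {
    "up":      (0.0, 0.0, +STEP),
    "down":    (0.0, 0.0, -STEP),
    "left":    (0.0, +STEP, 0.0),
    "right":   (0.0, -STEP, 0.0),
    "forward": (+STEP, 0.0, 0.0),
    "back":    (-STEP, 0.0, 0.0),
}

def apply_clicks(position, clicks):
    """Accumulate a series of single-button clicks into a target position."""
    x, y, z = position
    for name in clicks:
        dx, dy, dz = DIRECTIONS[name]
        x, y, z = x + dx, y + dy, z + dz
    return (x, y, z)

# Two "up" clicks and one "left" click from a starting gripper pose.
target = apply_clicks((0.5, 0.0, 0.8), ["up", "up", "left"])
print(target)
```

Small, bounded steps like this are one reason click-based control stays safe: each click can only move the arm a little.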
Ideally, the people who need things done would be the ones in the loop telling the robot what to do, but that can be particularly challenging for those with disabilities that limit their mobility. For example, someone who cannot move his or her arms or hands may find it difficult to control such a robot. To address this, a group of roboticists at Georgia Tech led by Charlie Kemp is trying to figure out how to make it possible by developing new interfaces that enable the control of complex robots through a single-button mouse and nothing else.
One of the users involved in the Georgia Tech research is Henry Evans, who has been working with the PR2 and other robotic systems for many years through the Robots for Humanity project. Henry suffered a brain-stem stroke several years ago that left him almost entirely paralyzed and unable to speak. He can move his eyes and click a button with his thumb, which allows him to use an eye-tracking mouse. With just this simple input device, he is able to control the PR2, a two-armed mobile manipulator, to do some things for himself, including scratching itches.
The PR2 is a very complicated robot with an intimidating 20+ degrees of freedom, and even for people with two hands on a game controller and plenty of experience, it is not easy to remotely guide the robot through manipulation tasks. Users face even more difficulty when restricted to controlling a very 3D robot through a very 2D computer screen. The key is a carefully designed low-level web interface that relies on multiple interface modes and augmented reality for intuitive control of even complex robots.
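One concrete piece of the 2D-screen/3D-robot problem is that a single pixel click only determines a viewing direction, not a depth. A minimal sketch of that geometry, using an idealised pinhole camera model with assumed intrinsics (not the PR2's actual calibration):

```python
# Turn a 2D pixel click into a 3D viewing ray in the camera frame.
# Focal lengths and principal point below are assumed example values.
import math

FX = FY = 525.0        # assumed focal lengths in pixels
CX, CY = 320.0, 240.0  # assumed principal point (image centre)

def click_to_ray(u, v):
    """Convert a pixel coordinate (u, v) into a unit direction vector.
    Depth along this ray is still unknown, which is why an interface
    needs extra cues to disambiguate where in 3D the user means."""
    x = (u - CX) / FX
    y = (v - CY) / FY
    z = 1.0
    norm = math.sqrt(x * x + y * y + z * z)
    return (x / norm, y / norm, z / norm)

# A click exactly at the image centre looks straight down the optical axis.
print(click_to_ray(320.0, 240.0))  # (0.0, 0.0, 1.0)
```

The missing depth is exactly the ambiguity that features like the 3D peek, described later, help the user resolve.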
The approach is to provide an augmented reality (AR) interface running in a standard web browser with only low-level robot autonomy. Many commercially available assistive input devices, such as head trackers, eye-gaze trackers, or voice controls, can provide single-button mouse-type input to a web browser. The standard web browser enables users with profound motor deficits to control the robot with the same methods they already use to access the internet. The AR interface uses state-of-the-art visualization to present the robot’s information, and the options for controlling it, in a way that people with profound motor deficits have found easy to use.
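In a browser-based setup like this, each click ultimately has to be serialized into a message the robot-side software can act on. The message schema below is invented purely for illustration; the actual Georgia Tech interface's wire format is not described in this article.

```python
# Hypothetical sketch: packaging a single mouse click from a web page
# into a JSON command message for the robot. The schema is an assumption.
import json

def make_command(mode, u, v):
    """Build a command message from the active interface mode and the
    pixel (u, v) that was clicked."""
    return json.dumps({
        "type": "click_command",
        "mode": mode,            # e.g. "drive", "left_arm", "gripper"
        "pixel": {"u": u, "v": v},
    })

msg = make_command("left_arm", 412, 230)
print(msg)
```

Because the input is reduced to "one click in one mode", any assistive device that can produce a single button press can drive the whole pipeline.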
With the PR2’s autonomy limited to low-level operations, such as tactile-sensor-driven grasping and moving an arm via inverse kinematics to reach end-effector poses, the robot performs consistently across diverse situations, allowing the user to attempt to use it in diverse and novel ways.
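Inverse kinematics means computing the joint angles that place the end effector at a desired pose. The PR2's 7-DoF arms need numerical solvers, but the idea can be shown self-contained with closed-form IK for a 2-link planar arm; the link lengths here are assumed example values.

```python
# Closed-form inverse kinematics for a 2-link planar arm (illustration
# only; far simpler than the PR2's arms). Link lengths are assumptions.
import math

L1, L2 = 0.4, 0.3  # assumed link lengths in metres

def ik_2link(x, y):
    """Return joint angles (theta1, theta2) placing the end effector
    at (x, y); elbow-down solution. Raises ValueError if unreachable."""
    d2 = x * x + y * y
    c2 = (d2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(L2 * math.sin(theta2),
                                           L1 + L2 * math.cos(theta2))
    return theta1, theta2

def fk_2link(theta1, theta2):
    """Forward kinematics, used to check the IK solution."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

t1, t2 = ik_2link(0.5, 0.2)
print(fk_2link(t1, t2))  # recovers approximately (0.5, 0.2)
```

Keeping IK as a low-level, always-available capability is what lets the user think in terms of "put the gripper there" rather than individual joint angles.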
The browser window below shows the view through the PR2’s cameras of the environment around the robot, with superimposed augmented-reality elements. Clicking the yellow disc allows users to control the position of the robot’s arm.
The interface is based around a first-person perspective, with a video feed streaming from the robot’s head camera. Augmented reality markers show 3D space controls, provide visual estimates of how the robot will move when commands are executed, and also provide feedback from nonvisual sensors, like tactile sensors and obstacle detection. One of the biggest challenges is adequately representing the robot’s 3D workspace on a 2D screen, but a “3D peek” feature overlays a Kinect-based low-resolution 3D model of the environment around the robot’s gripper and then simulates a camera rotation. To keep the interface accessible to users with only a mouse and single-click control, there are many different operation modes that can be selected.
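The multi-mode, single-click idea can be sketched as a tiny state machine: with only one button available, one click target cycles through the operation modes, and every other click is interpreted in the active mode. The mode names below are assumptions for illustration.

```python
# Hypothetical sketch of a single-button, multi-mode interface.
# Mode names are invented; the real interface's modes may differ.
MODES = ["look", "drive", "left_arm", "right_arm", "gripper"]

class SingleButtonInterface:
    """Tracks which operation mode is active; a click on the mode
    switcher advances to the next mode, wrapping around."""

    def __init__(self):
        self.index = 0

    @property
    def mode(self):
        return MODES[self.index]

    def next_mode(self):
        self.index = (self.index + 1) % len(MODES)
        return self.mode

ui = SingleButtonInterface()
ui.next_mode()  # advances to "drive"
ui.next_mode()  # advances to "left_arm"
print(ui.mode)  # "left_arm"
```

Splitting the robot's many degrees of freedom across a handful of modes is what makes 20+ DoF controllable with a single button.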
Originally posted 2019-04-19 10:18:17.