
The World Economic Forum Wants to Help Firms Avoid the Pitfalls of Artificial Intelligence

The World Economic Forum, best known for its glitzy annual conference in Davos, Switzerland, wants to help firms avoid the potential pitfalls that come with deploying artificial intelligence.

Yes, A.I. promises to radically change how enterprises function by opening the door to innovations like driverless vehicles and robots that care for the elderly. But it could also exacerbate inequalities in society and lead to widespread job loss.

The WEF’s solution: a set of guidelines for corporate boards that spells out how firms should apply A.I. responsibly.

“We found a lot of boards didn’t really understand A.I., and they were asked to make decisions about implementing A.I. in companies without any tools to do so,” Kay Firth-Butterfield, the WEF’s head of A.I. and machine learning, told Fortune.


The WEF wants its so-called A.I. toolkit to answer questions like how firms can best implement A.I. in their businesses. The tip sheet will also highlight the importance of creating A.I. ethics councils to monitor a company’s use of A.I., as well as the public relations black eye and customer backlash firms face if they screw up.

Firth-Butterfield hopes the guidelines will help board members understand a whole set of questions they need to be able to ask and get answers to.


She and her team announced plans for the A.I. guidelines in January during the Davos summit. Since then, they have gathered feedback from firms and A.I. experts to finish the job.

The WEF plans to release a public version of its A.I. guidelines at next year’s Davos conference. The next step will be to start work on a similar A.I. tip sheet for company executives.

“The C-suite said, what about us?” Firth-Butterfield joked.

Previously, the WEF had made a big push to explain to firms the nuances of cloud computing, another hot technology that gained traction a few years ago. A.I., however, “is slightly more interesting,” Firth-Butterfield said.

One example of the technology’s potential downside, she said, involves hiring software that is supposed to speed up the recruitment process. If trained using a firm’s previous hiring data, it may exacerbate gender or racial bias by only highlighting white males as the best candidates.
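The mechanism is easy to demonstrate. Below is a minimal, hypothetical Python sketch using synthetic data and scikit-learn (the feature names and numbers are invented, and this is not any real hiring product): a model trained on past decisions that favored one group learns to score an equally skilled candidate from the other group lower.

    # Illustrative only: synthetic "historical" hiring data with a built-in bias.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000

    # Skill is what should matter, but past hiring decisions also favored group 0.
    skill = rng.normal(size=n)
    group = rng.integers(0, 2, size=n)          # 0 = historically favored, 1 = not
    hired = (skill + 1.5 * (group == 0) + rng.normal(scale=0.5, size=n)) > 1.0

    X = np.column_stack([skill, group])
    model = LogisticRegression().fit(X, hired)

    # Score two equally skilled candidates who differ only in the demographic field.
    candidates = np.array([[1.0, 0], [1.0, 1]])
    print(model.predict_proba(candidates)[:, 1])  # the historically favored group scores higher

Because the demographic column helped "explain" past hiring decisions, the model treats it as a legitimate signal, which is exactly the kind of effect an ethics review is meant to catch before deployment.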

“If you don’t think about bias issues, those could have negative effects on your business,” Firth-Butterfield said.

Hiring A.I. talent in Canada may be easier. Tech firms are having an easier time hiring highly skilled workers in Canada than in the U.S. because of Canada’s less restrictive immigration policies, Time Magazine reported. One A.I. startup, Finn.AI, for example, “considered locating their new company in Silicon Valley, but ultimately chose Vancouver because they knew they would qualify for a start-up visa there, and that they would be able to quickly hire AI experts from around the world.”

Autism and A.I. Firms like Credit Suisse, Dell Technologies, and Microsoft have established “neurodiversity” programs that involve hiring people with autism for A.I.-related jobs, under the belief that “Autistic workers are often hyper-focused, highly analytical thinkers with an exceptional proficiency for technology,” The Wall Street Journal reported. The newspaper said that many of the autistic workers “are capable of working long hours on repetitive AI tasks, such as labeling photos and videos for computer-vision systems, without losing interest.”

A.I. as a creator. The Financial Times reported on a “landmark challenge to the international patents regime” involving an A.I. system that created two designs, one of a “food container capable of changing shape” and the other of “a flashlight system.” The article explores the legal community’s confusion over who should be recognized as the devices’ creator, which the patent application attributes to “Dabus,” the computer system that developed the designs.

Sharing faces. Law enforcement agencies in California have “the capability to run facial recognition searches on each other’s mug shot databases,” tech publication OneZero said. The article explains that tech firm DataWorks Plus and its image-sharing service “puts the firm in a powerful position in the nation’s largest state.”

AVOIDING FOOL’S GOLD


Patrick Riley, a principal engineer for Google’s accelerated science team, wrote an article in Nature about three pitfalls data scientists should avoid in machine learning. As Riley explains: “machine-learning tools can also turn up fool’s gold — false positives, blind alleys, and mistakes. Many of the algorithms are so complicated that it is impossible to inspect all the parameters or to reason about exactly how the inputs have been manipulated.”
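One common way such false positives arise is by letting test information leak into model building. The Python sketch below is a generic illustration of that pitfall, not one of Riley’s own examples: selecting features on the full dataset before cross-validation makes a classifier look far better than chance on pure noise, while doing the selection inside each training fold reports honest, chance-level accuracy.

    # Illustrative only: "fool's gold" from leaking test data into feature selection.
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5000))      # pure noise features
    y = rng.integers(0, 2, size=100)      # random labels: true accuracy is ~50%

    # Wrong: feature selection sees the whole dataset, including future test folds.
    X_leaky = SelectKBest(f_classif, k=20).fit_transform(X, y)
    leaky = cross_val_score(LogisticRegression(), X_leaky, y, cv=5).mean()

    # Right: selection happens inside each training fold only.
    pipe = make_pipeline(SelectKBest(f_classif, k=20), LogisticRegression())
    honest = cross_val_score(pipe, X, y, cv=5).mean()

    print(f"leaky CV accuracy:  {leaky:.2f}")   # typically well above 0.5
    print(f"honest CV accuracy: {honest:.2f}")  # hovers around chance

The point is not the specific numbers but the pattern: the model has learned nothing, yet a carelessly constructed evaluation reports a discovery.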

EYE ON A.I. HIRES

JPMorgan Chase hired Subhashini Tripuraneni as its executive director of machine learning. Tripuraneni was previously the head of artificial intelligence for 7-Eleven.

Government consulting firm Simple Technology Solutions hired Subhasis Datta to be its chief data scientist and practice lead for data science, machine learning, and artificial intelligence. Datta was previously the chief data scientist for federal consulting firm Analytica.

EYE ON A.I. RESEARCH

Adversarial healthcare A.I. Scientists at Ghent University in Belgium and Ghent University Global Campus in South Korea published a paper about using deep learning to generate so-called adversarial examples that trick systems for recognizing conditions like breast cancer and eye disorders in medical imaging data. In one example, the scientists showed how their A.I. techniques subtly manipulated an image showing breast cancer so that a medical-imaging system classified it as “healthy.”
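The paper’s exact technique is not reproduced here, but the general idea behind such attacks can be shown with a fast-gradient-sign-method (FGSM) style sketch against a toy linear classifier. Everything below is a hypothetical stand-in (random weights, a synthetic “image”), not the authors’ model or any real medical-imaging system: a pixel-level perturbation too small to matter visually flips the score from “diseased” to “healthy.”

    # Illustrative only: an FGSM-style perturbation against a toy linear classifier.
    import numpy as np

    rng = np.random.default_rng(1)
    d = 64 * 64                               # a toy 64x64 "image", flattened

    w = rng.normal(size=d)                    # stand-in weights of a trained linear model
    x = 0.03 * np.sign(w) + rng.normal(scale=0.01, size=d)   # an input the model calls "diseased"

    def prob_diseased(img):
        return 1.0 / (1.0 + np.exp(-(w @ img)))

    # For a linear model the gradient of the "diseased" logit w.r.t. the input is w,
    # so the attack nudges every pixel a small step against that gradient.
    eps = 0.05
    x_adv = x - eps * np.sign(w)

    print(f"original score:    {prob_diseased(x):.3f}")      # close to 1.0 ("diseased")
    print(f"adversarial score: {prob_diseased(x_adv):.3f}")  # close to 0.0 ("healthy")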

A review of global A.I. ethics. Scientists from consulting firm Dovetail Labs and Princeton University published a paper investigating the ethics of A.I. as it pertains to different countries and regions worldwide. One of the paper’s findings touched on how “people from low- and middle-income countries are likely to be radically underrepresented in the datasets central to developing AI systems.”

FORTUNE ON A.I.

Your Job Will Be Automated—Here’s How to Figure Out When A.I. Could Take Over – By Gwen Moran

DeepMind’s New A.I. Predicts Kidney Injuries Hours in Advance – By Jeremy Kahn

How Intel Hopes to Catch Rivals with Its New Chips – By Aaron Pressman

BRAIN FOOD

A.I. in healthcare. Smithsonian magazine explored the current state of artificial intelligence in healthcare. Despite high hopes that A.I. will improve healthcare, there are still potential problems worth considering. For instance, the Smithsonian notes that “if A.I. services make cost-saving recommendations, human physicians and health care organizations may hesitate to take A.I. advice if they make less money as a result.” One of the most promising but less exciting ways A.I. could help doctors, the article also explains, is by automatically entering patient data into electronic health records, a burdensome task for physicians and “the main factor behind physical and emotional burnout.”
