Powerful Impacts Of Artificial Intelligence (AI) & AI Limitations
Artificial intelligence has done us far more good than any assumed harm. There is no doubt that the majority of today's technologies rely centrally on artificial intelligence.
From industrial production lines to construction, assembly and warehousing, and marketing and advertising, the vital areas of modern technology are predominantly automated using artificial intelligence.
The question still remains: "Can machines think and act like humans?" An answer can be found in the "Dartmouth proposal", printed after the Dartmouth conference of 1956, which asserted that every aspect of learning, or any other feature of intelligence, can be so precisely described that a machine can be made to simulate it.
In other words, it is quite possible that every human action can be simulated by a properly programmed machine.
Based on Newell and Simon's physical symbol system hypothesis, "a physical symbol system has the necessary and sufficient means for general intelligent action." Newell and Simon argue that intelligence consists of formal operations on symbols.
Hubert Dreyfus argued that, on the contrary, human expertise depends on unconscious instinct rather than conscious symbol manipulation and on having a “feel” for the situation rather than explicit symbolic knowledge.
The overall research goal of artificial intelligence is to create technology that allows computers and machines to function in an intelligent manner.
The general problem of simulating or creating intelligence has been broken into sub-problems.
Top 10 Impacts Of Artificial Intelligence (AI) & AI Limitations
These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention:
REASONING AND PROBLEM SOLVING

Early researchers developed algorithms that imitated the step-by-step reasoning humans use when they solve puzzles or make logical deductions.
By the 1980s and 1990s, artificial intelligence research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.
However, these algorithms were not sufficient for handling complex, multi-step problems. They usually run into what scientists call a "combinatorial explosion": the AI becomes exponentially slower and bulkier as the problems grow larger.
Thus, the step-by-step deduction of the early algorithms could not meet the expectations of modern science; modern researchers therefore try to imitate fast, intuitive judgment in the algorithms used in AI.
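The scale of this combinatorial explosion can be sketched with a few lines of arithmetic. The toy calculation below is not from the article; the branching factor of 10 is an arbitrary assumption. It counts the states a naive step-by-step search must consider as the search deepens:

```python
# Illustrative sketch: how the number of states a naive exhaustive
# search must consider grows with problem depth. A "branching factor"
# of b means every state has b possible next steps.

def search_tree_size(branching_factor: int, depth: int) -> int:
    """Total nodes in a full search tree: 1 + b + b^2 + ... + b^d."""
    return sum(branching_factor ** d for d in range(depth + 1))

if __name__ == "__main__":
    for depth in (5, 10, 20):
        print(f"depth {depth}: {search_tree_size(10, depth):,} states")
```

With a branching factor of 10, a depth of just 10 already yields over eleven billion states, which is why brute-force deduction stalls on larger problems.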
KNOWLEDGE REPRESENTATION

Following classical artificial intelligence research, some experts tried to gather the explicit knowledge possessed by specialists in a narrow domain.
In addition, some projects attempt to gather the commonsense knowledge known to the average person into a database containing extensive knowledge about the world.
Commonsense knowledge covers properties, categories, objects, and the relations between objects; situations, events, states, and time; causes and effects; and knowledge about knowledge (that is, what we know about what other people know), among other things.
A representation of ‘what exists’ is an ontology; the set of objects, relations, concepts, and properties formally described so that software agents can interpret them.
The semantics of these are captured as description logic concepts, roles, and individuals, and typically implemented as classes, properties, and individuals in the Web Ontology Language.
Upper ontologies are the most general ontologies; they attempt to provide a foundation for all other knowledge by acting as mediators between domain ontologies, which cover specific knowledge about a particular domain.
Such ontologies can be used in content-based indexing and retrieval, scene interpretation, clinical decision support, knowledge discovery, and other applications.
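As a rough illustration of the idea, an ontology's "is-a" backbone can be sketched with plain Python dictionaries. The concepts below are invented for the example; a real system would express them as OWL classes rather than dicts:

```python
# Illustrative sketch of an ontology as a set of "is-a" relations
# between concepts, similar in spirit to the class hierarchies that
# the Web Ontology Language (OWL) expresses formally.

IS_A = {
    "dog": "mammal",
    "cat": "mammal",
    "mammal": "animal",
    "animal": "thing",   # "thing" plays the role of the upper-ontology root
}

def ancestors(concept: str) -> list[str]:
    """Walk the is-a chain up to the most general concept."""
    chain = []
    while concept in IS_A:
        concept = IS_A[concept]
        chain.append(concept)
    return chain

print(ancestors("dog"))  # ['mammal', 'animal', 'thing']
```

A software agent that can walk such chains can answer questions the raw facts never state directly, for instance that a dog is an animal.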
PLANNING

AI makes it possible for an agent to set goals and achieve them. The agent needs a way to visualize the future: a representation of the state of the world, the ability to predict how its actions will change that state, and the ability to make choices that maximize the utility of the available options.
In classical planning, the agent can assume that it is the only system acting in the world, which allows it to be certain of the consequences of its actions.
If it is not the only actor, however, the agent must reason under uncertainty. This calls for an agent that can not only assess its environment and make predictions, but also evaluate its predictions and adapt based on its assessment.
In multi-agent planning, the cooperation and competition of many agents are used to achieve a given goal. Evolutionary algorithms and swarm intelligence exploit emergent behavior of this kind.
LEARNING

Machine learning, the study of computer algorithms that improve automatically through experience, has been fundamental to AI since the inception of the field.
The ability to find patterns in a stream of input without requiring a human to label the inputs first is known as unsupervised learning. Supervised learning, which includes both classification and numerical regression, requires a human to label the input data first.
Classification helps to determine the category each item belongs to after seeing a number of examples of things from several categories.
Regression is an attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change.
Both classifiers and regression learners can be viewed as “function approximators” trying to learn an unknown function. For instance, a spam classifier can be viewed as learning a function that maps from the text of an email to one of two categories, “spam” or “not spam”.
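The "function approximator" view can be illustrated with the simplest regression learner: fitting a line to labeled examples by ordinary least squares. The data points below are invented for the sketch:

```python
# Illustrative sketch of a regression learner as a function approximator:
# fit y = a*x + b to labeled examples by ordinary least squares.

def fit_line(xs: list[float], ys: list[float]) -> tuple[float, float]:
    """Closed-form least-squares slope and intercept."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Labeled training data: inputs and the outputs a human has provided.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.0, 8.0]   # roughly y = 2x, with noise

a, b = fit_line(xs, ys)

def predict(x: float) -> float:
    """The learned approximation of the unknown input-output function."""
    return a * x + b

print(f"learned: y = {a:.2f}x + {b:.2f}")
```

A spam classifier is the same idea with a different output space: instead of a number, the learned function maps an email's text to one of two categories.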
Computational learning theory can assess learners by computational complexity, by sample complexity (how much data is required), or by other notions of optimization.
In reinforcement learning, the agent is rewarded for good responses and punished for bad ones. The agent uses this sequence of rewards and punishments to form a strategy for operating in its problem space.
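A toy sketch of this idea, assuming a tiny one-dimensional corridor world invented for the example: tabular Q-learning turns a stream of rewards into a strategy, here "always move right toward the rewarded state":

```python
# Illustrative tabular Q-learning sketch. The agent lives on a corridor
# of 5 cells (states 0..4), receives a reward of 1 for reaching cell 4,
# and learns a value Q[(state, action)] for each move from experience.
import random

N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPISODES = 0.5, 0.9, 500
Q = {(s, a): 0.0 for s in range(N_STATES) for a in (-1, +1)}

random.seed(0)
for _ in range(EPISODES):
    s = 0
    while s != GOAL:
        a = random.choice((-1, +1))             # explore randomly
        nxt = min(max(s + a, 0), N_STATES - 1)  # walls at both ends
        reward = 1.0 if nxt == GOAL else 0.0    # reward only at the goal
        best_next = max(Q[(nxt, b)] for b in (-1, +1))
        Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])
        s = nxt

# The learned strategy: in every state, moving right (+1) scores higher.
policy = {s: max((-1, +1), key=lambda a: Q[(s, a)]) for s in range(GOAL)}
print(policy)
```

The agent is never told "go right"; the preference emerges purely from the sequence of rewards and punishments, which is the essence of reinforcement learning.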
NATURAL LANGUAGE PROCESSING (NLP)
Artificial intelligence makes natural language processing possible by giving machines the ability to read and understand human language.
A sufficiently powerful natural language processing system would enable natural-language user interfaces and the acquisition of knowledge directly from human-written sources, such as newswire texts.
Some straightforward applications of natural language processing include information retrieval, text mining, question answering, and machine translation.
MACHINE PERCEPTION

Machine perception is the ability to use input from sensors (such as cameras for the visible spectrum or infrared, microphones, wireless signals, and active lidar, sonar, radar, and tactile sensors) to deduce aspects of the world. Applications include speech recognition, facial recognition, and object recognition.
Computer vision is the ability of artificial intelligence to analyze visual input. Such input is usually ambiguous: a giant, fifty-meter-tall pedestrian far away may produce exactly the same pixels as a nearby, normal-sized pedestrian, requiring the AI to judge the relative likelihood and reasonableness of different interpretations, for instance using its "object model" to conclude that fifty-meter pedestrians do not exist.
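The ambiguity can be made concrete with simple pinhole-camera arithmetic (the focal length and distances below are made-up numbers): an object's image height scales as focal length times real height divided by distance, so a distant giant and a nearby person can project to the same pixel size:

```python
# Illustrative pinhole-camera arithmetic: image height in pixels is
# focal_length_px * real_height / distance, so two very different
# scenes can produce identical pixels.

def image_height_px(height_m: float, distance_m: float,
                    focal_px: float = 1000.0) -> float:
    return focal_px * height_m / distance_m

near = image_height_px(height_m=1.8, distance_m=10.0)    # normal pedestrian
far = image_height_px(height_m=50.0, distance_m=277.8)   # 50 m "giant"
print(near, far)  # both about 180 pixels
```

Because the pixels alone cannot distinguish the two cases, the system has to fall back on prior knowledge about what sizes of pedestrian actually exist.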
MOTION AND MANIPULATION
Artificial intelligence has made possible several of the motions currently used in robotic engineering. Advanced robotic arms and other crucial robotic motions are now available in modern robots, making their application in industry, factories, banks, and other firms possible.
Modern robots can learn how to move efficiently despite the presence of friction and gear slippage.
When a mobile robot is given a small, static, and visible environment, it can easily determine its location and map its environment; however, dynamic environments, such as the interior of a patient's breathing body, pose a greater challenge.
Moravec’s paradox generalizes that the low-level sensorimotor skills humans take for granted are, counterintuitively, difficult to program into a robot. The paradox is named after Hans Moravec, who stated in 1988 that "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility."
The paradox can be extended to many forms of social intelligence. Distributed multi-agent coordination of autonomous vehicles remains a difficult problem. Affective computing is an interdisciplinary umbrella that comprises systems that recognize, interpret, process, or simulate human affects.
Moderate successes related to affective computing include textual sentiment analysis and more recently, multimodal affect analysis, wherein AI classifies the affects displayed by a videotaped subject.
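In its simplest form, textual sentiment analysis can be sketched as word counting against a hand-made lexicon; real systems learn these associations from data, and the words below are chosen only for the demo:

```python
# Illustrative sketch of lexicon-based textual sentiment analysis:
# sum per-word scores from a tiny hand-made lexicon. Real affective
# computing systems learn such associations rather than hard-coding them.

LEXICON = {"love": +1, "great": +1, "happy": +1,
           "hate": -1, "awful": -1, "sad": -1}

def sentiment(text: str) -> int:
    """Positive, negative, or zero score summed over known words."""
    return sum(LEXICON.get(w.strip(".,!?").lower(), 0)
               for w in text.split())

print(sentiment("I love this great phone"))   # 2
print(sentiment("What an awful, sad day"))    # -2
```

Even this crude scorer captures the core task: mapping human-produced text to an estimate of the affect it displays.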
In the long run, social skills and understanding of human emotion and game theory would be valuable to a social agent. Being able to predict the actions of others by understanding their motives and emotional states would allow an agent to make better decisions.
ARTIFICIAL GENERAL INTELLIGENCE (AGI)

Early projects aimed at general artificial intelligence failed because researchers underestimated the difficulty and lacked the necessary details. Modern AI researchers instead work on tractable, narrow applications rather than the broad approach of the earlier researchers.
The modern approach develops narrow artificial intelligence for individual domains, which could later be incorporated into a machine with artificial general intelligence (AGI).
When the narrow AI skills of various domains are combined in a single machine, its intelligence capacity is believed to exceed human ability in most or all of the areas incorporated into it.
Modern and future machines may be equipped with such cross-domain abilities.
For instance, having a surgical robot's abilities, a warehousing robot's abilities, and a cooking robot's abilities, among others, incorporated into a single robot enables it to perform across various domains and enhances its significance.
One attempt at artificial general intelligence was made by DeepMind, founded in 2010: its system was able to exercise various skills and could learn many diverse Atari games on its own, and a later variant of the system succeeded at sequential learning.
Besides transfer learning, hypothetical AGI breakthroughs could include the development of reflective architectures that can engage in decision-theoretic metareasoning, and figuring out how to "slurp up" a comprehensive knowledge base from the entire unstructured web.
Based on the arguments so far, there may not be any serious limitations to artificial intelligence in the future; rather, many believe that AI could lead to artificial general intelligence (AGI), which stands as an extension of artificial intelligence.
AGI, in turn, will continue to advance until a single machine can possibly perform everything that humans can do.
Philip is a graduate of Mechanical Engineering and an NDT inspector with vast practical knowledge of other engineering fields and software. He loves to write and share information relating to engineering, technology, science, and environmental issues. His posts are based on personal ideas, researched knowledge, and discoveries from the engineering, science, and investment fields.