
ARTIFICIAL INTELLIGENCE IN 2021: WHAT TO EXPECT


In 2021, we will see more AI solutions that can detect and remediate common IT problems on their own.

These solutions will proactively self-correct and self-heal malfunctions, reducing downtime for systems and critical applications.

The 2010s were a huge decade for artificial intelligence, thanks to advances in deep learning, a branch of AI that became feasible as the capacity to collect, store, and process large amounts of data grew. Today, deep learning is not just a topic of scientific research but also a key component of many everyday applications.

But a decade’s worth of research and application has made it clear that in its current state, deep learning is not the final solution to solving the ever-elusive challenge of creating human-level AI.

Hybrid artificial intelligence

Cognitive scientist Gary Marcus, who cohosted the debate, reiterated some of the key shortcomings of deep learning, including excessive data requirements, low capacity for transferring knowledge to other domains, opacity, and a lack of reasoning and knowledge representation.

Marcus, who is an outspoken critic of deep learning–only approaches, published a paper in early 2020 in which he suggested a hybrid approach that combines learning algorithms with rules-based software.

Other speakers also pointed to hybrid artificial intelligence as a possible solution to the challenges deep learning faces.

“One of the key questions is to identify the building blocks of AI and how to make AI more trustworthy, explainable, and interpretable,” computer scientist Luis Lamb said.

Lamb, who is a coauthor of the book Neural-symbolic Cognitive Reasoning, proposed a foundational approach for neural-symbolic AI that is based on both logical formalization and machine learning.

“We use logic and knowledge representation to represent the reasoning process that [it] is integrated with machine learning systems so that we can also effectively reform neural learning using deep learning machinery,” Lamb said.
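To make the flavor of such neural-symbolic systems concrete, here is a deliberately minimal, hypothetical sketch (not Lamb's actual architecture): a statistical model's label scores are filtered by a symbolic rule layer that encodes world knowledge, and the hybrid prediction is the best-scoring label that survives the rules. All names and values below are illustrative.

```python
def neural_scores(features):
    """Stand-in for a trained neural network's label scores (hypothetical values)."""
    return {"cat": 0.6, "dog": 0.3, "car": 0.1}

def admissible(label, facts):
    """Symbolic layer: reject labels that contradict known facts about the world."""
    # Encoded knowledge: cats and dogs have legs, so an object with zero
    # legs cannot be either.
    if label in {"cat", "dog"} and facts.get("num_legs") == 0:
        return False
    return True

def hybrid_predict(features, facts):
    """Best-scoring label that the symbolic rules do not veto."""
    scores = neural_scores(features)
    valid = {label: s for label, s in scores.items() if admissible(label, facts)}
    return max(valid, key=valid.get)

print(hybrid_predict({}, {"num_legs": 0}))  # rules veto the animal labels -> "car"
print(hybrid_predict({}, {"num_legs": 4}))  # no veto -> highest score wins -> "cat"
```

The point of the exercise is the division of labor: the learned component supplies graded evidence, while the logical component supplies hard constraints that no amount of training data should override.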

Reinforcement learning
Computer scientist Richard Sutton pointed out that, for the most part, work on AI lacks a “computational theory,” a term coined by neuroscientist David Marr, who is renowned for his work on vision. Computational theory defines what goal an information processing system seeks and why it seeks that goal.

“In neuroscience, we are missing a high-level understanding of the goal and the purposes of the overall mind. It is also true in artificial intelligence — perhaps more surprisingly in AI. There’s very little computational theory in Marr’s sense in AI,” Sutton said. Sutton added that textbooks often define AI simply as “getting machines to do what people do” and most current conversations in AI, including the debate between neural networks and symbolic systems, are “about how you achieve something, as if we understood already what it is we are trying to do.”

“Reinforcement learning is the first computational theory of intelligence,” Sutton said, referring to the branch of AI in which agents are given the basic rules of an environment and left to discover ways to maximize their reward. “Reinforcement learning is explicit about the goal, about the whats and the whys. In reinforcement learning, the goal is to maximize an arbitrary reward signal. To this end, the agent has to compute a policy, a value function, and a generative model,” Sutton said.
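Sutton's framing, an explicit goal of maximizing reward, with the agent computing a policy and a value function along the way, can be illustrated with the simplest version of the idea: tabular Q-learning on a made-up five-state corridor where only the rightmost state pays out. (This toy example is not from the debate.)

```python
import random

# Toy illustration of reward maximization: tabular Q-learning on a
# five-state corridor. Moving into (or staying in) the rightmost state
# yields +1 reward; every other transition yields 0.

N_STATES = 5
ACTIONS = (-1, +1)          # move left / move right
alpha, gamma = 0.5, 0.9     # learning rate, discount factor

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

random.seed(0)
for _ in range(500):
    s = random.randrange(N_STATES)
    for _ in range(20):
        a = random.choice(ACTIONS)   # purely exploratory behavior (off-policy)
        s2, r = step(s, a)
        # Q-learning update: move Q(s, a) toward reward + discounted best next value.
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The greedy policy derived from Q should choose "move right" in every state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)}
print(policy)
```

Sutton's fuller formulation also has the agent compute a generative model of the environment; this sketch learns values model-free, purely from sampled transitions, for brevity.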

He added that the field needs to further develop an agreed-upon computational theory of intelligence and said that reinforcement learning is currently the standout candidate, though he acknowledged that other candidates might be worth exploring.

Sutton is a pioneer of reinforcement learning and coauthor of a seminal textbook on the topic. DeepMind, the AI lab where he works, is deeply invested in "deep reinforcement learning," a variation of the technique that integrates neural networks into basic reinforcement learning methods. In recent years, DeepMind has used deep reinforcement learning to master games such as Go, chess, and StarCraft II.

Integrating world knowledge and common sense into AI
Computer scientist and Turing Award winner Judea Pearl, best known for his work on Bayesian networks and causal inference, stressed that AI systems need world knowledge and common sense to make the most efficient use of the data they are fed.

“I believe we should build systems which have a combination of knowledge of the world together with data,” Pearl said, adding that AI systems based only on amassing and blindly processing large volumes of data are doomed to fail.

Knowledge does not emerge from data, Pearl said. Instead, we employ the innate structures in our brains to interact with the world, and we use data to interrogate and learn from the world, as witnessed in newborns, who learn many things without being explicitly instructed.
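Pearl's point, that systems should combine prior world knowledge with observed data rather than rely on data alone, is the essence of Bayesian updating. A deliberately simple, made-up sketch (not Pearl's causal formalism, and the numbers are invented):

```python
# Encoding prior world knowledge and updating it with data,
# rather than learning from data alone.

def bayes_update(prior, likelihood):
    """Posterior proportional to prior * likelihood, normalized over hypotheses."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Prior knowledge of the world: rain is uncommon in this region.
prior = {"rain": 0.2, "no_rain": 0.8}

# Data: the lawn is observed to be wet, which is far likelier under rain.
likelihood = {"rain": 0.9, "no_rain": 0.1}

posterior = bayes_update(prior, likelihood)
# The data shifts belief toward rain, but the prior still tempers the conclusion.
```

Pearl's own work goes further, arguing that such priors must encode causal structure (as in Bayesian networks), not just raw probabilities, but the division of labor between built-in knowledge and observed data is the same.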
