
Top 5 limitations of artificial intelligence

What AI can do today, and what it will be able to do in the future, is a question everyone is asking. Here, though, we are discussing the limitations of artificial intelligence. AI won’t be very smart if computers don’t grasp cause and effect. That’s something even humans have trouble with.

In less than a decade, computers have become extremely good at diagnosing diseases, translating languages, and transcribing speech. They can outplay humans at complicated strategy games, create photorealistic images, and suggest useful replies to your emails.

Yet despite these impressive achievements, AI has glaring weaknesses.

Machine-learning systems are often duped or confounded by situations they haven’t seen before. A self-driving car gets flummoxed by a scenario that a human driver could handle easily. An AI system laboriously trained to carry out one task (identifying cats, say) has to be taught all over again to do something else (identifying dogs). In the process, it’s prone to losing some of the expertise it had in the original task. Computer scientists call this problem “catastrophic forgetting.”

These shortcomings have something in common: they exist because AI systems don’t understand causation. They see that some events are associated with other events, but they can’t ascertain which things directly make other things happen. It’s as if you knew that the presence of clouds made rain likelier, but you didn’t know clouds caused rain.

Understanding cause and effect is a big aspect of what we call common sense, and it’s an area in which AI systems today “are clueless,” says Elias Bareinboim. He should know: as the director of the new Causal AI Lab at Columbia University, he’s at the forefront of efforts to fix this problem.

His idea is to infuse artificial intelligence research with insights from the relatively new science of causality, a field shaped to an enormous extent by Judea Pearl, a Turing Award-winning scholar who considers Bareinboim his protégé.

As Bareinboim and Pearl describe it, AI’s ability to spot correlations—e.g., that clouds make rain more likely—is merely the simplest level of causal reasoning. It was enough to drive the boom in the AI technique known as deep learning over the past decade. Given a great deal of data about familiar situations, this method can make excellent predictions. A computer can calculate the probability that a patient with certain symptoms has a certain disease, because it has learned just how often thousands, or even millions, of people with the same symptoms had that disease.
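
To make this concrete, here is a minimal sketch of that purely correlational level of reasoning: estimating a conditional probability from frequency counts alone. The patient records and symptom names below are hypothetical, invented only for illustration.

```python
import pandas as pd

# Hypothetical patient records (invented for illustration): each row lists
# observed symptoms and whether the disease was present.
records = pd.DataFrame({
    "fever":   [1, 1, 0, 1, 0, 1, 1, 0],
    "cough":   [1, 0, 0, 1, 1, 1, 0, 0],
    "disease": [1, 0, 0, 1, 0, 1, 1, 0],
})

# P(disease | fever=1, cough=1), estimated purely from co-occurrence counts.
# This is correlation-level reasoning: frequencies, with no notion of causes.
match = records[(records["fever"] == 1) & (records["cough"] == 1)]
print(f"P(disease | fever, cough) ≈ {match['disease'].mean():.2f}")
```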

But there’s a growing consensus that progress in AI will stall if computers don’t get better at wrestling with causation. If machines could grasp that certain things lead to other things, they wouldn’t have to learn everything anew all the time—they could take what they had learned in one domain and apply it to another. And if machines could use common sense, we’d be able to put more trust in them to take actions on their own, knowing that they aren’t likely to make dumb errors.

Today’s AI has only a limited ability to infer what will result from a given action. In reinforcement learning, a technique that has allowed machines to master games like chess and Go, a system uses extensive trial and error to discern which moves will lead to wins. But this approach doesn’t work in messier settings in the real world. It doesn’t even leave a machine with a general understanding of how it would play other games.
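
As an illustration of that trial-and-error loop, here is a minimal tabular Q-learning sketch in a toy five-cell corridor (a hypothetical environment invented for this example, not any production RL setup). The agent learns which moves win without forming any model of why they win.

```python
import random

# A minimal sketch of reinforcement learning's trial-and-error loop:
# tabular Q-learning in a toy corridor where only the rightmost cell
# gives a reward. Environment and constants are invented for illustration.
N_STATES = 5
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2
q = [[0.0, 0.0] for _ in range(N_STATES)]   # q[state][action]; 0=left, 1=right

for episode in range(300):
    s = 0
    while s < N_STATES - 1:
        # Explore sometimes (or on ties); otherwise exploit the best-known move.
        if random.random() < EPSILON or q[s][0] == q[s][1]:
            a = random.randint(0, 1)
        else:
            a = 0 if q[s][0] > q[s][1] else 1
        s_next = max(s - 1, 0) if a == 0 else s + 1
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # Update from observed outcomes alone — no model of *why* right wins.
        q[s][a] += ALPHA * (r + GAMMA * max(q[s_next]) - q[s][a])
        s = s_next

print("preference for moving right, by state:",
      [round(q[s][1] - q[s][0], 2) for s in range(N_STATES - 1)])
```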

An even higher level of causal thinking would be the ability to reason about why things happened and to ask “what if” questions. A patient dies during a clinical trial; was it the fault of the experimental medicine or something else? School test scores are falling; what policy changes would most improve them? This kind of reasoning is far beyond the current capability of AI.


Performing miracles will be impossible for AI


The dream of endowing computers with causal reasoning drew Bareinboim from Brazil to the United States in 2008, after he completed a master’s in computer science at the Federal University of Rio de Janeiro. He jumped at an opportunity to study under Judea Pearl, a computer scientist and statistician at UCLA. Pearl, 83, is a giant—the giant—of causal inference, and his career helps illustrate why it’s hard to create AI that understands causality.

Even well-trained scientists are apt to misinterpret correlations as signs of causation—or to err in the other direction, hesitating to call out causation even when it’s justified. In the 1950s, for example, a few prominent statisticians muddied the waters around whether tobacco caused cancer. They argued that without an experiment randomly assigning people to be smokers or nonsmokers, no one could rule out the possibility that some unknown factor—stress, perhaps, or some gene—caused people both to smoke and to get lung cancer.

Eventually, the fact that smoking causes cancer was definitively established, but it needn’t have taken so long. Since then, Pearl and other statisticians have devised a mathematical approach to identifying the facts required to support a causal claim. Pearl’s method shows that, given the prevalence of smoking and lung cancer, an independent factor causing both would be extremely unlikely.
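
One simple piece of the bookkeeping this framework formalizes is the back-door adjustment: averaging the effect of smoking over a suspected common cause instead of conditioning on smoking alone. The sketch below uses entirely made-up data in which a hypothetical “gene” really does explain all of the correlation—the opposite of what Pearl’s analysis showed for real smoking data—purely to demonstrate the machinery, and it is one formula from his framework, not the full method.

```python
import pandas as pd

# Made-up observational data: in this deliberately extreme toy, a "gene"
# drives both smoking and cancer, and smoking has no real effect at all.
df = pd.DataFrame({
    "gene":   [0, 0, 0, 0, 1, 1, 1, 1] * 50,
    "smoke":  [0, 0, 0, 1, 1, 1, 1, 0] * 50,
    "cancer": [0, 0, 0, 0, 1, 1, 1, 1] * 50,
})

# Naive correlational estimate: P(cancer | smoke=1).
naive = df[df.smoke == 1].cancer.mean()

# Back-door adjustment: P(cancer | do(smoke=1))
#   = sum over g of P(cancer | smoke=1, gene=g) * P(gene=g)
adjusted = sum(
    df[(df.smoke == 1) & (df.gene == g)].cancer.mean() * (df.gene == g).mean()
    for g in (0, 1)
)
# Here naive = 0.75 but adjusted = 0.50: the adjustment strips out the
# part of the correlation that the confounder accounts for.
print(f"naive: {naive:.2f}   adjusted for gene: {adjusted:.2f}")
```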

Conversely, Pearl’s formulas also help identify when correlations can’t be used to determine causation. Bernhard Schölkopf, who researches causal AI techniques as a director at Germany’s Max Planck Institute for Intelligent Systems, points out that you can predict a country’s birth rate if you know its stork population. That isn’t because storks deliver babies or because babies attract storks, but probably because economic development leads to more babies and more storks. Pearl has helped give statisticians and computer scientists ways of attacking such problems, Schölkopf says.
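
The stork example is easy to reproduce in simulation: a hidden common cause generates a strong correlation between two variables that never touch each other. The numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented simulation: "development" drives both stork habitat and birth
# rates; storks and babies never interact directly.
development = rng.normal(size=1000)
storks = 2.0 * development + rng.normal(scale=0.5, size=1000)
births = 1.5 * development + rng.normal(scale=0.5, size=1000)

# A strong correlation appears even though neither causes the other.
print("corr(storks, births) =", round(np.corrcoef(storks, births)[0, 1], 2))
```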

Pearl’s work has also led to the development of causal Bayesian networks—software that sifts through large amounts of data to detect which variables appear to have the most influence on other variables. For example, GNS Healthcare, a company in Cambridge, Massachusetts, uses these techniques to advise researchers about experiments that look promising.

In one project, GNS worked with researchers who study multiple myeloma, a kind of blood cancer. The researchers wanted to know why some patients with the disease live longer than others after getting stem-cell transplants, a common form of treatment. The software churned through data with 30,000 variables and pointed to a few that seemed especially likely to be causal. Biostatisticians and experts in the disease zeroed in on one in particular: the level of a certain protein in patients’ bodies. Researchers could then run a targeted clinical trial to see whether patients with the protein did indeed benefit more from the treatment. “It’s way faster than poking here and there in the lab,” says GNS cofounder Iya Khalil.
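
GNS’s software is proprietary, but a crude stand-in can convey the idea of sifting variables for likely causal influence. In the invented data below, only “protein” actually drives the outcome, while “marker” merely rides along with it; a joint fit—a very rough proxy for the conditional-independence tests a causal Bayesian network automates—separates the two.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Invented data: only "protein" drives the outcome; "marker" just
# correlates with protein. (Not GNS's model or data.)
protein = rng.normal(size=n)
marker = protein + rng.normal(size=n)
outcome = 2.0 * protein + rng.normal(size=n)

# Raw correlations make both variables look influential...
for name, x in [("protein", protein), ("marker", marker)]:
    print(name, "corr with outcome:", round(np.corrcoef(x, outcome)[0, 1], 2))

# ...but fitting them jointly isolates the real driver: the marker's
# coefficient collapses toward zero once protein is accounted for.
X = np.column_stack([protein, marker])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print("joint coefficients [protein, marker]:", np.round(coef, 2))
```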

Nonetheless, the advances that Pearl and other scholars have achieved in causal theory haven’t yet made many inroads in deep learning, which identifies correlations without worrying much about causation. Bareinboim is working to take the next step: making computers more useful tools for human causal explorations.



One of his systems, which is still in beta, can help scientists determine whether they have sufficient data to answer a causal question. Richard McElreath, an anthropologist at the Max Planck Institute for Evolutionary Anthropology, is using the software to guide research into why humans go through menopause (we are the only apes that do).

The hypothesis is that the decline of fertility in older women benefited early human societies because women who put more effort into caring for grandchildren ultimately had more descendants. But what evidence might exist today to support the claim that children do better with grandparents around? Anthropologists can’t just compare the educational or medical outcomes of children who have lived with grandparents and those who haven’t. There are what statisticians call confounding factors: grandmothers might be likelier to live with grandchildren who need the most help. Bareinboim’s software can help McElreath discern which studies about kids who grew up with their grandparents are least riddled with confounding factors and could be valuable in answering his causal query. “It’s an enormous breakthrough,” McElreath says.

The last mile of AI


Bareinboim talks fast and often gestures with two hands in the air, as if he’s trying to balance two sides of a mental equation. It was halfway through the semester when I visited him at Columbia in October, but it seemed as if he had barely moved into his office—hardly anything on the walls, no books on the shelves, only a sleek Mac computer and a whiteboard so dense with equations and diagrams that it looked like a detail from a cartoon about a mad professor.

He shrugged off the provisional state of the space, saying he had been very busy giving talks about both sides of the causal revolution. Bareinboim believes work like his offers the opportunity not just to incorporate causal thinking into machines, but also to improve it in humans.

Getting people to think more carefully about causation isn’t necessarily much easier than teaching it to machines, he says. Researchers in a wide range of disciplines, from biology to public policy, are sometimes content to unearth correlations that aren’t rooted in causal relationships. For instance, some studies suggest drinking alcohol will kill you early, while others indicate that moderate consumption is fine and even beneficial, and still other research has found that heavy drinkers outlive nondrinkers. This phenomenon, known as the “reproducibility crisis,” crops up not only in medicine and nutrition but also in psychology and economics. “You can see the fragility of these inferences,” says Bareinboim. “We’re flipping results every few years.”

He argues that anyone asking “what if”—medical researchers setting up clinical trials, social scientists developing pilot programs, even web publishers preparing A/B tests—should start not merely by gathering data but by using Pearl’s causal logic and software like Bareinboim’s to determine whether the available data could possibly answer a causal hypothesis. Eventually, he envisions this leading to “automated scientist” software: a person could think up a causal question to go after, and the software would combine causal inference theory with machine-learning techniques to rule out experiments that wouldn’t answer the question. That might save scientists from a huge number of costly dead ends.

Bareinboim described this vision while we were sitting in the lobby of MIT’s Sloan School of Management, after an interview he gave last fall. “We have a building here at MIT with, I don’t know, 200 people,” he said. How do those social scientists, or any scientists anywhere, decide which experiments to pursue and which data points to gather? By following their intuition: “They try to see where things will lead, based on their current understanding.”

That’s an inherently limited approach, he said, because human scientists designing an experiment can consider only a handful of variables in their minds at once. A computer, on the other hand, can see the interplay of hundreds or thousands of variables. Encoded with “the basic principles” of Pearl’s causal calculus and able to calculate what might happen with new sets of variables, an automated scientist could suggest exactly which experiments the human researchers should spend their time on. Maybe some public policy that has been shown to work only in Texas could be made to work in California if a few causally relevant factors were better appreciated. Scientists would no longer be “doing experiments in the darkness,” Bareinboim said.

He also doesn’t think it’s that far off: “This is the last mile before the victory.”


What if AI?


Finishing that mile will probably require techniques that are just beginning to be developed. For example, Yoshua Bengio, a computer scientist at the University of Montreal who shared the 2018 Turing Award for his work on deep learning, is trying to get neural networks—the software at the heart of deep learning—to do “meta-learning” and see the causes of things.

As things stand now, if you wanted a neural network to detect when people are dancing, you’d show it many, many images of dancers. If you wanted it to identify when people are running, you’d show it many, many images of runners. The system would learn to distinguish runners from dancers by identifying features that tend to differ across the images, such as the positions of a person’s hands and arms. But Bengio points out that fundamental knowledge about the world can be gleaned by analyzing the things that are similar or “invariant” across data sets. Maybe a neural network could learn that movements of the legs physically cause both running and dancing. Maybe after seeing these examples and many others that show people only a few feet off the ground, a machine would eventually understand something about gravity and how it limits human movement. Over time, with enough meta-learning about variables that are consistent across data sets, a computer could gain causal knowledge that would be reusable in many domains.
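
A toy illustration of that invariance idea (not Bengio’s actual method, and with invented “pose” features): the relationship between leg movement and speed stays the same across two activity data sets, while arm position shifts. A meta-learner hunting for stable mechanisms would flag the former as a candidate cause.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented pose features for two activities. Arm position differs between
# the data sets; the legs -> speed mechanism is the same in both.
def make_dataset(arm_style, n=500):
    legs = rng.uniform(0, 1, n)                  # leg movement intensity
    arms = arm_style + rng.normal(0, 0.1, n)     # varies across activities
    speed = 3.0 * legs + rng.normal(0, 0.2, n)   # invariant mechanism
    return legs, arms, speed

for name, arm_style in [("running", 0.2), ("dancing", 0.8)]:
    legs, arms, speed = make_dataset(arm_style)
    slope = np.polyfit(legs, speed, 1)[0]        # legs->speed relationship
    print(f"{name}: speed-vs-legs slope ≈ {slope:.2f}, "
          f"mean arm position ≈ {arms.mean():.2f}")
```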

For his part, Pearl says AI can’t be truly intelligent until it has a rich understanding of cause and effect. Although causal reasoning wouldn’t be sufficient for an artificial general intelligence, it’s necessary, he says, because it would enable the introspection that is at the core of cognition. “What if” questions “are the building blocks of science, of moral attitudes, of free will, of consciousness,” Pearl told me.

You can’t draw Pearl into predicting how long it will take for computers to gain powerful causal reasoning abilities. “I am not a futurist,” he says. But in any case, he thinks the first move should be to develop machine-learning tools that combine data with available scientific knowledge: “We have a lot of knowledge that resides within the human skull which isn’t utilized.”

Brian Bergstein, a former editor at MIT Technology Review, is deputy opinion editor at the Boston Globe.
