Artificial intelligence won’t be brilliant as long as computers fail to grasp cause and effect. That raises a question about the tasks AI still cannot do, because causal reasoning is something even humans have trouble with.
In less than a decade, computers have become remarkably good at diagnosing diseases, translating languages, and transcribing speech. They can outplay humans at complicated strategy games, create photorealistic images, and suggest useful replies to your emails. Yet despite these impressive achievements, artificial intelligence has glaring weaknesses, and there are still tasks AI cannot do.
Catastrophic Forgetting in AI:
Here’s a fact: machine learning systems can be misled by situations they haven’t seen before. A self-driving car gets confused by a scenario that a human driver could handle easily. And an AI system that has been laboriously trained to carry out one simple task, like identifying cats, has to be taught all over again to do something else, like washing dishes.
In the process of retraining, the system is liable to lose some of its expertise on the original task. Computer scientists call this problem “catastrophic forgetting.”
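Catastrophic forgetting can be seen even in the simplest learners. Below is a minimal, purely illustrative sketch (not from the article): a perceptron masters task A, is then trained only on task B, and its accuracy on task A collapses because the same weights are overwritten.

```python
import random

# Illustrative sketch of catastrophic forgetting: one linear model,
# trained sequentially on two conflicting tasks, forgets the first.
random.seed(1)

def make_data(n, label_fn):
    pts = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(n)]
    return [(x, y, label_fn(x, y)) for x, y in pts]

task_a = make_data(200, lambda x, y: 1 if x > 0 else -1)
task_b = make_data(200, lambda x, y: 1 if x < 0 else -1)  # opposite rule

w = [0.0, 0.0]  # shared weights for both tasks

def predict(x, y):
    return 1 if w[0] * x + w[1] * y > 0 else -1

def train(data, epochs=20, lr=0.1):
    # Classic perceptron update: adjust weights only on mistakes.
    for _ in range(epochs):
        for x, y, label in data:
            if predict(x, y) != label:
                w[0] += lr * label * x
                w[1] += lr * label * y

def accuracy(data):
    return sum(predict(x, y) == label for x, y, label in data) / len(data)

train(task_a)
acc_before = accuracy(task_a)  # high: task A has been learned
train(task_b)                  # retrain on task B only
acc_after = accuracy(task_a)   # low: task A has been forgotten
print(acc_before, acc_after)
```

The two tasks deliberately conflict, which makes the effect extreme, but the underlying mechanism is the same one that degrades real networks when they are retrained without revisiting old data.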
These shortcomings exist because AI systems don’t understand causation. They can learn that some events are associated with others, but not that one thing directly makes another happen. For instance, a system can learn that the presence of clouds makes rain likelier, but it doesn’t understand that clouds cause rain.
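The gap between association and causation is easy to demonstrate. In this small sketch (variable names and the scenario are invented for illustration), a hidden common cause drives two measurements, so they correlate strongly even though neither causes the other, which is exactly the pattern a purely statistical learner cannot distinguish from real causation.

```python
import random

# Toy example of correlation without causation: a hidden "season"
# variable drives both ice-cream sales and sunburn counts, so the
# two correlate strongly although neither causes the other.
random.seed(0)

season = [random.uniform(0, 1) for _ in range(1000)]      # hidden cause
ice_cream = [s + random.gauss(0, 0.1) for s in season]    # effect 1
sunburns = [s + random.gauss(0, 0.1) for s in season]     # effect 2

def correlation(xs, ys):
    # Pearson correlation coefficient, computed from scratch.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = correlation(ice_cream, sunburns)
print(round(r, 2))  # strong positive correlation
```

A model trained only on ice-cream and sunburn data would happily "predict" one from the other; it has no way to know that banning ice cream would not prevent sunburns.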
Thinking the Unthinkable:
There’s a growing consensus that progress in AI will stall if computers don’t get better at wrestling with causation. If a machine could grasp that certain things lead to other things, it wouldn’t have to learn everything anew all the time.
If that were possible, machines could take what they learn in one domain and apply it to another, which still counts among the tasks AI cannot do.
The cherry on top: if machines could use common sense, humans could trust them to take sensible actions on their own, knowing they are unlikely to make dumb mistakes along the way.
Ability to Master an Algorithm:
So what tasks can’t AI do today? For one, AI has only a limited ability to infer the results of a given action. Reinforcement learning is a technique that lets machines master games like chess, cards, and Scrabble.
It works by having a system use extensive trial and error to discern which moves lead to a win. But this approach transfers poorly to the real world, where the space of possibilities is far larger than in any single competition. Even after mastering one game, a machine trained this way has no understanding of how to play other games.
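Trial-and-error learning of this kind can be sketched in a few lines. Below is a minimal Q-learning example (the one-step "game" and all parameter values are invented for illustration): the agent discovers, purely by trying actions and observing rewards, which of two actions wins. Note that nothing it learns here would transfer to a different game.

```python
import random

# Minimal Q-learning sketch: learn by trial and error which action
# wins a trivial one-step game (action 1 pays 1, action 0 pays 0).
random.seed(0)

q = [0.0, 0.0]   # estimated value of each action
alpha = 0.5      # learning rate
epsilon = 0.2    # exploration rate

def reward(action):
    return 1.0 if action == 1 else 0.0

for _ in range(200):
    if random.random() < epsilon:
        action = random.randrange(2)   # explore: try a random action
    else:
        action = q.index(max(q))       # exploit: pick the best so far
    # Update the value estimate toward the observed reward.
    q[action] += alpha * (reward(action) - q[action])

best = q.index(max(q))
print(best)  # the agent has learned that action 1 wins
```

The agent never "understands" why action 1 is better; it has only tabulated which choice was rewarded, which is precisely the limitation the article describes.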
A hallmark of intelligence is the ability to reason about why things happened and to ask counterfactual “what if” questions, such as:
- Suppose a patient dies during a clinical trial; was it the fault of the experimental medicine or something else?
- School test scores are falling; what policy change would most improve them?
- A game’s score swings suddenly; can the system explain what caused the change in its calculation?
These questions make perfect sense to us, yet it is becoming hard to tell whether we are shaping AI technology or it is shaping us.
Applying this kind of reasoning remains far beyond the current capability of artificial intelligence. If machines achieve these fundamental goals in the near future, they will be far better at solving problems, discovering new methods to advance medical research, and filling the gaps that remain.