The mysterious workings of artificial intelligence that scientists have been unable to explain
Day by day, people are becoming more dependent on artificial intelligence. With this growing dependence, AI systems continue to evolve so rapidly that scientists are no longer able to fully explain how they work.
AI systems are becoming more complex and humans are less able to understand them
What is your favorite ice cream flavor? You might say vanilla or chocolate, and if asked why, you'll probably say it's because it tastes good. But why do you describe it as good? And why do you still sometimes want to try other flavors? We rarely question the basic decisions we make in our daily lives, but when we do, we may realize that we cannot pinpoint the exact causes of our preferences, emotions, and desires at any given moment.
Artificial intelligence has a similar problem. The people who develop AI are finding it increasingly difficult to explain how it works and to determine why it produces certain, sometimes unexpected, results.
Deep neural networks (DNNs), systems built from many layers of data processing that scientists designed to mimic the neural networks of the human brain, often seem to mirror not only human intelligence but also human inexplicability: we struggle to explain how they work, much as we struggle to explain the functioning of the human brain.
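To picture what "many layers" means in practice, here is a minimal sketch in Python; the layer sizes and random weights are invented purely for illustration. Data passes through a stack of transformations, and after a few layers no single number inside the network corresponds to a rule a human can read.

```python
# A minimal sketch of a layered network; sizes and weights are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Three hypothetical layers: 8 inputs -> 16 -> 16 -> 1 output.
weights = [rng.normal(size=(8, 16)),
           rng.normal(size=(16, 16)),
           rng.normal(size=(16, 1))]

def forward(x):
    # Each layer's output feeds the next; the intermediate numbers
    # have no human-readable meaning on their own.
    for w in weights:
        x = relu(x @ w)
    return x

print(forward(rng.normal(size=(1, 8))))  # one opaque number comes out
```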
The mysterious black box
Most artificial intelligence systems work as a "black box": a system observed only in terms of its inputs and outputs. Scientists do not try to decipher this black box, the mysterious processes the system carries out in between, as long as they receive the outputs they are looking for.
For example, if you feed an AI data on every ice cream flavor, along with demographic data on the economic, social, and lifestyle factors of millions of people, it could probably guess your favorite ice cream flavor, or where you prefer to eat it and at which ice cream shop, even though it was never programmed for that purpose.
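A toy version of that idea, built on entirely synthetic data and an invented preference rule, shows how a model trained only on demographic-style features can pick up a pattern it was never explicitly programmed with:

```python
# Toy sketch: the model "discovers" a flavor preference pattern from
# demographic-style features. All data and the rule itself are invented.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 1000

# Invented features: age, income bracket (0-4), city dweller (0/1).
X = np.column_stack([rng.integers(18, 80, n),
                     rng.integers(0, 5, n),
                     rng.integers(0, 2, n)])

# Invented rule standing in for a real-world pattern hidden in the data:
# younger city dwellers lean chocolate, everyone else vanilla.
y = np.where((X[:, 0] < 35) & (X[:, 2] == 1), "chocolate", "vanilla")

model = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(model.predict([[25, 2, 1]]))  # guesses a flavor for a new person
```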
These types of AI systems are notorious for certain problems, because the data they are trained on is often inherently biased, mimicking the racial and gender biases that exist in our society. For example, facial recognition technology disproportionately misidentifies Black people.
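A simple sketch can show how skewed training data produces skewed error rates. The data here is synthetic, and the "opposite pattern" given to the under-represented group is an exaggerated stand-in for real demographic differences: the model fits the majority group, and the minority group pays in accuracy.

```python
# Sketch: an under-represented group suffers a far higher error rate
# because the model fits the majority pattern. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def make_group(n, sign):
    # sign flips how the features relate to the label in this group
    X = rng.normal(size=(n, 2))
    y = (sign * (X[:, 0] + X[:, 1]) > 0).astype(int)
    return X, y

Xa, ya = make_group(2000, +1)  # well-represented group A
Xb, yb = make_group(100, -1)   # under-represented group B

model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))

for name, X, y in [("A", Xa, ya), ("B", Xb, yb)]:
    err = np.mean(model.predict(X) != y)
    print(f"group {name}: error rate {err:.1%}")  # B fares much worse
```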
Unable to explain
Over time, these systems become difficult to repair, in part because their developers often cannot fully explain how they work. As AI systems grow more complex and humans become less able to understand them, AI experts and researchers are warning developers to step back and focus more on how and why a system produces certain results, rather than on the fact that it can produce them accurately and quickly.
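One common way researchers probe the "how and why" is to perturb each input and measure how much the model's accuracy depends on it. The sketch below uses scikit-learn's permutation importance on synthetic data; it is a small stand-in for the much broader field of explainability research, not a method named in this article.

```python
# Sketch: permutation importance asks which inputs a model leans on.
# Shuffling a feature the model depends on hurts accuracy the most.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)  # only feature 0 actually matters

model = RandomForestClassifier(n_estimators=50).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")  # feature 0 dominates
```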
Roman V. Yampolskiy, a professor of computer science at the University of Louisville, writes in his paper "Unexplainability and Incomprehensibility of AI": "In addition, if we get used to accepting AI's answers without explanation, essentially treating it as an Oracle system, we would not be able to tell if it begins providing wrong or manipulative answers in the future," as reported by the website Vice.
AI systems are already used in self-driving cars, customer-service chatbots, and disease diagnosis, and they can perform some tasks better than humans.
For example, a machine can remember a trillion items, such as numbers, letters, and words, whereas humans hold an average of about seven items in short-term memory, and it can process and compute information far more quickly than a person. But as AI models have grown more capable, their developers' ability to explain how they work has diminished over time.
"If business leaders and data scientists don't understand why and how AI calculates the outputs it does, that creates potential risks," said Emily M. Bender, a professor of linguistics at the University of Washington, in a press statement. "The lack of ability to explain how artificial intelligence works limits its potential value," she added, according to the technology site Motherboard.
The risk is that an AI system may make decisions based on values we do not share, such as biased (racist or sexist) decisions, said Jeff Clune, an associate professor of computer science at the University of British Columbia. Another risk is that the system could make a genuinely bad decision that we cannot intervene to correct, because we do not understand its reasons.