Scientists cannot explain the mysterious inner workings of artificial intelligence
What is your favorite ice cream flavor? You might say vanilla or chocolate, and if asked why, you'll probably say it's because it tastes good. But why does it taste good? And why do you still sometimes want to try other flavors? We rarely question the basic decisions we make in daily life, but when we do, we may realize we cannot pinpoint the exact reasons for our preferences, emotions, and desires at any given moment.
Artificial intelligence has a similar problem. The people who develop AI increasingly struggle to explain how it works and why it produces certain, sometimes unexpected, results.
Deep neural networks (DNNs), built from multiple layers of data-processing units designed to mimic the neural networks of the human brain, often seem to mirror not only human intelligence but also its inexplicability: we cannot fully explain how the human brain works, and we struggle just as much to explain how these systems arrive at their results.
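To make the idea of "multiple layers" concrete, here is a minimal, purely illustrative sketch in Python. The layer sizes and random weights are assumptions made up for this example, not taken from any real system; the point is that an input passes through several layers of simple arithmetic, and after a few layers it is already hard to say what any individual number inside the network "means".

```python
import numpy as np

# A minimal sketch of a "deep" network: data flows through several
# layers of simple processing units, loosely inspired by neurons.
# Layer sizes and random weights here are illustrative assumptions only.
rng = np.random.default_rng(0)
layer_sizes = [4, 8, 8, 3]          # input -> two hidden layers -> output
weights = [rng.normal(size=(m, n))  # one weight matrix per layer
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Pass an input vector through every layer in turn."""
    activation = x
    for w in weights:
        # each layer: a linear combination followed by a simple nonlinearity
        activation = np.maximum(0, activation @ w)
    return activation

print(forward(np.array([0.2, -1.0, 0.5, 0.3])))
```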
The mysterious black box
Most artificial intelligence systems operate as a "black box": a system understood only in terms of its inputs and outputs. Scientists generally do not try to decipher the mysterious operations carried out inside that box, as long as they get the outputs they are looking for.
For example, if you give an AI system data about every ice cream flavor, along with demographic data about the economic, social, and lifestyle factors of millions of people, it will likely be able to guess your favorite flavor, or where you prefer to eat it and at which ice cream shop, even if it was not programmed for that purpose.
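The sketch below illustrates that black-box interaction; it is not the article's own example. It trains a small classifier on invented data and then uses it exactly as described: features go in, a flavor comes out, and nothing in that exchange explains why. The data, features, and flavors are all fabricated, and scikit-learn's MLPClassifier is just one convenient stand-in for such a system.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical toy data: each row is one person described by a few
# demographic/lifestyle numbers (values invented for illustration),
# and the label is the flavor they reported preferring.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))                     # 200 people, 5 made-up features
y = rng.choice(["vanilla", "chocolate", "pistachio"], size=200)

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000)
model.fit(X, y)

# Using the trained model is a black-box interaction: features in,
# a flavor out. Nothing in this call explains *why* it chose that flavor.
new_person = rng.normal(size=(1, 5))
print(model.predict(new_person))                  # e.g. ['pistachio']

# The internals exist (model.coefs_ holds the learned weight matrices),
# but reading them does not translate into a human-level explanation.
print(sum(w.size for w in model.coefs_), "learned weights")
```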
These types of AI systems are notorious for certain problems, because the data they are trained on is often biased, mirroring the racial and gender biases present in our society. For example, facial recognition technology disproportionately misidentifies Black people.
Unable to explain
Over time these systems become difficult to fix, in part because their developers often cannot fully explain how they work. As AI systems grow more complex and humans become less able to understand them, AI experts and researchers are urging developers to step back and focus more on how and why a system produces certain outcomes, rather than on the fact that it can produce them accurately and quickly.
Roman V. Yampolskiy, a professor of computer science at the University of Louisville, writes in his paper "Unexplainability and Incomprehensibility of AI": "If we become accustomed to accepting AI answers without explanation, essentially treating it like an oracle, we will not be able to tell whether it begins giving wrong or manipulative answers in the future," as reported by Vice.
AI systems are already used in self-driving cars, customer-service chatbots, and disease diagnosis, and they can perform some tasks better than humans.
For example, a machine can remember a trillion items, such as numbers, letters, and words, whereas humans hold an average of only seven in short-term memory. But as AI models have become more capable, their developers' ability to explain how they work has declined.
"If business leaders and data scientists don't understand why and how AI calculates the output it does, it creates potential risks," said Emily M. Bender, a professor of linguistics at the University of Washington, in a press release. The lack of an ability to explain how artificial intelligence works limits its potential value.
One risk is that an AI system may make decisions based on values we do not agree with, such as racist or sexist biases. Another is that the system might make a very bad decision that we cannot intervene in, because we do not understand its reasoning, said Jeff Clune, assistant professor of computer science at the University of British Columbia.
Source: websites