Generative AI visual forgery fools its detection software
A fake picture of a kiss between Elon Musk and a female robot that specialized detection tools failed to flag (Twitter)
The public has no protection from it, as technology companies have failed to devise prevention tools
Since the explosive launch of ChatGPT on November 30, 2022, sweeping and profound warnings about generative AI have been rolling in like restless ocean waves.
Much could be said, but the real and wide-ranging dangers this intelligence carries call for nothing less than a global, multidimensional alert. The warnings did not come from backward societies or schools of thought hostile to change, progress, and technology; they came from the makers of that artificial intelligence themselves and from those who follow the big picture of technology, its dimensions, and its effects. It is worth noting that those responsible, warning voices also called, in various forms, for a positive and critical engagement with generative AI so as to bring it under constructive control. This common thread likely connects the warnings and visions, of differing tones, issued by tech investor Elon Musk, former Google CEO Eric Schmidt, AI pioneer Geoffrey Hinton, who left his post at Google to speak freely about the technology's risks, and Microsoft co-founder Bill Gates (Microsoft has become the lead investor in OpenAI, the maker of ChatGPT), as well as the European Union and even the White House.
Weeks after meeting senior officials of leading information and communications companies at the White House in April 2023, President Biden attended a gathering in San Francisco of prominent experts and academics specializing in artificial intelligence. That meeting produced a consensus on the seriousness, depth, and universality of the dangers generative AI poses to contemporary societies, and hence on the need for multidimensional, globally coordinated action to find solutions that keep this intelligence under control, as a brilliant tool in the service of human minds and the development of human societies.
A story of declared failure
Against this backdrop of fear and caution, more than one voice in the aforementioned contexts agreed that the companies building generative AI must assume their responsibility, above all by creating tools that enable the public to ward off its dangers. So far, however, the story is one of declared failure.
Among the prominent dangers of generative AI is its use in deepfake techniques, especially for images and videos.
According to a summary of a lengthy article recently published in The New York Times, the detection tools put into use by technology companies trail disappointment and defeat when confronted with generative AI deepfakes. The disappointment is all the more bitter because striking images lodge themselves in the eyes and in visual memory, leaving tangled impressions there even when the mind knows that what the eyes saw was nothing but the art of manipulation and forgery.
The New York Times offered a well-known set of examples: Elon Musk did not kiss a female robot; Pope Francis did not wear the "Balenciaga" scarf in rainbow colors, a reference to people with non-traditional sexual orientations; the US police did not arrest Donald Trump as a criminal caught by an officer resembling Joe Biden; and so on.
Yet those images and clips spread like wildfire, roaming the world and raining down on hundreds of millions of screens across the globe.
What you see is not the image of an attractive girl but a fabrication produced with generative AI, one that may elude the technological tools meant to detect it (Midjourney)
Humans have nothing but a critical mind
The New York Times focused on this type of scene and others. The newspaper tested five basic deepfake-detection tools made by technology companies specializing in this corner of machine intelligence. It fed these tools more than 100 images produced by well-known generative AI applications that create images, paintings, and videos, such as Midjourney, Stable Diffusion, and DALL-E.
The newspaper also supplied the same tools with a set of real photos, some taken by photographers working for the paper. The results showed that the tools technology companies manufacture and present as means of detecting deepfakes may be improving, but they remain prone to failure.
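The Times' evaluation can be sketched in code: run a mixed set of real and AI-generated images through a detector, turn each score into a verdict, and measure agreement with the ground truth. The sketch below is a minimal illustration, not the newspaper's actual harness; the label name "artificial" and the 0.5 threshold are assumptions. One of the tools named in this article, Umm-maybe's AI Art Detector, is distributed as an open-source model on Hugging Face, so real scores could be obtained from a model like that instead of the stand-in values used here.

```python
# Sketch of a detector evaluation: score images, form verdicts, and
# measure how often the detector agrees with ground truth.
# The "artificial" label and 0.5 threshold are illustrative assumptions.
from typing import Dict, List


def verdict(scores: Dict[str, float], threshold: float = 0.5) -> str:
    """Turn a detector's label->probability map into a verdict."""
    return ("likely AI-generated"
            if scores.get("artificial", 0.0) >= threshold
            else "likely real")


def accuracy(score_maps: List[Dict[str, float]], is_fake: List[bool]) -> float:
    """Fraction of images where the detector's verdict matches ground truth."""
    correct = sum(
        (verdict(s) == "likely AI-generated") == fake
        for s, fake in zip(score_maps, is_fake)
    )
    return correct / len(is_fake)


if __name__ == "__main__":
    # Stand-in scores (a real run would query a detection model or API).
    # The third image is a convincing fake that slips past the detector,
    # mirroring the failures the article describes.
    scores = [{"artificial": 0.92}, {"artificial": 0.08}, {"artificial": 0.31}]
    truth = [True, False, True]
    print(f"detector accuracy: {accuracy(scores, truth):.2f}")  # 2 of 3 correct
```

The point of the sketch is the failure mode: a single low score on a genuine fake (the 0.31 above) is enough to drop accuracy, which is exactly the behavior the newspaper's test exposed.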
A frustrating example of that failure is the fake image of a kiss between Elon Musk and a robot in the form of a woman, made with the well-known Midjourney application that specializes in AI image synthesis. Although most people recognized it as fake, it confounded tools such as Hive, Illuminarty, Sensity, AI or Not, Umm-maybe's AI Art Detector, and others.
Most of these detection tools were likewise deceived by the image of an elderly nun with a slight smile on her face, which was made by an art and informatics enthusiast using generative AI.
The same verdict of failure applied to the tools' ability to detect the forgery of the famous clip of an explosion near the Pentagon, which circled the earth in seconds and rattled the stock market for minutes, during which a great deal of money was lost.

In short, we are probably still in the opening pages of the book of the relationship between the human mind and generative artificial intelligence. As for protection against the misuse of machine intelligence in deepfakes, the public most likely remains largely exposed and defenseless, and must rely on a heightened degree of systematic skepticism and critical thinking. Will the situation change in the near future, for worse or for better? Let us wait and see.



Source: websites