Generative AI in four basic questions
Generative AI's language model has been able to create texts through processes that remain obscure even to its creators (Noroflex.net)
Let us start with the easy questions about generative AI, perhaps as a path toward their deeper counterparts. Why has this technology, in the interaction of machines with humanity, raised all this unfamiliar noise since the launch of the "ChatGPT" chatbot, and the "GPT" models behind it, by OpenAI, the company headed by Sam Altman? Why have the tones of criticism of this technology reached such high levels of warning, even catastrophic apprehension? Is it not remarkable that this cautionary critical thinking came from major makers of the technology and those who work with it? Is it not worth noting how far the institutions of the international system have taken up this high-pitched critical warning: the European Union drafting a code of conduct for the technology, the United Nations launching an initiative toward coordinated international action on it, and the White House holding successive meetings with senior executives of the advanced information and communications companies that sponsor the making and development of generative intelligence, to arrive at a vision of ways to prevent its risks?
Before seeking out pathways that may, or may not, lead to answers, it is worth starting with an initial set of questions: What is generative AI? What is new in it that sparked all that fuss? How does it work? Is it really thinking, or is it, as the machine learning experts at Facebook contend, neither a fundamental change nor a technical revolution?

First of all, it is clear that pursuing these questions is more than one article, or even a large group of articles, can accomplish. But there is no harm in trying to shed some light on certain features of this technology that have raised confusion and fear, alongside optimism about the continued development of the relationship between human intelligence and its counterpart in machines.
What does "generative intelligence" mean?
Most likely, the word "generative" marks the difference between the advance and the risks carried by the new technology. The term "artificial intelligence" has been in circulation as a reference to machine intelligence since the work of the famous scientist Alan Turing (1912-1954) toward the middle of the twentieth century. Turing predicted that machine intelligence would evolve to "simulate" human intelligence while remaining distinct from it, and he urged humans to be prepared to accept that intelligence. He also devised an examination, long considered the decisive test, of the similarity between the two intelligences. For decades, machine intelligence could not truly pass the Turing Test. The paradox is that generative intelligence has managed to pass that criterion, which means it has moved to a stage of development toward imitation of, and "similarity" with, human intelligence.
What enabled the machines to do this?
Consider the name of the new technology, GPT, which stands for Generative Pre-trained Transformer. The word "generative," in an admittedly rough shorthand, refers to something like the paraphrasing of texts: creating a new text that summarizes and condenses others while preserving their style, arrangement, and connections. Paraphrasing is an old and well-established linguistic discipline, taught at all levels of education.
In the West, the foundations of a linguistic theory of "paraphrasing" were laid in a book of that title written by the scholar Desiderius Erasmus and published in 1548. This linguistic skill was incorporated into school and university education, and its methods, craft, and mechanisms of practice have steadily deepened; the same description applies to modern language teaching in most countries.
Does this mean that "GPT" is a traditional artificial intelligence to which the ability to paraphrase texts, with its attendant summarizing and condensing, has been added? No, if we want a reasonably adequate answer; and yes, in a relative way, if we want to form a general, practical, and broad picture of it.
Before continuing, the focus here will be on generative AI for language, because it formed the basic structure, or the "model" in technical parlance, that is now being developed into other models.
How do machines learn the language sciences?
Returning to language: it is not paraphrasing alone that made the difference for GPT. It arrived at a time when machine learning in the field of language had reached an advanced stage, as the next few lines will explain.
Machines began to practice composing linguistic texts only after human knowledge of language, and the methods of dealing with it, had been "transferred" to them.
What does it mean to say that the language sciences have been "transferred" to intelligent machines? There is an entire discipline in the field of digital computing, called computational linguistics, that covers most of that path.
Very briefly, it can be noted that the language sciences developed in the West especially with the crystallization of what is referred to by the term "semiotics".
In short, semiotics comprises three branches of the language sciences: pragmatics, semantics, and syntax, meaning the structure of signs in language. Pragmatics deals with the way language is used in societies, including grammar, sentence construction, and morphology; how the meanings of words evolve over time; and how sentences acquire meaning through social and temporal use. Semantics studies the meaning of words in sentences, in the sense of studying the relationship of different linguistic forms to mental meanings, or rather mental representations. The semantic sciences thus make it possible to understand how phrases are understood by the speakers of a particular language, and how that has evolved over time.
Syntax deals with language as pure symbols, arranged according to formulas whose specifications resemble the way digits form numbers, arithmetic, and other kinds of mathematics.

Unfortunately, there is no comparable line of development in the Arabic language. Among the few books in Arabic on these modern linguistic sciences is Teun van Dijk's "Text and Context: Explorations in the Semantics and Pragmatics of Discourse", translated by the colleague Abdul Qadir Qanini in 2000.
For various reasons, the development of the language sciences in the West since the 1980s and 1990s focused on formulating the sciences of semiotics in explicit mathematical equations. And because electronic machines and digital computing operate through mathematical procedures, described by the term "algorithms", computer scientists were able to give these machines various capabilities for dealing with language. The preceding lines may explain why the term "algorithms" is repeated so often in discussions of artificial intelligence, especially the generative kind. A toy illustration of the idea follows.
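To make the idea concrete, here is a minimal sketch in Python of what it means to state a fact about language as a computable procedure. The sample sentence, and the choice of word frequency as the "fact" being computed, are illustrative assumptions, not something drawn from this article.

from collections import Counter

def word_frequencies(text: str) -> Counter:
    # Reduce a text to abstract symbols (tokens), then count recurrences:
    # a toy example of turning a statement about language into an algorithm.
    tokens = text.lower().split()
    return Counter(tokens)

sample = "the cat sat on the mat and the dog sat on the rug"
print(word_frequencies(sample).most_common(3))
# prints [('the', 4), ('sat', 2), ('on', 2)]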
An image merging the characters of Superman and Batman, synthesized by generative artificial intelligence (Midjourney)
How are large, foundational language models made?
So, smart machines "use" the modern language sciences, semiotics with its pragmatic, semantic, and syntactic branches, in dealing with linguistic texts: from the single word up to sentences and passages, with their rules of grammar and morphology, as well as the syntactic structures that present language directly as abstract signs akin to pure numbers.
What does it mean to say that intelligent machines "use" modern linguistics? Beyond the fact that these sciences formulate language in mathematical equations, that is, algorithms, the machines benefit from two technical approaches that emerged in the context of machine learning, which forms the beating heart of advanced artificial intelligence. The first is the advancement of mathematical formulas that describe the relationships and links between different things; the second is algorithms for pattern recognition, that is, for capturing what repeats, resembles, converges, or occurs in a specific sequence, and then performing tasks on that basis. A sketch of the first idea follows.
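As a hedged illustration of "formulas that describe relationships between things": modern language systems commonly represent words as vectors of numbers and measure relatedness with formulas such as cosine similarity. The four-dimensional vectors below are invented for the example; real systems learn vectors with hundreds or thousands of dimensions.

import numpy as np

# Hypothetical word vectors, invented for illustration only.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1, 0.3]),
    "queen": np.array([0.9, 0.7, 0.9, 0.3]),
    "apple": np.array([0.1, 0.2, 0.5, 0.9]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # A value near 1.0 means the two words are used in similar contexts.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(vectors["king"], vectors["queen"]))  # high: related words
print(cosine_similarity(vectors["king"], vectors["apple"]))  # lower: unrelated words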
Thanks to the huge memory of computers and the increasing power of electronic chips, scientists trained computers on linguistic texts, using algorithms for recognizing patterns and capturing associations, grounded in the three sciences of semiotics.
Based on these linguistic sciences and techniques, along with tools such as paraphrasing, scientists also trained machines to make templates, structures, or proto-representations, so that they could produce their own texts, or rather paraphrase, shorten, and condense the texts found in billions of books and publications, generating templates to rely on when giving answers to the questions put to them.
This whole description explains what is meant by making Large Language Models, considered the basic structure of the "GPT" model, which becomes a large foundational language model only once it is trained to deal with the everyday language people actually use. The public has likely grown familiar over the years with the smart devices in their hands that complete sentences and offer word choices frequently used in a given context, such as holiday greetings, get-well wishes, or compliments on public and private occasions; a toy version of that autocomplete appears in the sketch below.
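Here is a minimal sketch, assuming an invented training phrase, of the crudest ancestor of that autocomplete: a bigram model that suggests the next word purely from counts of what followed it before. Large language models are vastly more sophisticated, but the underlying idea of learning from repetition is the same.

from collections import Counter, defaultdict

# Invented training text; a real model trains on billions of sentences.
corpus = "happy new year . happy new year to you . happy birthday to you".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1  # count what tends to come next

def suggest(word: str, k: int = 2) -> list[str]:
    # Suggest the k words that most often followed `word` in training.
    return [w for w, _ in following[word].most_common(k)]

print(suggest("happy"))  # ['new', 'birthday'], learned from repetition, not meaning
print(suggest("to"))     # ['you']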
To arrive at the making of a large foundational model, generative AI prototypes are trained on dialogues, writings, questions, responses, and even phone calls after they have been transcribed into text. The public's writing on social media platforms has been used prominently in training intelligent machines and making large foundational language models.
It is worth remembering that the use of the public's writings, blogs, and tweets, without the slightest permission, sparked debates that are still going on.
To complete the description above: after the large foundational language models are made, they are installed on computers with formidable computing power, meaning huge memory and advanced chips that are fast at performing calculations and handling algorithms. Making large foundational models requires both powerful supercomputers and the computational minds that train and teach the machines; each large foundational model costs more than one billion dollars. Building models on top of a foundational one, by contrast, requires a few tens of millions of dollars and the availability of advanced, powerful computers, but not those human minds that train the machines, who are few in the world and the object of a huge struggle.
Leading figures in politics and technology have issued a series of warnings about major risks associated with generative artificial intelligence (Daly Lab)
What goes on in the darkness of the machines?
Another feature, and one of the main concerns about generative AI, is that it is not known how this intelligence actually works, how it evolves, or how it develops itself.
From the previous description, it is clear that generative artificial intelligence does not know the meanings and content of the texts themselves, but deals with them through techniques such as pattern recognition. In other words, it is a technical tool that has the computing power known to computers, plus the capabilities derived from computational linguistics and the algorithms of the language sciences. Beyond that, no one knows what goes on. For example, machines are trained to make templates and representations, called "pretrained representations," but it is not known how these are used once they become part of the system. To what extent does the description given by Tecnológico de Monterrey of "ChatGPT", that it resembles a huge parrot repeating what it does not understand, hold true? Does it generate new templates and representations? Does it transfer the representations and structures of poetic and literary texts, for example, from one language to another? Does it translate for itself from one language to another, formulating templates in German, say, based on structures in English? How do the largest artificial intelligence models deal with the current information pouring into them over the Internet around the clock? What is the effect of all this on the correctness of their answers, especially since they do not grasp the meanings they are actually dealing with? Technical experts agree on the phenomenon of the lack of awareness in artificial intelligence.
Perhaps this is an introduction to understanding why generative artificial intelligence makes mistakes and gives illogical and incorrect answers about many things. Indeed, it seems to assemble whatever resembles the pattern of the question presented to it, simply because those elements recurred at a high rate in the texts the intelligence drew on. For example, generative artificial intelligence was asked to give six titles of Henry Kissinger's books on technology. It gave five wrong titles, which may have been recurring phrases in Kissinger's writings, and one correct title attached to the wrong date, perhaps because that correct title was repeated so often in writings about Kissinger's view of technology.
Scientists call these fabricated answers "hallucinations", and some even liken the system to a parrot repeating texts with the confidence of an assured scientist.
It has even come to the point that a scientific article described the ignorance of generative artificial intelligence's makers about what goes on in its corridors as a baffling puzzle that may never find its way to a solution, indeed as the biggest challenge facing present and future generations of humanity. The authors of that article were none other than Eric Schmidt, CEO of Google for a decade and then executive chairman of Google and later Alphabet until 2017; the famous statesman Henry Kissinger; and Daniel Huttenlocher, dean of the college of computing at the Massachusetts Institute of Technology. The trio are the authors of "The Age of AI: And Our Human Future".

It is difficult not to sound a loud alarm in the Arab countries regarding this stage of artificial intelligence. If Arabs do not engage in making a large foundational language model for their own language, a major gap will open between them and the rhythm of the age, becoming yet another knowledge gap added to many similar ones. Arabs have much in the traditional sciences of language, but less than a little in the modern ones, especially semiotics with its three branches. Meeting this challenge is likely to be an essential part of any positive Arab response to the era of generative AI.


