Generative Artificial Intelligence: Musicians of the New Digital Age
After imitation and synthesis, artificial intelligence jumped to the rank of musician (Shutterstock)
Complementing the achievements of robotics researchers and the chip industry
Robots played music, electronic chips assembled melodic phrases, and then artificial intelligence sang a song of its own making; it was not long before it leapt onto the stage as a musician composing instrumental pieces resembling the human classics.
Do the words above describe, quickly and of course inadequately, the evolutionary trajectory of the relationship between music and artificial intelligence? Most likely they succeed only in casting a brief flash of light on a vast maze of intertwined corridors that may remain unexplored for an unknown time.
Let's take a small step back.
Six years ago, in June 2017, the mainstream media reported two stories that appeared only a few days apart. The first carried the news that artificial intelligence had succeeded in composing music for the first time. This came after a series of achievements in previous years, which saw robots given the ability to play various musical instruments such as the piano, and even to conduct an entire orchestra through a classical symphony.
According to the details of that achievement, the "International Research Institute" in Belgium manufactured electronic chips capable of composing short musical pieces. Those chips were trained on the sound of an old Belgian flute, and after much patience and repetition they managed to create a short piece of music in the sound of that flute.
In the second piece of news, researchers from the Georgia Institute of Technology in the United States trained a four-armed robot to compose pieces that it then played itself on the marimba, a wooden instrument similar to the familiar xylophone but much larger, requiring relatively large mallets to strike its wooden bars and draw out their resonant tones. In a remarkable detail, these researchers fed the robot's chips about 5,000 pieces of music, including symphonies by Beethoven, songs by the Beatles, and vocal performances by Lady Gaga, among others. As a result, the artificial intelligence in the robot created two pieces of thirty seconds each, which it played itself on the marimba.
Who sang in Los Angeles?
In early 2023, singer Yun Wang performed the song "Duke of New York" on a stage in Los Angeles, but what the audience heard was her voice after being reconstructed by artificial intelligence. The performer described her experience as one of collaboration and integration with the performance of intelligent machines. The experiment used an intelligent automated technology called RAVE, which stands for Realtime Audio Variational AutoEncoder, according to the New York Times.
The "Rive" technique provides a model for working on deep learning of smart machines, by means of advanced algorithms, on a special type of output, which was represented, in this case, by vocal singing performance. The “Rave” technology was invented in 2021 by French informatics expert Antoine Cayon, in the context of research he conducted at the French “IRCAM” institute specializing in sound and music research, according to the “New York Times”.
In a comment reported by the same newspaper, the American professor Tina Talon mentioned that informatics specialists have used various artificial intelligence techniques to achieve different and gradually deeper interventions in the human voice since the 1960s. Professor Talon pointed out that this means huge amounts of data and prior experiments were available to the "RAVE" technology, enabling it to produce the vocal performance that accompanied the voice of singer Wang.
The newspaper also pointed out that artificial intelligence experts, including the Frenchman Caillon, have been developing a set of systems specialized in sound and music, such as the "SingSong" and "MusicLM" programs overseen by Google. These and other developments paved the way for the emergence of the "WavTool" platform, which relies on the GPT-4 system (made by OpenAI) to assemble a technical system specialized in composing musical pieces.
In that context, Professor Talon explained that artificial intelligence programs specializing in music, such as "MusicLM", are likewise based on deep-learning training of smart machines, using databases that include thousands of hours of music drawn from sites such as "YouTube", various digital music platforms, and other sources.
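As an illustration of what training on thousands of hours of music involves in practice, the sketch below shows one plausible way to slice audio files into fixed-length clips that a deep-learning model can consume. It is not MusicLM's actual pipeline; the folder name, clip length, sample rate, and the train_step helper are hypothetical.

```python
# Illustrative sketch of preparing an audio dataset for deep-learning training,
# in the spirit of the systems described above. Paths and sizes are assumptions.
from pathlib import Path
import torchaudio

CLIP_SECONDS = 10        # assumed length of each training clip
SAMPLE_RATE = 16_000     # assumed common sample rate for all clips

def clips_from_folder(folder: str):
    """Yield fixed-length mono clips from every .wav file in a folder."""
    clip_len = CLIP_SECONDS * SAMPLE_RATE
    for path in Path(folder).glob("*.wav"):
        waveform, sr = torchaudio.load(str(path))
        waveform = torchaudio.functional.resample(waveform, sr, SAMPLE_RATE)
        mono = waveform.mean(dim=0)              # collapse stereo to mono
        for start in range(0, mono.numel() - clip_len, clip_len):
            yield mono[start:start + clip_len]

# Hypothetical usage: each clip becomes one training example.
# for clip in clips_from_folder("music_library"):
#     train_step(model, clip)   # train_step stands in for a real training loop
```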
Two intelligences, one music
In the same context, according to the "New York Times", the specialist Talon did not fail to point out the onrushing wave of generative artificial intelligence, accompanied by a spree of new songs performed in the voices of famous singers, in addition to the composition of purely instrumental pieces, including classical music, that imitate the works made by human musicians but do not match them.
Along the same lines, The New York Times reported other diverse experiences, including a symphonic work called "Silicon", in which generative artificial intelligence was used to create innovative musical combinations that were added to human-made compositions, drawing on a huge database of music. The British musician Robert Laidlow, who led the "Silicon" experiment, did not hesitate to point out that this work "is as much about the technology of artificial intelligence as it is about the use of that technology itself."
At that point, a wide and still vague field of questions opens up about the mechanisms, modalities, limits, and controls of that overlap between human and artificial intelligence in music. These questions, the reasons for asking them, and how to approach thinking about them will likely require extensive discussion.
Source: websites