Artificial Intelligence is quickly becoming a mainstream tool for content creators. OpenAI's recently released GPT-3 is generating a lot of buzz in the natural language processing community. It is a much improved and much larger language model with compelling text generation capabilities. There is no question that the time when humans communicate with computers in spoken or written English, or other languages, is rapidly approaching.
Composing music with AI models
The situation in the music production business is similar, although market forces have not pushed developers as hard as in the financial or e-commerce sectors. Markov model-based algorithms have been composing music for decades. Still, the new era of sophisticated music production tools, such as DAWs, virtual and hardware synthesizers, and effects solutions, creates a rich platform for AI algorithms.
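To make the idea concrete, here is a minimal sketch of a first-order Markov melody generator in Python. The note names and the transition table are invented for illustration; a real system would learn these transition probabilities from a corpus of existing music.

    import random

    # Illustrative transition table: for each note, the notes that may follow it.
    # (Invented for this sketch; real systems estimate these from a corpus.)
    TRANSITIONS = {
        "C4": ["D4", "E4", "G4"],
        "D4": ["C4", "E4"],
        "E4": ["D4", "F4", "G4"],
        "F4": ["E4", "G4"],
        "G4": ["C4", "E4", "F4"],
    }

    def generate_melody(start="C4", length=16, seed=None):
        """Walk the transition table, picking each next note at random."""
        rng = random.Random(seed)
        melody = [start]
        for _ in range(length - 1):
            melody.append(rng.choice(TRANSITIONS[melody[-1]]))
        return melody

    if __name__ == "__main__":
        print(" ".join(generate_melody(seed=42)))

Even this toy version shows why Markov composers have been around so long: the data structure is a plain lookup table, and generation is just a random walk over it.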
Most of us consume music through streaming applications. With our clicks, ratings, play counts, listening durations, and many other explicit and implicit feedback mechanisms, we create massive datasets of musical preferences. This is a perfect task for machine learning: look for patterns in the audio files, compare them with the user feedback datasets, and build models that tell us what kind of music to produce. Just imagine that the Karma-driven algorithmic sequencer in your music workstation communicates with a machine learning model trained on millions of user ratings. The model does not have to replace a talented musician in the creative process, but it could tell you right away how well a song will sell.
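As a hedged sketch of what such a model might look like, the Python snippet below trains a regressor to predict an average listener rating from a few audio features. Everything here is assumed for illustration: the three features, the synthetic data, and the fake preference signal. A real pipeline would extract features (tempo, loudness, spectral statistics) from actual audio and join them with ratings logged by a streaming service.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in data: 1000 tracks, 3 hypothetical audio features
    # (say, tempo, loudness, brightness). Purely illustrative.
    rng = np.random.default_rng(0)
    n_tracks = 1000
    features = rng.normal(size=(n_tracks, 3))

    # Fake "ground truth": these imaginary listeners mildly prefer
    # louder, brighter tracks, plus some noise.
    ratings = (3.0 + 0.5 * features[:, 1] + 0.3 * features[:, 2]
               + rng.normal(scale=0.3, size=n_tracks))

    X_train, X_test, y_train, y_test = train_test_split(
        features, ratings, random_state=0)
    model = RandomForestRegressor(random_state=0).fit(X_train, y_train)
    print(f"R^2 on held-out tracks: {model.score(X_test, y_test):.2f}")

    # Score a new, hypothetical track before releasing it.
    new_track = np.array([[0.1, 1.2, 0.8]])
    print(f"Predicted rating: {model.predict(new_track)[0]:.2f}")

The point of the sketch is the workflow, not the model choice: features in, a predicted audience response out, available before the record ships.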
Making the song sound great in various circumstances
The creative music-making process is not the only one that will be affected by AI. In the past, CD players, amplifiers, and big speakers were the way to go for high-quality listening. Today, the majority of music is streamed to multiple devices. Many digital streaming services use lossy compression, meaning that the sound quality is radically downgraded to ensure smooth playback. Also, many of us spend pennies on cheap headphones that break as frequently as expensive ones. As a result, the tone, volume, dynamics, and other characteristics of a record might be very far from what was captured in the studio.
There are dedicated software packages that use AI to ensure that heavily compressed streamed records still sound excellent and consistent in most circumstances. Traditionally, this is the job of the mastering process and an experienced sound engineer. However, more and more artists decide to accomplish this costly task on their own using a growing range of intelligent software choices.
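As a rough illustration of one small piece of that pipeline, here is a toy loudness-normalization step in Python. The -14 dBFS RMS target loosely mimics common streaming loudness targets, and RMS is only a crude stand-in for the LUFS measurements real mastering tools use; commercial AI mastering services also adjust EQ, stereo width, and dynamics per genre.

    import numpy as np

    def normalize_rms(samples: np.ndarray, target_dbfs: float = -14.0) -> np.ndarray:
        """Scale a mono float signal (range -1..1) toward a target RMS level.

        A toy stand-in for one step of automated mastering, assuming the
        -14 dBFS target; real tools do far more than apply a single gain.
        """
        rms = np.sqrt(np.mean(samples ** 2))
        target_linear = 10 ** (target_dbfs / 20.0)
        gain = target_linear / max(rms, 1e-12)
        # Clip to avoid digital overs after the gain change.
        return np.clip(samples * gain, -1.0, 1.0)

    if __name__ == "__main__":
        # One second of a very quiet 440 Hz test tone at 44.1 kHz.
        t = np.linspace(0, 1.0, 44_100, endpoint=False)
        quiet_mix = 0.05 * np.sin(2 * np.pi * 440.0 * t)
        mastered = normalize_rms(quiet_mix)
        print(f"RMS before: {np.sqrt(np.mean(quiet_mix ** 2)):.4f}, "
              f"after: {np.sqrt(np.mean(mastered ** 2)):.4f}")

What the intelligent mastering tools add on top of a gain stage like this is the judgment: listening across playback targets and deciding which trade-offs to make, which is exactly the part that used to require an engineer.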
This problem is very similar to the one faced by software engineers who must ensure that their code runs well on all mobile, desktop, console, and cloud devices. It appears that AI will take over music sooner or later. Perhaps we will completely automate the tedious parts of building musical records, to the point that creativity and imagination are the only things required. Decades of learning music theory and mastering the piano or violin will no longer be necessary. Sadly, even the creative process can be modeled by machine learning quite effectively.