Generating Music with an RNN

Computational graph of the model


1. Download MIDI files

A site offering MIDI-format music for download: freemidi.org


2. Create NoteSequences

Build the melody dataset by converting the MIDI collection into NoteSequences. (NoteSequences are protocol buffers, a fast and efficient data format that is much easier to work with than raw MIDI files.)

INPUT_DIRECTORY=/Users/mac/Desktop/MIDI

SEQUENCES_TFRECORD=/Users/mac/Desktop/train/notesequences.tfrecord

convert_dir_to_note_sequences \
  --input_dir=$INPUT_DIRECTORY \
  --output_file=$SEQUENCES_TFRECORD \
  --recursive

Note: the paths must not contain spaces.

If you see output like the following, the step succeeded:


(magenta)zhu-Macs-MacBook-Pro:magenta zhujianing$ INPUT_DIRECTORY=/Users/mac/Desktop/MIDI
(magenta)zhu-Macs-MacBook-Pro:magenta zhujianing$ # TFRecord file that will contain NoteSequence protocol buffers.
(magenta)zhu-Macs-MacBook-Pro:magenta zhujianing$ SEQUENCES_TFRECORD=/Users/mac/Desktop/train/notesequences.tfrecord
(magenta)zhu-Macs-MacBook-Pro:magenta zhujianing$
(magenta)zhu-Macs-MacBook-Pro:magenta zhujianing$ convert_dir_to_note_sequences \
>   --input_dir=$INPUT_DIRECTORY \
>   --output_file=$SEQUENCES_TFRECORD \
>   --recursive
INFO:tensorflow:Converting files in '/Users/mac/Desktop/MIDI/'.
/Users/mac/miniconda2/envs/magenta/lib/python2.7/site-packages/pretty_midi/pretty_midi.py:100: RuntimeWarning: Tempo, Key or Time signature change events found on non-zero tracks. This is not a valid type 0 or type 1 MIDI file. Tempo, Key or Time Signature may be wrong.
  RuntimeWarning)
INFO:tensorflow:Converted 70 files in '/Users/mac/Desktop/MIDI/'.
INFO:tensorflow:Could not parse 0 files.
INFO:tensorflow:Wrote 70 NoteSequence protos to '/Users/mac/Desktop/train/notesequences.tfrecord'
(magenta)zhu-Macs-MacBook-Pro:magenta zhujianing$
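As a quick sanity check, you can count the records in the generated file without any Magenta dependencies. A TFRecord file frames each record as an 8-byte little-endian length, a 4-byte CRC of the length, the payload, and a 4-byte CRC of the payload; the minimal reader below is only a sketch (it skips CRC validation and just walks the length headers):

```python
import struct

def count_tfrecords(path):
    """Count records in a TFRecord file by walking the length headers.

    Each record is framed as:
      uint64 length | uint32 masked CRC of length | data | uint32 masked CRC of data
    CRC validation is skipped here for brevity.
    """
    count = 0
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            (length,) = struct.unpack("<Q", header)
            f.seek(4 + length + 4, 1)  # skip length-CRC, payload, data-CRC
            count += 1
    return count
```

Running it on the notesequences.tfrecord produced above should report the same count as the "Wrote 70 NoteSequence protos" log line.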

3. Create SequenceExamples

SequenceExamples are what the model consumes for training and evaluation. Each SequenceExample contains a sequence of inputs and a sequence of labels that together represent one melody. The command below extracts melodies from the NoteSequences and saves them as SequenceExamples, producing two collections: one for training and one for evaluation. The fraction of SequenceExamples placed in the evaluation set is controlled by --eval_ratio; with an eval ratio of 0.10, 10% of the extracted melodies go to the evaluation set and 90% to the training set.
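The split itself can be sketched as a simple random partition (a hypothetical illustration of what --eval_ratio controls, not Magenta's actual pipeline code):

```python
import random

def partition(items, eval_ratio=0.10, seed=0):
    """Randomly split items into (training, evaluation) lists.

    Each item lands in the evaluation set with probability eval_ratio,
    otherwise in the training set, so the eval set holds roughly
    eval_ratio of all items.
    """
    rng = random.Random(seed)
    train, evaluation = [], []
    for item in items:
        (evaluation if rng.random() < eval_ratio else train).append(item)
    return train, evaluation
```

With 70 melodies and an eval ratio of 0.10, an outcome like the 66/4 split in the log below is typical: the ratio holds in expectation, not exactly.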

 

Command:

melody_rnn_create_dataset \
--config=lookback_rnn \
--input=/Users/mac/Desktop/train/notesequences.tfrecord \
--output_dir=/Users/mac/Desktop/train/melody_rnn/sequence_examples \
--eval_ratio=0.10

Files generated after the command runs:


If you see output like the following, the step succeeded:

(magenta)zhu-Macs-MacBook-Pro:magenta zhujianing$ melody_rnn_create_dataset \
> --config=lookback_rnn \
> --input=/Users/mac/Desktop/train/notesequences.tfrecord \
> --output_dir=/Users/mac/Desktop/train/melody_rnn/sequence_examples \
> --eval_ratio=0.10
INFO:tensorflow:

Completed.

INFO:tensorflow:Processed 70 inputs total. Produced 264 outputs.
INFO:tensorflow:DAGPipeline_MelodyExtractor_eval_melodies_discarded_too_few_pitches: 4
INFO:tensorflow:DAGPipeline_MelodyExtractor_eval_melodies_discarded_too_long: 0
INFO:tensorflow:DAGPipeline_MelodyExtractor_eval_melodies_discarded_too_short: 40
INFO:tensorflow:DAGPipeline_MelodyExtractor_eval_melodies_truncated: 2
INFO:tensorflow:DAGPipeline_MelodyExtractor_eval_melody_lengths_in_bars:
  [7,8): 1
  [8,10): 1
  [10,20): 6
  [20,30): 3
  [30,40): 1
INFO:tensorflow:DAGPipeline_MelodyExtractor_eval_polyphonic_tracks_discarded: 34
INFO:tensorflow:DAGPipeline_MelodyExtractor_training_melodies_discarded_too_few_pitches: 87
INFO:tensorflow:DAGPipeline_MelodyExtractor_training_melodies_discarded_too_long: 0
INFO:tensorflow:DAGPipeline_MelodyExtractor_training_melodies_discarded_too_short: 1001
INFO:tensorflow:DAGPipeline_MelodyExtractor_training_melodies_truncated: 54
INFO:tensorflow:DAGPipeline_MelodyExtractor_training_melody_lengths_in_bars:
  [7,8): 38
  [8,10): 28
  [10,20): 92
  [20,30): 39
  [30,40): 52
  [40,50): 3
INFO:tensorflow:DAGPipeline_MelodyExtractor_training_polyphonic_tracks_discarded: 744
INFO:tensorflow:DAGPipeline_RandomPartition_eval_melodies_count: 4
INFO:tensorflow:DAGPipeline_RandomPartition_training_melodies_count: 66
(magenta)zhu-Macs-MacBook-Pro:magenta zhujianing$

4. Train and test the model

melody_rnn_train \
--config=lookback_rnn \
--run_dir=/Users/mac/Desktop/train/melody_rnn/logdir/run1 \
--sequence_example_file=/Users/mac/Desktop/train/melody_rnn/sequence_examples/training_melodies.tfrecord \
--hparams="{'batch_size':64,'rnn_layer_sizes':[64,64]}" \
--num_training_steps=20000
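Here batch_size=64 sets how many sequences are processed per gradient step, and rnn_layer_sizes=[64,64] requests two stacked LSTM layers of 64 units each. As a rough back-of-the-envelope check of model size (standard LSTM parameter count, with a hypothetical input width; the real lookback_rnn input encoding differs):

```python
def lstm_layer_params(input_size, hidden_size):
    """Weights in one standard LSTM layer: four gates, each with
    input weights, recurrent weights, and a bias vector."""
    return 4 * (input_size + hidden_size + 1) * hidden_size

# Hypothetical input width of 121 features, then two 64-unit layers:
first = lstm_layer_params(121, 64)   # input -> layer 1
second = lstm_layer_params(64, 64)   # layer 1 -> layer 2
total = first + second
```

With layers this small the recurrent stack stays well under 100k parameters, which is part of why this configuration is practical to train on a laptop.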

5. Generate MIDI music from a given primer melody

melody_rnn_generate \
--config=lookback_rnn \
--run_dir=/Users/mac/Desktop/train/melody_rnn/logdir/run1 \
--output_dir=/Users/mac/Desktop/train/melody_rnn/generated \
--num_outputs=10 \
--num_steps=512 \
--hparams="{'batch_size':64,'rnn_layer_sizes':[64,64]}" \
--primer_midi=/Users/mac/Desktop/MIDI/21Guns.mid
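Note that --num_steps counts sequence steps, not seconds. Assuming a resolution of 4 steps per quarter note and a tempo of 120 quarter notes per minute (both assumptions; the actual values depend on your config and primer), the length of the generated piece works out as:

```python
def generated_seconds(num_steps, steps_per_quarter=4, qpm=120):
    """Convert a step count into seconds of music, given the step
    resolution (steps per quarter note) and tempo (quarters per minute)."""
    quarters = num_steps / steps_per_quarter
    return quarters * 60.0 / qpm

# --num_steps=512 -> 128 quarter notes -> about a minute at 120 qpm
length = generated_seconds(512)
```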

Playing the MIDI file

$ brew install timidity

$ timidity /Users/mac/Desktop/MIDI/21Guns\(1\).mid

If you see output like the following, the MIDI file plays successfully.



