SoundTouchJS Usage Tutorial

SoundTouchJS: a JavaScript library for manipulating WebAudio contexts, specifically for time stretching and pitch changes. Project page: https://gitcode.com/gh_mirrors/so/SoundTouchJS

Project Introduction

SoundTouchJS is a JavaScript library, written to the ES2015 standard, designed for WebAudio contexts and aimed at time stretching and pitch shifting of audio. It derives from Olli Parviainen's C++ implementation, SoundTouch, by way of Ryan Berdeen's early JavaScript port; it was later extended by Jakub Fiala, and finally refactored to ES2015 and turned into an easily distributable package by Steve "Cutter" Blades. It includes audio-context utilities that let developers adjust audio properties flexibly.

Quick Start

Installation

First, install SoundTouchJS into your project via npm:

npm install soundtouchjs

Usage Example

Import SoundTouchJS into your project and create a PitchShifter instance to process audio. A basic flow looks like this:

import { PitchShifter } from 'soundtouchjs';

const audioCtx = new (window.AudioContext || window.webkitAudioContext)();
const gainNode = audioCtx.createGain();
let shifter; // keep the reference outside so an old instance can be torn down

// assume you can obtain an AudioBuffer (via decodeAudioData or otherwise)
async function processAudio(audioUrl) {
    const response = await fetch(audioUrl);
    const arrayBuffer = await response.arrayBuffer();
    const audioBuffer = await audioCtx.decodeAudioData(arrayBuffer);

    if (shifter) {
        shifter.off(); // tear down any previous instance first
    }

    shifter = new PitchShifter(audioCtx, audioBuffer, 1024);
    shifter.on('play', (detail) => {
        // playback progress events, e.g. detail.percentagePlayed
    });

    // connect into the audio graph; playback starts once connected
    shifter.connect(gainNode);
    gainNode.connect(audioCtx.destination);
}

Use Cases and Best Practices

SoundTouchJS is widely used to change playback speed without affecting pitch, or conversely to change pitch without affecting playback speed. As best practices, developers should handle audio-decoding errors properly and use the library's event mechanism to control playback flow for a smooth user experience. In real-time audio applications, for example, it is important to continuously monitor the state of the audio nodes and adapt to user interaction.
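For intuition about the pitch side of this: a shift of n semitones corresponds to a frequency ratio of 2^(n/12), which is the kind of value a pitch parameter ultimately encodes. A minimal, library-independent sketch (the function name is illustrative, not part of the SoundTouchJS API):

```javascript
// Convert a semitone offset into the frequency ratio a pitch shifter applies.
// +12 semitones is one octave up (ratio 2); -12 is one octave down (ratio 0.5).
function semitonesToRatio(semitones) {
  return Math.pow(2, semitones / 12);
}

console.log(semitonesToRatio(12));  // 2
console.log(semitonesToRatio(-12)); // 0.5
```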

Typical Ecosystem Projects

Although no specific ecosystem projects or integrations are listed, SoundTouchJS is commonly used in online music editors, game audio processing, and audio effects for real-time communication apps. Developers can look to the community for inspiration, or contribute their own plugins and examples to the project's discussion area to share with the wider developer community.


This tutorial covers the basics, from installation to first use. Deeper exploration and advanced applications may require the project documentation and source code, as well as experimenting with different configurations and strategies to meet specific project needs.

JSSoundRecorder
===============

Record sounds / noises around you and turn them into music. It's a work in progress; at the moment it enables you to record live audio straight from your browser, edit it, and save these sounds as a WAV file. There's also a sequencer part where you can create small loops using these sounds, with a drone synth overlaid on them.

See it working: http://daaain.github.com/JSSoundRecorder

Technology
----------

No servers involved, only the Web Audio API with binary sound Blobs passed around!

### Web Audio API

#### GetUserMedia audio for live recording

Experimental API to record any system audio input (including USB soundcards, musical instruments, etc.).

```javascript
// shim and create AudioContext
window.AudioContext = window.AudioContext ||
                      window.webkitAudioContext ||
                      window.mozAudioContext;
var audio_context = new AudioContext();

// shim and start GetUserMedia audio stream
navigator.getUserMedia = navigator.getUserMedia ||
                         navigator.webkitGetUserMedia ||
                         navigator.mozGetUserMedia;
navigator.getUserMedia({audio: true}, startUserMedia, function(e) {
  console.log('No live audio input: ' + e);
});
```

#### Audio nodes for routing

You can route the audio stream around, with input nodes (microphone, synths, etc.), filters (volume / gain, equaliser, low pass, etc.) and outputs (speakers, binary streams, etc.).

```javascript
function startUserMedia(stream) {
  // create MediaStreamSource and GainNode
  var input = audio_context.createMediaStreamSource(stream);
  var volume = audio_context.createGain();
  volume.gain.value = 0.7;

  // connect them and pipe output
  input.connect(volume);
  volume.connect(audio_context.destination);

  // connect recorder as well - see below
  var recorder = new Recorder(input);
}
```

### WebWorker

Processing (interleaving) the record buffer is done in the background so it doesn't block the main thread and the UI. WAV conversion for export is also quite heavy for longer recordings, so it is best left to run in the background as well.
```javascript
this.context = input.context;
this.node = this.context.createScriptProcessor(4096, 2, 2);
this.node.onaudioprocess = function(e){
  worker.postMessage({
    command: 'record',
    buffer: [
      e.inputBuffer.getChannelData(0),
      e.inputBuffer.getChannelData(1)
    ]
  });
}
```

```javascript
function record(inputBuffer){
  var bufferL = inputBuffer[0];
  var bufferR = inputBuffer[1];
  var interleaved = interleave(bufferL, bufferR);
  recBuffers.push(interleaved);
  recLength += interleaved.length;
}

function interleave(inputL, inputR){
  var length = inputL.length + inputR.length;
  var result = new Float32Array(length);

  var index = 0, inputIndex = 0;
  while (index < length){
    result[index++] = inputL[inputIndex];
    result[index++] = inputR[inputIndex];
    inputIndex++;
  }
  return result;
}
```

```javascript
function encodeWAV(samples){
  var buffer = new ArrayBuffer(44 + samples.length * 2);
  var view = new DataView(buffer);

  /* RIFF identifier */
  writeString(view, 0, 'RIFF');
  /* file length (RIFF chunk size: 36 + data length) */
  view.setUint32(4, 36 + samples.length * 2, true);
  /* RIFF type */
  writeString(view, 8, 'WAVE');
  /* format chunk identifier */
  writeString(view, 12, 'fmt ');
  /* format chunk length */
  view.setUint32(16, 16, true);
  /* sample format (raw) */
  view.setUint16(20, 1, true);
  /* channel count */
  view.setUint16(22, 2, true);
  /* sample rate */
  view.setUint32(24, sampleRate, true);
  /* byte rate (sample rate * block align) */
  view.setUint32(28, sampleRate * 4, true);
  /* block align (channel count * bytes per sample) */
  view.setUint16(32, 4, true);
  /* bits per sample */
  view.setUint16(34, 16, true);
  /* data chunk identifier */
  writeString(view, 36, 'data');
  /* data chunk length */
  view.setUint32(40, samples.length * 2, true);

  floatTo16BitPCM(view, 44, samples);

  return view;
}
```

### Binary Blob

Instead of a file drag-and-drop interface, this binary blob is passed to the editor. Note: BlobBuilder is deprecated (but a lot of examples still use it); you should use the Blob constructor instead!

```javascript
var f = new FileReader();
f.onload = function(e) {
  audio_context.decodeAudioData(e.target.result, function(buffer) {
    $('#audioLayerControl')[0].handleAudio(buffer);
  }, function(e) {
    console.warn(e);
  });
};
f.readAsArrayBuffer(blob);
```

```javascript
function exportWAV(type){
  var buffer = mergeBuffers(recBuffers, recLength);
  var dataview = encodeWAV(buffer);
  var audioBlob = new Blob([dataview], { type: type });

  this.postMessage(audioBlob);
}
```

### Virtual File – URL.createObjectURL

You can create a file download link pointing to the WAV blob, and also set it as the source of an Audio element.

```javascript
var url = URL.createObjectURL(blob);
var audioElement = document.createElement('audio');
var downloadAnchor = document.createElement('a');
audioElement.controls = true;
audioElement.src = url;
downloadAnchor.href = url;
```

TODO
----

* Sequencer top / status row should be radio buttons :)
* Code cleanup / restructuring
* Enable open / drag and drop files for editing
* Visual feedback (levels) for live recording
* Sequencer UI (and separation into a different module)

Credits / license
-----------------

Live recording code adapted from: http://www.phpied.com/files/webaudio/record.html

Editor code adapted from: https://github.com/plucked/html5-audio-editor

Copyright (c) 2012 Daniel Demmel

MIT License
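One note on the WAV export code above: `encodeWAV` and `exportWAV` call three helpers that the excerpt doesn't show — `writeString`, `floatTo16BitPCM`, and `mergeBuffers`. Here is a sketch consistent with the recorder.js-style code this project adapts; treat these as assumptions and consult the original source for the canonical versions:

```javascript
// Write an ASCII string into a DataView at the given byte offset.
function writeString(view, offset, string) {
  for (var i = 0; i < string.length; i++) {
    view.setUint8(offset + i, string.charCodeAt(i));
  }
}

// Clamp each float sample to [-1, 1] and store it as little-endian
// signed 16-bit PCM, two bytes per sample.
function floatTo16BitPCM(output, offset, input) {
  for (var i = 0; i < input.length; i++, offset += 2) {
    var s = Math.max(-1, Math.min(1, input[i]));
    output.setInt16(offset, s < 0 ? s * 0x8000 : s * 0x7FFF, true);
  }
}

// Concatenate the per-callback Float32Array chunks into one flat buffer.
function mergeBuffers(recBuffers, recLength) {
  var result = new Float32Array(recLength);
  var offset = 0;
  for (var i = 0; i < recBuffers.length; i++) {
    result.set(recBuffers[i], offset);
    offset += recBuffers[i].length;
  }
  return result;
}
```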