HTML5 Recording in Practice (Preact)

Getting PCM data

Processing PCM data

float32 to int16

ArrayBuffer to base64

Playing PCM files

Resampling

PCM to MP3

PCM to WAV

Short-time energy calculation

Optimizing performance with a Web Worker

Storing audio (IndexedDB)

Enabling WebRTC in a WebView

Getting PCM data

Sample code:

const mediaStream = await window.navigator.mediaDevices.getUserMedia({
    audio: {
        // sampleRate: 44100, // sample rate; the constraint is ignored, resample manually
        channelCount: 1, // channel count
        // echoCancellation: true,
        // noiseSuppression: true, // noise suppression; works well in practice
    },
})
const audioContext = new window.AudioContext()
const inputSampleRate = audioContext.sampleRate
const mediaNode = audioContext.createMediaStreamSource(mediaStream)

if (!audioContext.createScriptProcessor) {
    audioContext.createScriptProcessor = audioContext.createJavaScriptNode
}
// create a ScriptProcessorNode
const jsNode = audioContext.createScriptProcessor(4096, 1, 1)
jsNode.connect(audioContext.destination)
jsNode.onaudioprocess = (e) => {
    // e.inputBuffer.getChannelData(0) (left channel)
    // for stereo, get the right channel with e.inputBuffer.getChannelData(1)
}
mediaNode.connect(jsNode)

The flow in brief (flowchart notation):

start=>start: Start
getUserMedia=>operation: Get a MediaStream
audioContext=>operation: Create an AudioContext
scriptNode=>operation: Create a ScriptProcessorNode and attach it to the AudioContext
onaudioprocess=>operation: Set onaudioprocess and process the data
end=>end: End

start->getUserMedia->audioContext->scriptNode->onaudioprocess->end

To stop recording, simply disconnect the nodes attached to the AudioContext, then merge the stored per-frame data (see mergeArray in the PCM-to-WAV section below) to produce the PCM data:

jsNode.disconnect()
mediaNode.disconnect()
jsNode.onaudioprocess = null

Processing PCM data

The PCM data obtained via WebRTC is float32. If you record in stereo, you also need to interleave the two channels:

const leftDataList = [];
const rightDataList = [];
function onAudioProcess(event) {
  // one frame of PCM audio data
  let audioBuffer = event.inputBuffer;
  leftDataList.push(audioBuffer.getChannelData(0).slice(0));
  rightDataList.push(audioBuffer.getChannelData(1).slice(0));
}

// interleave the left- and right-channel data
function interleaveLeftAndRight(left, right) {
  let totalLength = left.length + right.length;
  let data = new Float32Array(totalLength);
  for (let i = 0; i < left.length; i++) {
    let k = i * 2;
    data[k] = left[i];
    data[k + 1] = right[i];
  }
  return data;
}
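
A usage sketch; it assumes the per-frame lists above are first flattened with mergeArray (defined in the PCM-to-WAV section below):

const interleaved = interleaveLeftAndRight(
  mergeArray(leftDataList),
  mergeArray(rightDataList),
)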

float32 to int16

const float32 = new Float32Array(1)
const int16 = Int16Array.from(
    float32.map(x => (x > 0 ? x * 0x7fff : x * 0x8000)),
)

ArrayBuffer to base64

Note: browsers also expose a btoa() function that produces base64, but its argument must be a string. If you pass a buffer, it is first run through toString() and only then base64-encoded, and playing the decoded result with ffplay sounds harsh.
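
To see the pitfall concretely, here is a small illustration (mine, not from the original article):

// btoa() coerces non-string arguments to strings, so a typed array is
// serialized as "1234,-5678" and those characters are what get encoded
const int16 = Int16Array.from([1234, -5678])
console.log(btoa(int16)) // "MTIzNCwtNTY3OA==" -- not the raw PCM bytes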

The base64-arraybuffer package does the job:

import { encode } from 'base64-arraybuffer'

const float32 = new Float32Array(1)
const int16 = Int16Array.from(
    float32.map(x => (x > 0 ? x * 0x7fff : x * 0x8000)),
)
console.log(encode(int16.buffer))

To verify the base64 is correct, decode it back into an int16 PCM file under Node, then play it with ffplay to confirm the audio sounds right.
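
A minimal sketch of that check under Node (the file name and base64 string are placeholders):

// decode the browser-produced base64 back into raw bytes and write a PCM file
const fs = require('fs')

const base64 = '...' // paste the base64 string produced in the browser
fs.writeFileSync('test.pcm', Buffer.from(base64, 'base64'))
// then: ffplay -f s16le -ar 16k -ac 1 test.pcm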

Playing PCM files

# mono, 16000 Hz sample rate, int16
ffplay -f s16le -ar 16k -ac 1 test.pcm

# stereo, 48000 Hz sample rate, float32
ffplay -f f32le -ar 48000 -ac 2 test.pcm

Resampling / adjusting the sample rate

Although the getUserMedia constraints accept a sampleRate, it has no effect even in the latest Chrome (at the time of writing), so you need to resample manually.

const mediaStream = await window.navigator.mediaDevices.getUserMedia({
    audio: {
        // sampleRate: 44100, // sample rate; the constraint has no effect
        channelCount: 1, // channel count
        // echoCancellation: true, // echo cancellation
        // noiseSuppression: true, // noise suppression; works well in practice
    },
})

The wave-resampler package does the job:

import { resample } from 'wave-resampler'

const inputSampleRate = 44100
const outputSampleRate = 16000
const resampledBuffers = resample(
    // the per-frame buffers from onaudioprocess, merged into one array
    mergeArray(audioBuffers),
    inputSampleRate,
    outputSampleRate,
)

PCM to MP3

import { Mp3Encoder } from 'lamejs'

let mp3Buf
const mp3Data = []
const sampleBlockSize = 576 * 10 // working block size, a multiple of 576
const kbps = 128 // bitrate (an assumed value; pick what suits your use case)
const mp3Encoder = new Mp3Encoder(1, outputSampleRate, kbps)
const samples = float32ToInt16(
  audioBuffers,
  inputSampleRate,
  outputSampleRate,
)

let remaining = samples.length
for (let i = 0; remaining > 0; i += sampleBlockSize) {
  const left = samples.subarray(i, i + sampleBlockSize)
  mp3Buf = mp3Encoder.encodeBuffer(left)
  mp3Data.push(new Int8Array(mp3Buf))
  remaining -= sampleBlockSize
}

mp3Data.push(new Int8Array(mp3Encoder.flush()))
console.log(mp3Data)

// helper
function float32ToInt16(audioBuffers, inputSampleRate, outputSampleRate) {
  const float32 = resample(
    // the per-frame buffers from onaudioprocess, merged into one array
    mergeArray(audioBuffers),
    inputSampleRate,
    outputSampleRate,
  )
  const int16 = Int16Array.from(
    float32.map(x => (x > 0 ? x * 0x7fff : x * 0x8000)),
  )
  return int16
}
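
To play back or upload the result, the encoded chunks can be wrapped in a Blob; this is a sketch of mine, not part of the original article:

// mp3Data is the array of Int8Array chunks produced above
const mp3Blob = new Blob(mp3Data, { type: 'audio/mpeg' })
new Audio(URL.createObjectURL(mp3Blob)).play()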

lamejs does the job, but it is fairly large (160+ KB); if you have no storage requirement, the WAV format works too:

> ls -alh
-rwxrwxrwx 1 root root  95K Apr 22 12:45 12s.mp3*
-rwxrwxrwx 1 root root 1.1M Apr 22 12:44 12s.wav*
-rwxrwxrwx 1 root root 235K Apr 22 12:41 30s.mp3*
-rwxrwxrwx 1 root root 2.6M Apr 22 12:40 30s.wav*
-rwxrwxrwx 1 root root  63K Apr 22 12:49 8s.mp3*
-rwxrwxrwx 1 root root 689K Apr 22 12:48 8s.wav*

PCM to WAV

function mergeArray(list) {
  const length = list.length * list[0].length
  const data = new Float32Array(length)
  let offset = 0
  for (let i = 0; i < list.length; i++) {
    data.set(list[i], offset)
    offset += list[i].length
  }
  return data
}

function writeUTFBytes(view, offset, string) {
  var lng = string.length
  for (let i = 0; i < lng; i++) {
    view.setUint8(offset + i, string.charCodeAt(i))
  }
}

function createWavBuffer(audioData, sampleRate = 44100, channels = 1) {
  const WAV_HEAD_SIZE = 44
  const buffer = new ArrayBuffer(audioData.length * 2 + WAV_HEAD_SIZE)
  // a DataView is needed to manipulate the buffer
  const view = new DataView(buffer)
  // write the WAV header
  // RIFF chunk descriptor/identifier
  writeUTFBytes(view, 0, 'RIFF')
  // RIFF chunk length (file size minus the 8 bytes already written)
  view.setUint32(4, 36 + audioData.length * 2, true)
  // RIFF type
  writeUTFBytes(view, 8, 'WAVE')
  // format chunk identifier (fmt sub-chunk; note the trailing space)
  writeUTFBytes(view, 12, 'fmt ')
  // format chunk length
  view.setUint32(16, 16, true)
  // sample format (1 = PCM)
  view.setUint16(20, 1, true)
  // channel count
  view.setUint16(22, channels, true)
  // sample rate
  view.setUint32(24, sampleRate, true)
  // byte rate (sample rate * block align)
  view.setUint32(28, sampleRate * channels * 2, true)
  // block align (channel count * bytes per sample)
  view.setUint16(32, channels * 2, true)
  // bits per sample
  view.setUint16(34, 16, true)
  // data sub-chunk
  // data chunk identifier
  writeUTFBytes(view, 36, 'data')
  // data chunk length
  view.setUint32(40, audioData.length * 2, true)

  // write the PCM samples
  let index = 44
  const volume = 1
  const { length } = audioData
  for (let i = 0; i < length; i++) {
    view.setInt16(index, audioData[i] * (0x7fff * volume), true)
    index += 2
  }
  return buffer
}

// the per-frame buffers from onaudioprocess, merged into one array
createWavBuffer(mergeArray(audioBuffers))

A WAV file is essentially the raw PCM data preceded by a header describing the audio format.
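
A playback sketch along the same lines (assumed names: audioBuffers and inputSampleRate come from the recording code earlier):

const wavBlob = new Blob(
  [createWavBuffer(mergeArray(audioBuffers), inputSampleRate)],
  { type: 'audio/wav' },
)
new Audio(URL.createObjectURL(wavBlob)).play()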

A simple short-time energy calculation

function shortTimeEnergy(audioData) {
  let sum = 0
  const energy = []
  const { length } = audioData
  for (let i = 0; i < length; i++) {
    sum += audioData[i] ** 2

    // sum the squared samples over windows of 256 samples
    if ((i + 1) % 256 === 0) {
      energy.push(sum)
      sum = 0
    } else if (i === length - 1) {
      energy.push(sum)
    }
  }
  return energy
}

Because the results vary widely with each device's recording gain, and the raw values themselves are large, a simple ratio is used to distinguish voice from noise:

const noiseVoiceWatershedWave = 2.3
const energy = shortTimeEnergy(e.inputBuffer.getChannelData(0).slice(0))
const avg = energy.reduce((a, b) => a + b) / energy.length

const nextState = Math.max(...energy) / avg > noiseVoiceWatershedWave ? 'voice' : 'noise'

Optimizing performance with a Web Worker

Audio involves a lot of data, so a Web Worker can take the heavy processing off the UI thread.

In a webpack project, setting up a Web Worker is straightforward: just install worker-loader.

preact.config.js

export default (config, env, helpers) => {
  config.module.rules.push({
    test: /\.worker\.js$/,
    use: { loader: 'worker-loader', options: { inline: true } },
  })
}

recorder.worker.js

self.addEventListener('message', event => {
  console.log(event.data)
  // convert to mp3 / base64 / wav, etc.
  const output = ''
  self.postMessage(output)
})
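
A fuller handler might dispatch on the type field sent from the page; in this sketch, encodeMp3 is a hypothetical wrapper around the lamejs code shown earlier, not part of the article:

self.addEventListener('message', event => {
  const { type, audioBuffers, inputSampleRate, outputSampleRate } = event.data
  let output
  if (type === 'mp3') {
    // encodeMp3: hypothetical helper wrapping the lamejs encoding above
    output = encodeMp3(audioBuffers, inputSampleRate, outputSampleRate)
  }
  self.postMessage(output)
})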

Using the worker

async function toMP3(audioBuffers, inputSampleRate, outputSampleRate = 16000) {
  const { default: RecorderWorker } = await import('./recorder.worker')
  const worker = new RecorderWorker()
  // kept simple here; in a real project, create the worker when the recorder
  // is instantiated, and use several instances if you need concurrency

  return new Promise(resolve => {
    worker.postMessage({
      audioBuffers: audioBuffers,
      inputSampleRate: inputSampleRate,
      outputSampleRate: outputSampleRate,
      type: 'mp3',
    })
    worker.onmessage = event => resolve(event.data)
  })
}

Storing the audio

For persistent storage the browser offers localStorage and IndexedDB. localStorage is the more familiar of the two but can only hold strings, whereas IndexedDB can store Blobs directly, so IndexedDB is the better choice; with localStorage you would first have to convert to base64, making the data even larger.

To avoid taking up too much of the user's storage, MP3 is the format used here (see the size comparison above).

A simple IndexedDB wrapper follows; if you are comfortable with backend work, an ORM-style library can make the reads and writes more convenient.

const indexedDB =
  window.indexedDB ||
  window.webkitIndexedDB ||
  window.mozIndexedDB ||
  window.OIndexedDB ||
  window.msIndexedDB

const IDBTransaction =
  window.IDBTransaction ||
  window.webkitIDBTransaction ||
  window.OIDBTransaction ||
  window.msIDBTransaction

const readWriteMode =
  typeof IDBTransaction.READ_WRITE === 'undefined'
    ? 'readwrite'
    : IDBTransaction.READ_WRITE

const dbVersion = 1
const storeDefault = 'mp3'

let dbLink

function initDB(store) {
  return new Promise((resolve, reject) => {
    if (dbLink) return resolve(dbLink)

    // create/open the database
    const request = indexedDB.open('audio', dbVersion)

    request.onsuccess = event => {
      const db = request.result

      db.onerror = event => {
        reject(event)
      }

      if (db.version === dbVersion) resolve(db)
    }

    request.onerror = event => {
      reject(event)
    }

    // for future use; currently only in the latest Firefox versions
    request.onupgradeneeded = event => {
      dbLink = event.target.result
      const { transaction } = event.target

      if (!dbLink.objectStoreNames.contains(store)) {
        dbLink.createObjectStore(store)
      }

      transaction.oncomplete = event => {
        // the store is now available to be populated
        resolve(dbLink)
      }
    }
  })
}

export const writeIDB = async (name, blob, store = storeDefault) => {
  const db = await initDB(store)

  const transaction = db.transaction([store], readWriteMode)
  const objStore = transaction.objectStore(store)

  return new Promise((resolve, reject) => {
    const request = objStore.put(blob, name)
    request.onsuccess = event => resolve(event)
    request.onerror = event => reject(event)
    transaction.commit && transaction.commit()
  })
}

export const readIDB = async (name, store = storeDefault) => {
  const db = await initDB(store)

  const transaction = db.transaction([store], readWriteMode)
  const objStore = transaction.objectStore(store)

  return new Promise((resolve, reject) => {
    const request = objStore.get(name)
    request.onsuccess = event => resolve(event.target.result)
    request.onerror = event => reject(event)
    transaction.commit && transaction.commit()
  })
}

export const clearIDB = async (store = storeDefault) => {
  const db = await initDB(store)

  const transaction = db.transaction([store], readWriteMode)
  const objStore = transaction.objectStore(store)
  return new Promise((resolve, reject) => {
    const request = objStore.clear()
    request.onsuccess = event => resolve(event)
    request.onerror = event => reject(event)
    transaction.commit && transaction.commit()
  })
}
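
A quick usage sketch of the wrapper (the key 'recording-1' and mp3Blob are illustrative):

// store an MP3 Blob, read it back, and play it
await writeIDB('recording-1', mp3Blob)
const blob = await readIDB('recording-1')
new Audio(URL.createObjectURL(blob)).play()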

Enabling WebRTC in a WebView

If WebRTC is not working inside an Android WebView ("webview webrtc not working"), grant the permission request in a WebChromeClient; note that the app itself must also declare and be granted the RECORD_AUDIO permission for the grant to take effect:

webView.setWebChromeClient(new WebChromeClient() {
    @TargetApi(Build.VERSION_CODES.LOLLIPOP)
    @Override
    public void onPermissionRequest(final PermissionRequest request) {
        request.grant(request.getResources());
    }
});

That concludes this summary of HTML5 recording in practice (Preact).
