Table of Contents
Preface
Main Text
audio_jitter_buffer_max_packets
jitter_buffer_min_delay_ms
Conclusion
Preface
As is well known, WebRTC's highly refined JitterBuffer control mechanism lets it adapt to all kinds of network jitter and anomalies, keeping audio and video playing smoothly. Today we will dig into how this works, starting from WebRTC's audio-processing logic.

Main Text
Real-world networks are complicated: packet loss, jitter, and other anomalies are common, so video frequently suffers from stuttering, corrupted frames, and similar problems. Buffering the received data on the receiving side is therefore essential. On a dedicated internal network with a very stable connection, where packet loss, jitter, and delay are largely absent, the receiver needs no extra buffering logic; it can play audio and video data the moment they arrive, and the result will still be fine. For real networks, however, WebRTC ships a dedicated buffering module: the JitterBuffer we are examining today. Let's look at how it is used inside WebRTC.

WebRTC version: m76
audio_jitter_buffer_max_packets
WebRTC has a dedicated engine module for audio, WebRtcVoiceEngine, and it configures the maximum buffer for audio packets already at initialization time. The JitterBuffer is bounded both in time and in packet count; the configuration parameter audio_jitter_buffer_max_packets, for example, is measured in packets and defaults to 200 (at a typical 20 ms of audio per packet, that is roughly 4 seconds). The code:

void WebRtcVoiceEngine::Init() {
RTC_DCHECK(worker_thread_checker_.IsCurrent());
RTC_LOG(LS_INFO) << "WebRtcVoiceEngine::Init";
// TaskQueue expects to be created/destroyed on the same thread.
low_priority_worker_queue_.reset(
new rtc::TaskQueue(task_queue_factory_->CreateTaskQueue(
"rtc-low-prio", webrtc::TaskQueueFactory::Priority::LOW)));
// Load our audio codec lists.
RTC_LOG(LS_INFO) << "Supported send codecs in order of preference:";
send_codecs_ = CollectCodecs(encoder_factory_->GetSupportedEncoders());
for (const AudioCodec& codec : send_codecs_) {
RTC_LOG(LS_INFO) << ToString(codec);
}
RTC_LOG(LS_INFO) << "Supported recv codecs in order of preference:";
recv_codecs_ = CollectCodecs(decoder_factory_->GetSupportedDecoders());
for (const AudioCodec& codec : recv_codecs_) {
RTC_LOG(LS_INFO) << ToString(codec);
}
// Connect the ADM to our audio path.
adm()->RegisterAudioCallback(audio_state()->audio_transport());
// Set default engine options.
{
AudioOptions options;
options.echo_cancellation = true;
options.auto_gain_control = true;
options.noise_suppression = true;
options.highpass_filter = true;
options.stereo_swapping = false;
options.audio_jitter_buffer_max_packets = 200;
options.audio_jitter_buffer_fast_accelerate = false;
options.audio_jitter_buffer_min_delay_ms = 0;
options.audio_jitter_buffer_enable_rtx_handling = false;
options.typing_detection = true;
options.experimental_agc = false;
options.extended_filter_aec = false;
options.delay_agnostic_aec = false;
options.experimental_ns = false;
options.residual_echo_detector = true;
bool error = ApplyOptions(options);
RTC_DCHECK(error);
}
initialized_ = true;
}

All of the options set above take effect only after the ApplyOptions method is called.
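These engine-level defaults are not hard-wired for applications. As a side note, here is a minimal sketch of overriding them per connection through webrtc::PeerConnectionInterface::RTCConfiguration, which exposes matching fields in the m76 API. The helper name CreateTunedPeerConnection and the concrete values are illustrative assumptions, not something from the original article:

#include "api/peer_connection_interface.h"

// Hypothetical helper: build a PeerConnection with a tuned audio jitter buffer.
// The values below are illustrative; the m76 defaults are 200 / false / 0.
rtc::scoped_refptr<webrtc::PeerConnectionInterface> CreateTunedPeerConnection(
    rtc::scoped_refptr<webrtc::PeerConnectionFactoryInterface> factory,
    webrtc::PeerConnectionObserver* observer) {
  webrtc::PeerConnectionInterface::RTCConfiguration config;
  config.audio_jitter_buffer_max_packets = 100;   // cap the buffer at 100 packets
  config.audio_jitter_buffer_fast_accelerate = true;  // drain a deep buffer faster
  config.audio_jitter_buffer_min_delay_ms = 40;   // lower bound on target delay
  return factory->CreatePeerConnection(config, nullptr, nullptr, observer);
}

A larger packet cap lets the buffer absorb deeper bursts of jitter at the cost of worst-case latency, while a nonzero minimum delay trades a fixed amount of latency for smoother playout.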
Once audio packets for a stream are received, a receive stream is added to process them:
void WebRtcVoiceMediaChannel::OnPacketReceived(rtc::CopyOnWriteBuffer packet,
int64_t packet_time_us) {
RTC_DCHECK(worker_thread_checker_.IsCurrent());
webrtc::PacketReceiver::DeliveryStatus delivery_result =
call_->Receiver()->DeliverPacket(webrtc::MediaType::AUDIO, packet,
packet_time_us);
if (delivery_result != webrtc::PacketReceiver::DELIVERY_UNKNOWN_SSRC) {
return;
}
// Create an unsignaled receive stream for this previously not received ssrc.
// If there already is N unsignaled receive streams, delete the oldest.
// See: https://bugs.chromium.org/p/webrtc/issues/detail?id=5208
uint32_t ssrc = 0;
if (!GetRtpSsrc(packet.cdata(), packet.size(), &ssrc)) {
return;
}
RTC_DCHECK(!absl::c_linear_search(unsignaled_recv_ssrcs_, ssrc));
// Add new stream.
StreamParams sp = unsignaled_stream_params_;
sp.ssrcs.push_back(ssrc);
RTC_LOG(LS_INFO) << "Creating unsignaled receive stream for SSRC=" << ssrc;
if (!AddRecvStream(sp)) {
RTC_LOG(LS_WARNING) << "Could not create unsignaled receive stream.";
return;
}
// ... (remainder of OnPacketReceived omitted)
}

Now let's look at the AddRecvStream method. Inside it, a WebRtcAudioReceiveStream object is created to receive the audio data. The code is as follows:
// Create a new channel for receiving audio data.
recv_streams_.insert(std::make_pair(
ssrc, new WebRtcAudioReceiveStream(
ssrc, receiver_reports_ssrc_, recv_transport_cc_enabled_,
recv_nack_enabled_, sp.stream_ids(), recv_rtp_extensions_,
call_, this, media_transport_config(),
engine()->decoder_factory_, decoder_map_, codec_pair_id_,
engine()->audio_jitter_buffer_max_packets_,
engine()->audio_jitter_buffer_fast_accelerate_,
engine()->audio_jitter_buffer_min_delay_ms_,
engine()->audio_jitter_buffer_enable_rtx_handling_,
/* ... */)));
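Although the rest of the walkthrough continues beyond this excerpt, it helps to know where these constructor arguments end up. Below is a rough sketch of the plumbing, assuming the m76 layout: WebRtcAudioReceiveStream copies the values into webrtc::AudioReceiveStream::Config, and from there they reach NetEq, WebRTC's audio jitter buffer. The function name is hypothetical, the field names are from that tree, and the literal values are just the defaults discussed above:

#include "call/audio_receive_stream.h"
#include "modules/audio_coding/neteq/include/neteq.h"

// Hypothetical sketch of how the engine options flow toward NetEq.
void SketchJitterBufferPlumbing() {
  // Stage 1: the engine options land in the receive-stream config.
  webrtc::AudioReceiveStream::Config stream_config;
  stream_config.jitter_buffer_max_packets = 200;       // audio_jitter_buffer_max_packets
  stream_config.jitter_buffer_fast_accelerate = false; // audio_jitter_buffer_fast_accelerate
  stream_config.jitter_buffer_min_delay_ms = 0;        // audio_jitter_buffer_min_delay_ms

  // Stage 2: deeper in the stack the same values bound NetEq's packet buffer.
  webrtc::NetEq::Config neteq_config;
  neteq_config.max_packets_in_buffer = stream_config.jitter_buffer_max_packets;
  neteq_config.enable_fast_accelerate = stream_config.jitter_buffer_fast_accelerate;
}

In other words, audio_jitter_buffer_max_packets ultimately bounds NetEq's internal packet buffer, which is why it determines how much network jitter the receiver can ride out.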