Table of Contents

Preface

Main Text

    OpenSLESPlayer

    AAudioPlayer

    AudioTrackJni


Preface

On Android, WebRTC plays audio through one of three implementations: OpenSLESPlayer, AAudioPlayer, and AudioTrackJni. This article gives a brief introduction to each.

《WebRTC工作原理精讲》 series - Overview

Main Text

OpenSLESPlayer

OpenSLESPlayer implements 16-bit mono PCM audio output using the C-based OpenSL ES API. The playout path is decoupled from the Java layer; no JNI calls are involved.

Class declaration:

class OpenSLESPlayer {
 public:
  // Beginning with API level 17 (Android 4.2), a buffer count of 2 or more is
  // required for lower latency. Beginning with API level 18 (Android 4.3), a
  // buffer count of 1 is sufficient for lower latency. In addition, the buffer
  // size and sample rate must be compatible with the device's native output
  // configuration provided via the audio manager at construction.
  // TODO(henrika): perhaps set this value dynamically based on OS version.
  static const int kNumOfOpenSLESBuffers = 2;

  explicit OpenSLESPlayer(AudioManager* audio_manager);
  ~OpenSLESPlayer();

  int Init();
  int Terminate();

  int InitPlayout();
  bool PlayoutIsInitialized() const { return initialized_; }

  int StartPlayout();
  int StopPlayout();
  bool Playing() const { return playing_; }

  ...
};
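To make the call flow behind InitPlayout() and StartPlayout() more concrete, here is a minimal, hedged sketch of setting up an OpenSL ES playout path for 16-bit mono PCM using only the standard OpenSL ES C API. It is not taken from the WebRTC sources; names such as BufferQueueCallback, pcm_frame and StartMonoPlayout are placeholders, and error handling is omitted.

// Sketch only: error checks omitted; BufferQueueCallback, pcm_frame and
// StartMonoPlayout are placeholder names, not symbols from the WebRTC sources.
#include <SLES/OpenSLES.h>
#include <SLES/OpenSLES_Android.h>
#include <cstdint>

static int16_t pcm_frame[480];  // 10 ms of 16-bit mono PCM at 48 kHz.

// Called on an internal OpenSL ES thread whenever a buffer has been consumed;
// the real player would refill it from the shared WebRTC audio buffer here.
static void BufferQueueCallback(SLAndroidSimpleBufferQueueItf queue, void*) {
  (*queue)->Enqueue(queue, pcm_frame, sizeof(pcm_frame));
}

void StartMonoPlayout() {
  SLObjectItf engine_object = nullptr;
  slCreateEngine(&engine_object, 0, nullptr, 0, nullptr, nullptr);
  (*engine_object)->Realize(engine_object, SL_BOOLEAN_FALSE);

  SLEngineItf engine = nullptr;
  (*engine_object)->GetInterface(engine_object, SL_IID_ENGINE, &engine);

  SLObjectItf output_mix = nullptr;
  (*engine)->CreateOutputMix(engine, &output_mix, 0, nullptr, nullptr);
  (*output_mix)->Realize(output_mix, SL_BOOLEAN_FALSE);

  // Two buffers, mirroring kNumOfOpenSLESBuffers in the class above.
  SLDataLocator_AndroidSimpleBufferQueue queue_locator = {
      SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2};
  SLDataFormat_PCM pcm_format = {SL_DATAFORMAT_PCM,
                                 1,  // mono
                                 SL_SAMPLINGRATE_48,
                                 SL_PCMSAMPLEFORMAT_FIXED_16,
                                 SL_PCMSAMPLEFORMAT_FIXED_16,
                                 SL_SPEAKER_FRONT_CENTER,
                                 SL_BYTEORDER_LITTLEENDIAN};
  SLDataSource source = {&queue_locator, &pcm_format};

  SLDataLocator_OutputMix mix_locator = {SL_DATALOCATOR_OUTPUTMIX, output_mix};
  SLDataSink sink = {&mix_locator, nullptr};

  const SLInterfaceID ids[] = {SL_IID_ANDROIDSIMPLEBUFFERQUEUE};
  const SLboolean req[] = {SL_BOOLEAN_TRUE};
  SLObjectItf player_object = nullptr;
  (*engine)->CreateAudioPlayer(engine, &player_object, &source, &sink,
                               1, ids, req);
  (*player_object)->Realize(player_object, SL_BOOLEAN_FALSE);

  SLPlayItf play = nullptr;
  (*player_object)->GetInterface(player_object, SL_IID_PLAY, &play);
  SLAndroidSimpleBufferQueueItf queue = nullptr;
  (*player_object)->GetInterface(player_object,
                                 SL_IID_ANDROIDSIMPLEBUFFERQUEUE, &queue);

  (*queue)->RegisterCallback(queue, BufferQueueCallback, nullptr);
  (*queue)->Enqueue(queue, pcm_frame, sizeof(pcm_frame));  // Prime the queue.
  (*play)->SetPlayState(play, SL_PLAYSTATE_PLAYING);
}

The two-buffer simple buffer queue matches the comment on kNumOfOpenSLESBuffers: while one buffer is being rendered, the callback refills the other, which is what keeps latency low without involving the Java layer.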

AAudioPlayer

AAudioPlayer implements low-latency 16-bit mono PCM audio output using the C-based AAudio API.

Class declaration:

class AAudioPlayer final : public AAudioObserverInterface,
                           public rtc::MessageHandler {
 public:
  explicit AAudioPlayer(AudioManager* audio_manager);
  ~AAudioPlayer();

  int Init();
  int Terminate();

  int InitPlayout();
  bool PlayoutIsInitialized() const;

  int StartPlayout();
  int StopPlayout();
  bool Playing() const;

  void AttachAudioBuffer(AudioDeviceBuffer* audioBuffer);

  ...
};
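As a rough illustration of the underlying API rather than of the WebRTC implementation itself, the sketch below opens a low-latency AAudio output stream for 16-bit mono PCM and starts it. DataCallback and OpenLowLatencyMonoStream are placeholder names, and error handling is omitted.

// Sketch only: error handling omitted; DataCallback and
// OpenLowLatencyMonoStream are placeholder names, not WebRTC symbols.
#include <aaudio/AAudio.h>
#include <cstdint>
#include <cstring>

// AAudio pulls audio from this callback on its own high-priority thread;
// a real player would fill the buffer with decoded PCM instead of silence.
static aaudio_data_callback_result_t DataCallback(AAudioStream* /*stream*/,
                                                  void* /*user_data*/,
                                                  void* audio_data,
                                                  int32_t num_frames) {
  std::memset(audio_data, 0, num_frames * sizeof(int16_t));  // 1 channel.
  return AAUDIO_CALLBACK_RESULT_CONTINUE;
}

AAudioStream* OpenLowLatencyMonoStream() {
  AAudioStreamBuilder* builder = nullptr;
  AAudio_createStreamBuilder(&builder);
  AAudioStreamBuilder_setDirection(builder, AAUDIO_DIRECTION_OUTPUT);
  AAudioStreamBuilder_setFormat(builder, AAUDIO_FORMAT_PCM_I16);  // 16-bit PCM.
  AAudioStreamBuilder_setChannelCount(builder, 1);                // Mono.
  AAudioStreamBuilder_setSampleRate(builder, 48000);
  AAudioStreamBuilder_setPerformanceMode(builder,
                                         AAUDIO_PERFORMANCE_MODE_LOW_LATENCY);
  AAudioStreamBuilder_setDataCallback(builder, DataCallback, nullptr);

  AAudioStream* stream = nullptr;
  AAudioStreamBuilder_openStream(builder, &stream);
  AAudioStreamBuilder_delete(builder);

  AAudioStream_requestStart(stream);  // Playout starts; the callback fires.
  return stream;
}

Because AAudio drives playout by invoking the data callback on its own high-priority thread, no client-side render thread is needed, which is where the low-latency behavior comes from.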

AudioTrackJni

AudioTrackJni implements 16-bit mono PCM audio output using the Java AudioTrack API. Most of the work is done by its Java counterpart in WebRtcAudioTrack.java. The C++ class is created and managed on a C++ thread, while decoded audio buffers are requested on a high-priority thread managed by the Java class.

Class declaration:

class AudioTrackJni {
 public:
  // Wraps the Java specific parts of the AudioTrackJni into one helper class.
  class JavaAudioTrack {
   public:
    JavaAudioTrack(NativeRegistration* native_registration,
                   std::unique_ptr<GlobalRef> audio_track);
    ~JavaAudioTrack();

    bool InitPlayout(int sample_rate, int channels);
    bool StartPlayout();
    bool StopPlayout();
    bool SetStreamVolume(int volume);
    int GetStreamMaxVolume();
    int GetStreamVolume();

   private:
    std::unique_ptr<GlobalRef> audio_track_;
    jmethodID init_playout_;
    jmethodID start_playout_;
    jmethodID stop_playout_;
    jmethodID set_stream_volume_;
    jmethodID get_stream_max_volume_;
    jmethodID get_stream_volume_;
  };

  explicit AudioTrackJni(AudioManager* audio_manager);
  ~AudioTrackJni();

  int32_t Init();
  int32_t Terminate();

  int32_t InitPlayout();
  bool PlayoutIsInitialized() const { return initialized_; }

  int32_t StartPlayout();
  int32_t StopPlayout();
  bool Playing() const { return playing_; }

  ...
};
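To show the JNI pattern that the JavaAudioTrack helper and its cached jmethodID members rely on, here is a simplified sketch of invoking a playout method on the Java WebRtcAudioTrack object from C++. The Java method name and signature are assumptions for illustration, not quoted from the sources, and the real code caches the jmethodID once at construction rather than looking it up on every call.

// Simplified sketch of the JNI call pattern; jni_env and audio_track_obj
// stand in for the attached JNIEnv* and the GlobalRef to the Java
// WebRtcAudioTrack instance held by JavaAudioTrack. The method name
// "startPlayout" and its "()Z" signature are assumptions.
#include <jni.h>

bool CallStartPlayout(JNIEnv* jni_env, jobject audio_track_obj) {
  // Normally resolved once and cached, as the start_playout_ member in the
  // class declaration above suggests.
  jclass cls = jni_env->GetObjectClass(audio_track_obj);
  jmethodID start_playout = jni_env->GetMethodID(cls, "startPlayout", "()Z");

  // Invoke the Java method; decoded PCM is then requested on the Java side's
  // high-priority playout thread and written to an android.media.AudioTrack.
  jboolean started = jni_env->CallBooleanMethod(audio_track_obj, start_playout);
  return started == JNI_TRUE;
}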

Which of these three classes is actually used is decided by the following logic:

int32_t AudioDeviceModuleImpl::CreatePlatformSpecificObjects() {
  RTC_LOG(INFO) << __FUNCTION__;
// Dummy ADM implementations if build flags are set.
#if defined(WEBRTC_DUMMY_AUDIO_BUILD)
  audio_device_.reset(new AudioDeviceDummy());
  RTC_LOG(INFO) << "Dummy Audio APIs will be utilized";
#elif defined(WEBRTC_DUMMY_FILE_DEVICES)
  audio_device_.reset(FileAudioDeviceFactory::CreateFileAudioDevice());
  if (audio_device_) {
    RTC_LOG(INFO) << "Will use file-playing dummy device.";
  } else {
    // Create a dummy device instead.
    audio_device_.reset(new AudioDeviceDummy());
    RTC_LOG(INFO) << "Dummy Audio APIs will be utilized";
  }
// Real (non-dummy) ADM implementations.
#else
  AudioLayer audio_layer(PlatformAudioLayer());
// Windows ADM implementation.
#if defined(WEBRTC_WINDOWS_CORE_AUDIO_BUILD)
  if ((audio_layer == kWindowsCoreAudio) ||
      (audio_layer == kPlatformDefaultAudio)) {
    RTC_LOG(INFO) << "Attempting to use the Windows Core Audio APIs...";
    if (Au
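The listing above is cut off before it reaches the Android-specific branch. As a hedged sketch of the idea only (the enum values and AudioManager helper names shown here are approximations, not verbatim WebRTC code), the Android branch of this selection prefers AAudio when the device supports it, falls back to OpenSL ES when a low-latency output path is available, and otherwise uses the Java AudioTrack path:

// Hedged sketch of the Android selection order; helper names such as
// IsAAudioSupported() and IsLowLatencyPlayoutSupported() are approximations.
#if defined(WEBRTC_ANDROID)
  if (audio_layer == kPlatformDefaultAudio) {
    if (audio_manager_android_->IsAAudioSupported()) {
      audio_layer = kAndroidAAudioAudio;    // Playout handled by AAudioPlayer.
    } else if (audio_manager_android_->IsLowLatencyPlayoutSupported()) {
      audio_layer = kAndroidOpenSLESAudio;  // Playout handled by OpenSLESPlayer.
    } else {
      audio_layer = kAndroidJavaAudio;      // Playout handled by AudioTrackJni.
    }
  }
  // audio_device_ is then constructed with the output class matching
  // the selected audio_layer.
#endif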