Reposted from https://www.cnblogs.com/hzl6255/p/12173595.html
Contents
- 1. Architecture
- 2. Audio HAL
- 3. Native Audio
- 4. Java Audio
Before reading this article, you should first be familiar with <Linux Audio Programming>.
1. Architecture
In Android, audio is implemented in layers. From top to bottom these are:
- Application framework: provides the android.media API (a short usage sketch follows this list)
  Audio management: AudioManager
  Audio capture: MediaRecorder, AudioRecord
  Audio playback: SoundPool, MediaPlayer, AudioTrack
  Audio encoding/decoding: MediaCodec
- JNI: implements the interfaces needed by android.media by calling into the libmedia library; it lives in libandroid_runtime.so
…
- HAL layer: implements audio_hw_device and audio_policy_hal, provides the audio interface towards ALSA, and handles creating and wiring up audio paths
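As a quick illustration of the application-framework layer above, here is a minimal sketch that uses AudioManager to read and adjust the media-stream volume. It only relies on standard android.media calls; the class name VolumeDemo and the Context argument (e.g. from an Activity or Service) are illustrative assumptions.

import android.content.Context;
import android.media.AudioManager;

public class VolumeDemo {
    // Reads the media-stream volume range and raises the volume by one step.
    public static void bumpMediaVolume(Context context) {
        AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        int max = am.getStreamMaxVolume(AudioManager.STREAM_MUSIC);
        int cur = am.getStreamVolume(AudioManager.STREAM_MUSIC);
        if (cur < max) {
            // The last argument is a flags bitmask, e.g. AudioManager.FLAG_SHOW_UI.
            am.setStreamVolume(AudioManager.STREAM_MUSIC, cur + 1, 0);
        }
    }
}

Every call above travels down through the JNI and binder layers described later in this article before reaching the native audio services.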
2. Audio HAL
The Audio HAL architecture is rather involved: it mixes HIDL with the legacy HAL, and the author found it quite confusing as well.
For background, see <HAL Process Startup and HIDL Service Registration under the Android O Treble Architecture>.
2.1 HAL interfaces
The main interfaces provided by the Audio HAL (taking version 2.0 as an example) include:
// Audio device
IDevice.hal
|- initCheck()
|- setMasterVolume(float): set the volume of all audio activities other than voice calls
|- getMasterVolume(): get the master volume
|- setMicMute(bool): set the microphone mute state
|- getMicMute(): get the microphone mute state
|- setMasterMute(bool): set the master mute state
|- getMasterMute(): get the master mute state
|- getInputBufferSize(AudioConfig): get the audio input buffer size
|- openOutputStream(*): create and open an audio hardware output stream
|- openInputStream(*): create and open an audio hardware input stream
|- supportsAudioPatches(): whether the HAL supports audio patches
|- createAudioPatch(*): create an audio patch between the given sources and sinks
|- releaseAudioPatch(*): release an audio patch
|- getAudioPort(*): get the attributes of a given audio port
|- setAudioPortConfig(*): configure an audio port
|- getHwAvSync(): get the device's hardware A/V sync source
|- setScreenState(bool): set the screen state
|- getParameters(vec<string>): get vendor-defined parameter values
|- setParameters(vec<ParameterValue>): set vendor-defined parameter values

// Audio device factory
IDevicesFactory.hal
|- openDevice(Device): open an audio device

// Primary audio device
IPrimaryDevice.hal
|- setVoiceVolume(float): set the voice call volume
|- setMode(AudioMode): set the audio mode
|- getBtScoNrecEnabled(): get whether Bluetooth SCO NREC (noise reduction/echo cancellation) is enabled
|- setBtScoNrecEnabled(): set whether Bluetooth SCO NREC is enabled
|- getBtScoWidebandEnabled(): get whether Bluetooth SCO wideband is enabled
|- setBtScoWidebandEnabled(bool): set whether Bluetooth SCO wideband is enabled
|- getTtyMode(): get the current TTY mode
|- setTtyMode(): set the current TTY mode
|- getHacEnabled(): get whether HAC (hearing aid compatibility) is enabled
|- setHacEnabled(): set whether HAC is enabled

// Audio stream
IStream.hal
|- getFrameSize(): get the frame size
|- getFrameCount(): get the number of frames in the buffer
|- getBufferSize(): get the stream's buffer size
|- getSampleRate(): get the sample rate (Hz)
|- getSupportedSampleRates(): get the sample rates supported by the stream (Hz)
|- setSampleRate(uint32_t): set the stream's sample rate
|- getChannelMask(): get the stream's channel mask
|- getSupportedChannelMasks(): get the channel masks supported by the stream
|- setChannelMask(): set the stream's channel mask
|- getFormat(): get the stream's audio format
|- getSupportedFormats(): get the audio formats supported by the stream
|- setFormat(): set the stream's audio format
|- getAudioProperties(): get the stream's parameters
|- addEffect(): attach an effect to the stream
|- removeEffect(uint64_t): remove an effect from the stream
|- standby(): put the hardware input/output into standby mode
|- getDevice(): get the device(s) the stream is connected to
|- setDevice(): connect a device to the stream
|- setConnectedState(): notify the stream about device connection state
|- setHwAvSync(AudioHwSync): set the hardware A/V sync source
|- getParameters(vec<string>): get vendor parameters
|- setParameters(vec<ParameterValue>): set vendor parameters
|- start(): start stream operation (mmap mode)
|- stop(): stop stream operation
|- createMmapBuffer(): get information about the audio mmap buffer
|- getMmapPosition(): read the current read/write position in the audio mmap buffer
|- close(): close and release the stream

// Audio input stream
IStreamIn.hal
|- getAudioSource(): get the source descriptor of the input stream
|- setGain(): set the input gain of the audio driver
|- prepareForReading(): set up the transport needed to receive audio buffers from the driver
|- getInputFramesLost(): get the number of input frames lost
|- getCapturePosition(): get the number of audio frames captured, together with the clock time

// Audio output stream
IStreamOut.hal
|- getLatency(): get the hardware transport latency (ms)
|- setVolume(float, float): set the volume applied after mixing
|- prepareForWriting(): set up the transport needed to pass audio buffers to the driver
|- getRenderPosition(): get the number of audio frames the DSP has written to the DAC
|- getNextWriteTimestamp(): get the time of the next write to the audio driver (µs)
|- setCallback(): set the callback interface used in non-blocking mode
|- clearCallback(): clear the callback
|- supportsPauseAndResume(): whether the HAL supports pausing and resuming the stream
|- pause(): pause the stream
|- resume(): resume the stream
|- supportsDrain(): whether the HAL supports draining the stream
|- drain(): request notification once the data buffered by the hardware has been played out
|- flush(): flush the stream
|- getPresentationPosition(): get the number of frames presented, together with a timestamp

// Audio output stream callback
IStreamOutCallback.hal
|- onWriteReady(): a non-blocking write has completed
|- onDrainReady(): a drain has completed
|- onError(): an error occurred
2.2 HIDL service
/*
 * Code:   hardware/interfaces/audio/common/all-versions/default/service/service.cpp
 * Output: /vendor/bin/hw/android.hardware.audio@2.0-service
 */
main()
    // Connect to vndservicemanager
    android::ProcessState::initWithDriver("/dev/vndbinder")
    android::ProcessState::self()->startThreadPool()
    registerPassthroughServiceImplementation<audio::V4_0::IDevicesFactory>()
    registerPassthroughServiceImplementation<audio::V2_0::IDevicesFactory>()
    registerPassthroughServiceImplementation<audio::effect::V4_0::IEffectsFactory>()
    registerPassthroughServiceImplementation<audio::effect::V2_0::IEffectsFactory>()
    registerPassthroughServiceImplementation<soundtrigger::V2_1::ISoundTriggerHw>()
    registerPassthroughServiceImplementation<soundtrigger::V2_0::ISoundTriggerHw>()
    registerPassthroughServiceImplementation<bluetooth::a2dp::V1_0::IBluetoothAudioOffload>()
    android::hardware::joinRpcThreadpool()

# cat /vendor/etc/init/android.hardware.audio@2.0-service.rc
service vendor.audio-hal-2-0 /vendor/bin/hw/android.hardware.audio@2.0-service
    class hal
    user audioserver
    oneshot
    interface android.hardware.audio@4.0::IDevicesFactory default
    interface android.hardware.audio@2.0::IDevicesFactory default

# /vendor/etc/vintf/manifest.xml
<manifest version="1.0" type="device" target-level="3">
...
    <hal format="hidl">
        <name>android.hardware.audio</name>
        <transport>hwbinder</transport>
        <version>4.0</version>
        <interface>
            <name>IDevicesFactory</name>
            <instance>default</instance>
        </interface>
    </hal>
...
</manifest>
2.3 libaudiohal
libaudiohal wraps the audio HIDL interfaces and is exposed to AudioFlinger as libaudiohal.so; libaudiohal itself in turn builds on the two libraries libaudiohal@4.0 and libaudiohal@2.0.
# tree frameworks/av/media/libaudiohal
.
+--- 2.0
| +--- Android.bp -- libaudiohal@2.0
+--- 4.0
| +--- Android.bp -- libaudiohal@4.0
+--- Android.bp -- libaudiohal

/*
 * DevicesFactoryHalInterface provides two methods: create and openDevice
 */
DevicesFactoryHalInterface::create()
    /*
     * Only the V2_0 version is analyzed here; V4_0 is similar.
     * Two ways of reaching the audio hardware are provided:
     * - HIDL:   DevicesFactoryHalHidl,  used for primary, usb and remote_submix
     * - Legacy: DevicesFactoryHalLocal, used for a2dp
     */
    new DevicesFactoryHalHybrid()
        new DevicesFactoryHalLocal()
        new DevicesFactoryHalHidl()
            hardware::audio::V2_0::IDevicesFactory::getService()

DevicesFactoryHalInterface::openDevice(char *name, DeviceHalInterface *device)
    DevicesFactoryHalHybrid::openDevice(name, device)
        // For hearing_aid and a2dp devices
        DevicesFactoryHalLocal::openDevice(name, device)
            load_audio_interface(name, audio_hw_device_t **dev)
            new DeviceHalLocal(dev)
        // For all other devices, including primary, usb and remote_submix
        DevicesFactoryHalHidl::openDevice(name, device)
            // Map the HAL module name to the HIDL interface name
            nameFromHal(name, IDevicesFactory::Device &)
            IDevicesFactory::openDevice()
            new DeviceHalHidl(IDevice)

/* Not analyzed here for now */
EffectsFactoryHalInterface::create()
3. Native Audio
3.1 Overview
Before Android N, the native audio services lived inside mediaserver; since Android N they run in the separate audioserver process.
audioserver starts two native binder services:
- AudioFlinger: the executor of audio system policy; it manages audio stream devices and handles the processing and transport of audio stream data
- AudioPolicyService: the maker of audio system policy; it decides device-switching strategy, volume-adjustment strategy, and so on
Note that although AudioFlinger and AudioPolicyService are exposed as binder services, the Java layer does not use them directly. Instead, the native layer wraps these binder services into C++ android::media::* interfaces, which are then made available to the Java layer via JNI.
// android::media::* <===> frameworks/av/media/libaudioclient/
// JNI <===> frameworks/base/core/jni
------------------------------------------------------
| android::media::*  | JNI                           |
------------------------------------------------------
| AudioSystem.cpp    | android_media_AudioSystem.cpp |
| AudioRecord.cpp    | android_media_AudioRecord.cpp |
| AudioTrack.cpp     | android_media_AudioTrack.cpp  |
------------------------------------------------------
The detailed startup sequence of audioserver is as follows:
/*
 * frameworks/av/media/audioserver/audioserver.rc
 */
# cat audioserver.rc
service audioserver /system/bin/audioserver
    class core
    user audioserver
    onrestart restart vendor.audio-hal-2-0
    onrestart restart audio-hal-2-0
/*
 * Code:   frameworks/av/media/audioserver/main_audioserver.cpp
 * Output: /system/bin/audioserver
 */
main()
    AudioFlinger::instantiate()
        BinderService::instantiate()
            BinderService::publish()
                IServiceManager sm = defaultServiceManager()
                sm::addService("media.audio_flinger", new AudioFlinger())
                    AudioFlinger::onFirstRef()
                        new PatchPanel(this)
                        gAudioFlinger = this;
    AudioPolicyService::instantiate()
        sm::addService("media.audio_policy", new AudioPolicyService())
            AudioPolicyService::onFirstRef()
                // Tone playback thread
                new AudioCommandThread("ApmTone", this)
                    AudioCommandThread::onFirstRef()
                        Thread::run()
                            AudioCommandThread::threadLoop()
                // Audio command thread
                new AudioCommandThread("ApmAudio", this)
                // Output command thread
                new AudioCommandThread("ApmOutput", this)
                new AudioPolicyClient(this)
                createAudioPolicyManager()
                    new AudioPolicyManager(mAudioPolicyClient)
                        AudioPolicyManager::AudioPolicyManager()
                            /*
                             * When USE_XML_AUDIO_POLICY_CONF = 1 is defined, load
                             *     /odm/etc/audio_policy_configuration.xml
                             *     /vendor/etc/audio_policy_configuration.xml
                             *     /system/etc/audio_policy_configuration.xml
                             * otherwise load
                             *     /system/etc/audio_policy.conf
                             *     /vendor/etc/audio_policy.conf
                             */
                            AudioPolicyManager::loadConfig()
                                deserializeAudioPolicyXmlConfig()
                            // FIXME: Do a lot of things
                            AudioPolicyManager::initialize()
                new AudioPolicyEffects()
                new UidPolicy(this)
                    UidPolicy::registerSelf()
    // Oboe Service
    AAudioService::instantiate()
    SoundTriggerHwService::instantiate()
3.2 AudioFlinger
The AudioFlinger service exposes its functionality to clients as a binder service.
// Interface definition
frameworks/av/include/media/IAudioFlinger.h

// AudioFlinger local (Bn) side implementation
libaudioflinger <==> frameworks/av/services/audioflinger/*

// AudioFlinger remote (Bp) proxy implementation
libaudioclient <==> frameworks/av/media/libaudioclient/*
The main interfaces AudioFlinger provides are those declared in IAudioFlinger.h.
3.3 AudioPolicyService
// Interface definition
frameworks/av/include/media/IAudioPolicyService.h

// AudioPolicyService local (Bn) side implementation
libaudiopolicyservice <==> frameworks/av/services/audiopolicy/*

// AudioPolicyService remote (Bp) proxy implementation
libaudioclient <==> frameworks/av/media/libaudioclient/*
4. Java Audio
At the Java layer, audio is divided by function into three kinds of interfaces:
- AudioSystem: its API is AudioManager, responsible for overall management of the audio system
- AudioTrack: responsible for audio data output, i.e. playback
- AudioRecord: responsible for audio data input, i.e. recording
4.1 AudioService
AudioService is started by SystemServer and extends IAudioService.Stub (generated automatically from IAudioService.aidl); AudioService sits on the Bn side of IAudioService.
AudioManager holds the Bp side of IAudioService and acts as the client-side proxy for AudioService; almost every request a client makes through AudioManager is ultimately carried out by AudioService.
AudioService's implementation relies on the AudioSystem class, which is the Java layer's proxy to the native layer; through AudioSystem, AudioService interacts with AudioPolicyService and AudioFlinger.
IAudioService - frameworks/base/media/java/android/media/IAudioService.aidl
SystemServer::startOtherServices()
    SystemServiceManager::startService(AudioService.Lifecycle.class)
        new AudioService()
        AudioService::onStart()
            publishBinderService(Context.AUDIO_SERVICE, new AudioService());
                // Register the service with servicemanager
                ServiceManager.addService( , , , )
        AudioService::onBootPhase(SystemService.PHASE_ACTIVITY_MANAGER_READY)
            AudioService::systemReady()
                AudioHandler::handleMessage(MSG_SYSTEM_READY)
                    AudioService::onSystemReady()
                        AudioHandler::onLoadSoundEffects()
                        // Bluetooth related configuration
                        AudioService::onIndicateSystemReady()
                            // In android_media_AudioSystem.cpp
                            android_media_AudioSystem_systemReady()
                                // In AudioSystem.cpp
                                AudioSystem::systemReady()
AudioService mainly exists as the backend service of AudioManager.
@SystemService(Context.AUDIO_SERVICE)
public class AudioManager {
    ...
    private static IAudioService getService() {
        if (sService != null) {
            return sService;
        }
        IBinder b = ServiceManager.getService(Context.AUDIO_SERVICE);
        sService = IAudioService.Stub.asInterface(b);
        return sService;
    }
    ...
}
The functionality of AudioService/AudioManager mainly covers the following three areas (see the sketch after this list):
- Volume control
- Management of audio I/O devices
- The audio focus mechanism
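As a sketch of the audio focus mechanism from the application side, the snippet below uses the classic requestAudioFocus/abandonAudioFocus API (still functional, although newer code uses AudioFocusRequest); the class name, the empty listener body and the Runnable parameter are illustrative assumptions.

import android.content.Context;
import android.media.AudioManager;

public class FocusDemo {
    // Requests transient audio focus before playing and releases it afterwards.
    public static void playWithFocus(Context context, Runnable playback) {
        AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);

        AudioManager.OnAudioFocusChangeListener listener = focusChange -> {
            // React to AUDIOFOCUS_LOSS / AUDIOFOCUS_GAIN etc., e.g. pause or duck playback.
        };

        int result = am.requestAudioFocus(listener,
                AudioManager.STREAM_MUSIC,
                AudioManager.AUDIOFOCUS_GAIN_TRANSIENT);

        if (result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
            try {
                playback.run();
            } finally {
                am.abandonAudioFocus(listener);
            }
        }
    }
}

AudioService arbitrates these focus requests across all clients, which is what the "audio focus mechanism" item above refers to.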
4.2 AudioTrack
The Java-layer AudioTrack uses, via JNI, the interfaces wrapped by android_media_AudioTrack.cpp, which in turn call into android::media::AudioTrack in libaudioclient.
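As an illustration of this path from the application side, here is a minimal streaming-playback sketch with the public AudioTrack API (the deprecated stream-type constructor is used for brevity; the class name and the raw PCM buffer are illustrative assumptions).

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

public class TrackDemo {
    // Plays a buffer of raw 16-bit mono PCM at 44.1 kHz in streaming mode.
    public static void playPcm(byte[] pcm) {
        int sampleRate = 44100;
        int minBuf = AudioTrack.getMinBufferSize(sampleRate,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);

        AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
                minBuf, AudioTrack.MODE_STREAM);

        track.play();                      // enter the playing state
        track.write(pcm, 0, pcm.length);   // blocks until the data has been queued
        track.stop();
        track.release();                   // free the underlying native AudioTrack
    }
}

The write() call is what crosses into android_media_AudioTrack.cpp and from there into the native AudioTrack, whose buffer is consumed on the AudioFlinger side.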
4.3 AudioRecord
The Java-layer AudioRecord uses, via JNI, the interfaces wrapped by android_media_AudioRecord.cpp.
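Symmetrically, here is a minimal capture sketch with the public AudioRecord API (the RECORD_AUDIO permission is assumed to be granted; the class name and the fixed one-second buffer are illustrative assumptions).

import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

public class RecordDemo {
    // Captures roughly one second of 16-bit mono PCM at 16 kHz from the microphone.
    public static byte[] captureOneSecond() {
        int sampleRate = 16000;
        int minBuf = AudioRecord.getMinBufferSize(sampleRate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);

        AudioRecord record = new AudioRecord(MediaRecorder.AudioSource.MIC,
                sampleRate, AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT, minBuf);

        byte[] pcm = new byte[sampleRate * 2];   // one second of 16-bit mono samples
        record.startRecording();
        int offset = 0;
        while (offset < pcm.length) {
            int n = record.read(pcm, offset, pcm.length - offset);  // blocking read
            if (n <= 0) {
                break;
            }
            offset += n;
        }
        record.stop();
        record.release();
        return pcm;
    }
}

Here read() goes through android_media_AudioRecord.cpp into the native AudioRecord, whose buffer is filled by AudioFlinger's record path.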
OpenSL ES and AAudio
https://www.jianshu.com/p/a676d4d959ae
https://source.android.google.cn/devices/audio/implement
https://blog.csdn.net/Ciellee/article/details/101980726
https://blog.csdn.net/shell812/article/details/73467010
https://www.jianshu.com/p/cf98b3cc6767
https://blog.csdn.net/u013928208/article/details/81667162