iOS WeChat-Style Voice Input Animation


This article follows on from the previous one and implements an animation that changes with the loudness of the voice input.

//
//  PBSpeechRecognizer.h
//  ParkBest
//
//  Created by summerxx27 on 2018/10/30.
//  Copyright © 2018年 summerxx27. All rights reserved.
//
#import <Foundation/Foundation.h>

NS_ASSUME_NONNULL_BEGIN
@protocol PBSpeechRecognizerProtocol <NSObject>
@optional
- (void)recognitionSuccess:(NSString *)result;
- (void)recognitionFail:(NSString *)result;
- (void)level:(float)value;
@end
@interface PBSpeechRecognizer : NSObject
@property (nonatomic, weak) id<PBSpeechRecognizerProtocol> delegate;
- (void)startR;
- (void)stopR;
@end

NS_ASSUME_NONNULL_END
//
//  PBSpeechRecognizer.m
//  ParkBest
//
//  Created by summerxx27 on 2018/10/30.
//  Copyright © 2018年 summerxx27. All rights reserved.
//
#import "PBSpeechRecognizer.h"
#import <Speech/Speech.h>
API_AVAILABLE(ios(10.0))
@interface PBSpeechRecognizer()
@property (nonatomic, strong) AVAudioEngine *audioEngine;
@property (nonatomic, strong) SFSpeechRecognizer *speechRecognizer;
@property (nonatomic, strong) SFSpeechAudioBufferRecognitionRequest *recognitionRequest;
@property (nonatomic, strong) AVAudioRecorder *recorder;
@property (nonatomic, strong) NSTimer *levelTimer;
@end
@implementation PBSpeechRecognizer

- (void)startR {
    if (!self.speechRecognizer) {
        // Set the recognition locale
        NSLocale *locale = [NSLocale localeWithLocaleIdentifier:@"zh-CN"];
        if (@available(iOS 10.0, *)) {
            self.speechRecognizer = [[SFSpeechRecognizer alloc] initWithLocale:locale];
        } else {
            // Fallback on earlier versions
        }
    }
    if (!self.audioEngine) {
        self.audioEngine = [[AVAudioEngine alloc] init];
    }
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    if (@available(iOS 10.0, *)) {
        [audioSession setCategory:AVAudioSessionCategoryRecord
                             mode:AVAudioSessionModeMeasurement
                          options:AVAudioSessionCategoryOptionDuckOthers
                            error:nil];
    } else {
        // Fallback on earlier versions
    }
    [audioSession setActive:YES
                withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation
                      error:nil];
    if (self.recognitionRequest) {
        [self.recognitionRequest endAudio];
        self.recognitionRequest = nil;
    }
    if (@available(iOS 10.0, *)) {
        self.recognitionRequest = [[SFSpeechAudioBufferRecognitionRequest alloc] init];
    } else {
        // Fallback on earlier versions
    }
    self.recognitionRequest.shouldReportPartialResults = YES; // deliver partial (live) results
    if (@available(iOS 10.0, *)) {
        [self.speechRecognizer recognitionTaskWithRequest:self.recognitionRequest
                                            resultHandler:^(SFSpeechRecognitionResult * _Nullable result, NSError * _Nullable error) {
            if (result.isFinal) {
                NSLog(@"is final: %d  result: %@", result.isFinal, result.bestTranscription.formattedString);
                if ([self.delegate respondsToSelector:@selector(recognitionSuccess:)]) {
                    [self.delegate recognitionSuccess:result.bestTranscription.formattedString];
                }
            } else {
                if ([self.delegate respondsToSelector:@selector(recognitionFail:)]) {
//                    [self.delegate recognitionFail:error.domain];
                }
            }
        }];
    } else {
        // Fallback on earlier versions
    }
    AVAudioFormat *recordingFormat = [[self.audioEngine inputNode] outputFormatForBus:0];
    [[self.audioEngine inputNode] installTapOnBus:0
                                       bufferSize:1024
                                           format:recordingFormat
                                            block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
        [self.recognitionRequest appendAudioPCMBuffer:buffer];
    }];
    [self.audioEngine prepare];
    [self.audioEngine startAndReturnError:nil];

    /// Level metering: run an AVAudioRecorder alongside the engine to sample input loudness
    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];
    /// The recording file is not needed, so write it to /dev/null
    NSURL *url = [NSURL fileURLWithPath:@"/dev/null"];
    NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:
                              [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                              [NSNumber numberWithInt:kAudioFormatAppleLossless], AVFormatIDKey,
                              [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
                              [NSNumber numberWithInt:AVAudioQualityMax], AVEncoderAudioQualityKey,
                              nil];
    NSError *error;
    _recorder = [[AVAudioRecorder alloc] initWithURL:url settings:settings error:&error];
    if (_recorder) {
        [_recorder prepareToRecord];
        _recorder.meteringEnabled = YES;
        [_recorder record];
        _levelTimer = [NSTimer scheduledTimerWithTimeInterval:1
                                                       target:self
                                                     selector:@selector(levelTimerCallback:)
                                                     userInfo:nil
                                                      repeats:YES];
    } else {
        NSLog(@"%@", [error description]);
    }
}

/// After voice input starts, a timer fires periodically to sample how loud the input is
- (void)levelTimerCallback:(NSTimer *)timer {
    [_recorder updateMeters];
    float level;                  // The linear 0.0 .. 1.0 value we need.
    float minDecibels = -80.0f;   // Or use -60 dB, which I measured in a silent room.
    float decibels    = [_recorder averagePowerForChannel:0];
    if (decibels < minDecibels) {
        level = 0.0f;
    } else if (decibels >= 0.0f) {
        level = 1.0f;
    } else {
        float root            = 2.0f;
        float minAmp          = powf(10.0f, 0.05f * minDecibels);
        float inverseAmpRange = 1.0f / (1.0f - minAmp);
        float amp             = powf(10.0f, 0.05f * decibels);
        float adjAmp          = (amp - minAmp) * inverseAmpRange;
        level = powf(adjAmp, 1.0f / root);
    }
    /// level is in [0, 1]; scale it to [0, 120]
    /// and hand it back to the caller through the delegate
    if ([self.delegate respondsToSelector:@selector(level:)]) {
        [self.delegate level:120 * level];
    }
}

- (void)stopR {
    [_levelTimer invalidate];
    [[self.audioEngine inputNode] removeTapOnBus:0];
    [self.audioEngine stop];
    [self.recognitionRequest endAudio];
    self.recognitionRequest = nil;
}
@end
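
A minimal usage sketch of the class above. The view controller name DemoViewController and the placement of the permission request are illustrative assumptions, not code from the original project; the level: callback that drives the animation is the one shown further below. SFSpeechRecognizer also needs the NSSpeechRecognitionUsageDescription and NSMicrophoneUsageDescription keys in Info.plist, and authorization should be requested before the first recognition task.

#import <Speech/Speech.h>
#import "PBSpeechRecognizer.h"

@interface DemoViewController () <PBSpeechRecognizerProtocol>
@property (nonatomic, strong) PBSpeechRecognizer *recognizer;
@end

@implementation DemoViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    self.recognizer = [[PBSpeechRecognizer alloc] init];
    self.recognizer.delegate = self;
    // Ask for speech-recognition permission up front (iOS 10+)
    if (@available(iOS 10.0, *)) {
        [SFSpeechRecognizer requestAuthorization:^(SFSpeechRecognizerAuthorizationStatus status) {
            NSLog(@"speech auth status: %ld", (long)status);
        }];
    }
}

- (void)recognitionSuccess:(NSString *)result {
    NSLog(@"recognized text: %@", result);
}

- (void)recognitionFail:(NSString *)result {
    NSLog(@"recognition failed: %@", result);
}

@end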

With the value passed back through the delegate, you can simply switch between images, or skip the images entirely and draw the small bars next to the microphone yourself (see the sketch after the method below).

- (void)level:(float)value {
    if (value > 0 && value <= 10) {
        _voiceView.image = [UIImage imageNamed:@"v_1"];
    } else if (value > 10 && value <= 20) {
        _voiceView.image = [UIImage imageNamed:@"v_2"];
    } else if (value > 20 && value <= 25) {
        _voiceView.image = [UIImage imageNamed:@"v_3"];
    } else if (value > 25 && value <= 35) {
        _voiceView.image = [UIImage imageNamed:@"v_4"];
    } else if (value > 35 && value <= 45) {
        _voiceView.image = [UIImage imageNamed:@"v_5"];
    } else if (value > 45) {
        _voiceView.image = [UIImage imageNamed:@"v_6"];
    }
}
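
If you would rather draw the bars instead of swapping images, the same callback can drive a row of CALayer bars. A rough sketch, assuming a hypothetical barLayers property (an NSArray of six pre-created CALayer objects laid out next to the microphone icon); none of this is in the original code.

/// Hypothetical alternative: light up 0 ~ 6 bars instead of switching images
- (void)level:(float)value {
    NSInteger activeBars = (NSInteger)ceilf(value / 20.0f);   // map 0 ~ 120 to 0 ~ 6 bars
    [CATransaction begin];
    [CATransaction setDisableActions:YES];                    // avoid implicit-animation flicker
    [self.barLayers enumerateObjectsUsingBlock:^(CALayer *bar, NSUInteger idx, BOOL *stop) {
        bar.backgroundColor = (idx < activeBars)
            ? [UIColor whiteColor].CGColor
            : [UIColor colorWithWhite:1.0 alpha:0.25].CGColor;
    }];
    [CATransaction commit];
}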

Here is the long-press handler:

- (void)longPress:(UILongPressGestureRecognizer *)gestureRecognizer {
    CGPoint point = [gestureRecognizer locationInView:self.view];
    if (gestureRecognizer.state == UIGestureRecognizerStateBegan) {
        [self startRecording];
    } else if (gestureRecognizer.state == UIGestureRecognizerStateEnded) {
        [self stopRecording];
    } else if (gestureRecognizer.state == UIGestureRecognizerStateChanged) {
        NSLog(@"y ========== %f", point.y);
        /// Once the finger slides up past a certain y value, cancel the recognition;
        /// a simple threshold check is enough here
        if (point.y < 513) {
            _cancel = @"yes";
            NSLog(@"voice cancel");
        }
    } else if (gestureRecognizer.state == UIGestureRecognizerStateFailed) {
    } else if (gestureRecognizer.state == UIGestureRecognizerStateCancelled) {
    }
}
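
For completeness, a sketch of how the pieces above might be wired together. The recordButton, the startRecording/stopRecording wrappers, and the way the _cancel flag is consumed are assumptions inferred from the handler above, not code from the original project.

- (void)setupRecordButton {
    UILongPressGestureRecognizer *longPress =
        [[UILongPressGestureRecognizer alloc] initWithTarget:self action:@selector(longPress:)];
    [self.recordButton addGestureRecognizer:longPress];   // hold-to-talk button (assumed)
}

- (void)startRecording {
    _cancel = @"no";
    _voiceView.hidden = NO;                 // show the microphone overlay
    [self.recognizer startR];
}

- (void)stopRecording {
    [self.recognizer stopR];
    _voiceView.hidden = YES;
    if ([_cancel isEqualToString:@"yes"]) {
        // The finger slid up past the threshold, so discard this round of input
        return;
    }
    // Otherwise recognitionSuccess: will deliver the final text shortly after
}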

Of course, this is only a rough simulation and many details remain to be polished. It looks simple, but it really isn't. sad

