iOS Speech Recognition / Speech-to-Text: A Detailed Tutorial

Preface: I have recently been looking into speech recognition, comparing Baidu's and iFLYTEK's offerings. My take on the two: iFLYTEK is undoubtedly the more professional option, with very high and accurate recognition, but many developers, myself included, want offline recognition, and iFLYTEK charges for that. As for request quotas, both let you apply for higher limits, so for apps with many users they are roughly equivalent. Because it is free and supports offline use, I chose Baidu's offline speech recognition. The integration itself is fairly simple; most of the work is UI. Here is the tutorial:

1. First: the required libraries

2. I use a custom UI, so the focus here is on the functional implementation (the header imports):

```objc
// Header imports
#import "BDVRCustomRecognitonViewController.h"
#import "BDVRClientUIManager.h"
#import "WBVoiceRecordHUD.h"
#import "BDVRViewController.h"
#import "MyViewController.h"
#import "BDVRSConfig.h"
```
3. The SDK methods you need to know about; the ones you can use are listed below:

```objc
//------------------- Class methods ------------------------
// Create the voice recognition client object; it is a singleton
+ (BDVoiceRecognitionClient *)sharedInstance;

// Release the voice recognition client object
+ (void)releaseInstance;


//------------------- Recognition methods -----------------------
// Check whether recording is possible
- (BOOL)isCanRecorder;

// Start voice recognition. The caller must implement the MVoiceRecognitionClientDelegate
// protocol and pass in the implementing object to receive events.
// Return value: see TVoiceRecognitionStartWorkResult
- (int)startVoiceRecognition:(id<MVoiceRecognitionClientDelegate>)aDelegate;

// Called when the user actively finishes speaking
- (void)speakFinish;

// End the current recognition session
- (void)stopVoiceRecognition;

/**
 * @brief Get the sample rate used for the current recognition
 *
 * @return sample rate (16000/8000)
 */
- (int)getCurrentSampleRate;

/**
 * @brief Get the current recognition mode (deprecated)
 *
 * @return current recognition mode
 */
- (int)getCurrentVoiceRecognitionMode __attribute__((deprecated));

/**
 * @brief Set the current recognition mode (deprecated);
 *        use -(void)setProperty:(TBDVoiceRecognitionProperty)property instead
 *
 * @param aMode recognition mode
 */
- (void)setCurrentVoiceRecognitionMode:(int)aMode __attribute__((deprecated));

// Set the recognition type
- (void)setProperty:(TBDVoiceRecognitionProperty)property __attribute__((deprecated));

// Get the current recognition type
- (int)getRecognitionProperty __attribute__((deprecated));

// Set the list of recognition types. Except for EVoiceRecognitionPropertyInput and
// EVoiceRecognitionPropertySong, recognition types can be combined.
- (void)setPropertyList:(NSArray *)prop_list;

// cityID is only effective for the EVoiceRecognitionPropertyMap recognition type
- (void)setCityID:(NSInteger)cityID;

// Get the current list of recognition types
- (NSArray *)getRecognitionPropertyList;

//------------------- Prompt tones -----------------------
// Play prompt tones (played by default) for recording start and recording end.
// BDVoiceRecognitionClientResources/Tone
// record_start.caf   sound played when recording starts
// record_end.caf     sound played when recording ends
// The sound resources must be added to the project. You may replace the resource files,
// but the file names must not change. Keep prompt tones short, around 0.5 seconds.
// aTone: see TVoiceRecognitionPlayTones. Returns NO if the file is not found.
- (BOOL)setPlayTone:(int)aTone isPlay:(BOOL)aIsPlay;
```
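
As a rough illustration of how these methods fit together, here is a minimal configuration sketch that is not taken from the original post. The constant EVoiceRecognitionPropertySearch, the NSNumber wrapping, and the example city ID are assumptions; check the SDK's TBDVoiceRecognitionProperty enum and documentation for the values that match your use case.

```objc
// Minimal configuration sketch (assumptions noted above).
BDVoiceRecognitionClient *client = [BDVoiceRecognitionClient sharedInstance];

// Recognition types can be combined (except Input and Song, per the comments above).
[client setPropertyList:@[@(EVoiceRecognitionPropertySearch),
                          @(EVoiceRecognitionPropertyMap)]];

// cityID only matters for the Map recognition type (example value, assumed).
[client setCityID:131];

if ([client isCanRecorder]) {
    // Safe to start recognition; see sections 6-8 for the actual start/stop calls.
}
```
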

4. Recording-button state and animation (my own custom code; feel free to use it as a reference)

For daily updates, follow my Sina Weibo: http://weibo.com/hanjunqiang

```objc
// Recording-button related
@property (nonatomic, weak, readonly) UIButton *holdDownButton; // hold-to-talk button

/**
 *  Whether recording has been cancelled
 */
@property (nonatomic, assign, readwrite) BOOL isCancelled;

/**
 *  Whether recording is in progress
 */
@property (nonatomic, assign, readwrite) BOOL isRecording;

/**
 *  Fired when the record button is pressed down; recording starts here
 */
- (void)holdDownButtonTouchDown;

/**
 *  Fired when the finger leaves the screen outside the record button; recording is cancelled
 */
- (void)holdDownButtonTouchUpOutside;

/**
 *  Fired when the finger leaves the screen inside the record button; recording is finished
 */
- (void)holdDownButtonTouchUpInside;

/**
 *  Fired when the finger drags outside the bounds of the record button
 */
- (void)holdDownDragOutside;
```
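
These handlers are plain methods, so they still need to be attached to the button's control events somewhere (for example in viewDidLoad). A minimal wiring sketch, assuming holdDownButton has already been created (e.g. with createButtonWithImage:HLImage: from the next section):

```objc
// Wire the hold-to-talk button to the handlers declared above (sketch, not from the original post).
[self.holdDownButton addTarget:self action:@selector(holdDownButtonTouchDown)
              forControlEvents:UIControlEventTouchDown];
[self.holdDownButton addTarget:self action:@selector(holdDownButtonTouchUpInside)
              forControlEvents:UIControlEventTouchUpInside];
[self.holdDownButton addTarget:self action:@selector(holdDownButtonTouchUpOutside)
              forControlEvents:UIControlEventTouchUpOutside];
[self.holdDownButton addTarget:self action:@selector(holdDownDragOutside)
              forControlEvents:UIControlEventTouchDragExit];
[self.holdDownButton addTarget:self action:@selector(holdDownDragInside)
              forControlEvents:UIControlEventTouchDragEnter];
```
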

5. Setting up the UI

```objc
#pragma mark - layout subViews UI

/**
 *  Create a button with a normal image and a highlighted image
 *
 *  @param image   normal-state image
 *  @param hlImage highlighted-state image
 *
 *  @return the button object
 */
- (UIButton *)createButtonWithImage:(UIImage *)image HLImage:(UIImage *)hlImage;
- (void)holdDownDragInside;
- (void)createInitView;        // Create the initial view; used while the prompt tone plays
- (void)createRecordView;      // Create the recording view
- (void)createRecognitionView; // Create the recognizing view
- (void)createErrorViewWithErrorType:(int)aStatus; // Show detailed error info in the recognition view
- (void)createRunLogWithStatus:(int)aStatus;       // Show detailed status info in the status view

- (void)finishRecord:(id)sender; // User tapped "finish"
- (void)cancel:(id)sender;       // User tapped "cancel"

- (void)startVoiceLevelMeterTimer;
- (void)freeVoiceLevelMeterTimerTimer;
```
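
The original post only declares the two voice-level-meter timer methods. A minimal sketch of one way to implement them, assuming a voiceLevelMeterTimer property and some way to read the current input level from the SDK (neither appears in the original):

```objc
// Sketch only -- these implementations are not part of the original post.
- (void)startVoiceLevelMeterTimer
{
    [self freeVoiceLevelMeterTimerTimer];
    self.voiceLevelMeterTimer = [NSTimer scheduledTimerWithTimeInterval:0.1
                                                                 target:self
                                                               selector:@selector(updateVoiceLevelMeter)
                                                               userInfo:nil
                                                                repeats:YES];
}

- (void)freeVoiceLevelMeterTimerTimer
{
    [self.voiceLevelMeterTimer invalidate];
    self.voiceLevelMeterTimer = nil;
}

- (void)updateVoiceLevelMeter
{
    // Read the current input level from the SDK here (call not shown in the post)
    // and feed it to the recording HUD, e.g. WBVoiceRecordHUD from the imports.
}
```
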

6. The most important calls


```objc
// Recording finished
[[BDVoiceRecognitionClient sharedInstance] speakFinish];
```

```objc
// Cancel recording
[[BDVoiceRecognitionClient sharedInstance] stopVoiceRecognition];
```

7. The two delegate methods

```objc
- (void)VoiceRecognitionClientWorkStatus:(int)aStatus obj:(id)aObj
{
    switch (aStatus)
    {
        case EVoiceRecognitionClientWorkStatusFlushData: // Partial results streamed while speaking
        {
            NSString *text = [aObj objectAtIndex:0];

            if ([text length] > 0)
            {
//                [clientSampleViewController logOutToContinusManualResut:text];

                UILabel *clientWorkStatusFlushLabel = [[UILabel alloc] initWithFrame:CGRectMake(kScreenWidth/2 - 100, 64, 200, 60)];
                clientWorkStatusFlushLabel.text = text;
                clientWorkStatusFlushLabel.textAlignment = NSTextAlignmentCenter;
                clientWorkStatusFlushLabel.font = [UIFont systemFontOfSize:18.0f];
                clientWorkStatusFlushLabel.numberOfLines = 0;
                clientWorkStatusFlushLabel.backgroundColor = [UIColor whiteColor];
                [self.view addSubview:clientWorkStatusFlushLabel];
            }

            break;
        }
        case EVoiceRecognitionClientWorkStatusFinish: // Recognition finished normally with a result
        {
            [self createRunLogWithStatus:aStatus];

            if ([[BDVoiceRecognitionClient sharedInstance] getRecognitionProperty] != EVoiceRecognitionPropertyInput)
            {
                // In search mode the result is an array, e.g.
                // ["公园", "公元"]
                NSMutableArray *audioResultData = (NSMutableArray *)aObj;
                NSMutableString *tmpString = [[NSMutableString alloc] initWithString:@""];

                for (int i = 0; i < [audioResultData count]; i++)
                {
                    [tmpString appendFormat:@"%@\r\n", [audioResultData objectAtIndex:i]];
                }

                clientSampleViewController.resultView.text = nil;
                [clientSampleViewController logOutToManualResut:tmpString];
            }
            else
            {
                NSString *tmpString = [[BDVRSConfig sharedInstance] composeInputModeResult:aObj];
                [clientSampleViewController logOutToContinusManualResut:tmpString];
            }

            if (self.view.superview)
            {
                [self.view removeFromSuperview];
            }

            break;
        }
        case EVoiceRecognitionClientWorkStatusReceiveData:
        {
            // This status is only used in input mode.
            // In input mode the result carries confidence scores, e.g.:
            //  [
            //      [
            //         {
            //             "百度" = "0.6055192947387695";
            //         },
            //         {
            //             "摆渡" = "0.3625582158565521";
            //         },
            //      ]
            //      [
            //         {
            //             "一下" = "0.7665404081344604";
            //         }
            //      ],
            //   ]
            // Disabled for now -- otherwise it interferes with the result passed on navigation
//            NSString *tmpString = [[BDVRSConfig sharedInstance] composeInputModeResult:aObj];
//            [clientSampleViewController logOutToContinusManualResut:tmpString];

            break;
        }
        case EVoiceRecognitionClientWorkStatusEnd: // User finished speaking; waiting for the server's result
        {
            [self createRunLogWithStatus:aStatus];
            if ([BDVRSConfig sharedInstance].voiceLevelMeter)
            {
                [self freeVoiceLevelMeterTimerTimer];
            }

            [self createRecognitionView];

            break;
        }
        case EVoiceRecognitionClientWorkStatusCancel:
        {
            if ([BDVRSConfig sharedInstance].voiceLevelMeter)
            {
                [self freeVoiceLevelMeterTimerTimer];
            }

            [self createRunLogWithStatus:aStatus];

            if (self.view.superview)
            {
                [self.view removeFromSuperview];
            }
            break;
        }
        case EVoiceRecognitionClientWorkStatusStartWorkIng: // The engine has started; the user may speak
        {
            if ([BDVRSConfig sharedInstance].playStartMusicSwitch) // If a start tone was played, tell the user they can speak now
            {
                [self createRecordView];
            }

            if ([BDVRSConfig sharedInstance].voiceLevelMeter) // Start monitoring the input level
            {
                [self startVoiceLevelMeterTimer];
            }

            [self createRunLogWithStatus:aStatus];

            break;
        }
        case EVoiceRecognitionClientWorkStatusNone:
        case EVoiceRecognitionClientWorkPlayStartTone:
        case EVoiceRecognitionClientWorkPlayStartToneFinish:
        case EVoiceRecognitionClientWorkStatusStart:
        case EVoiceRecognitionClientWorkPlayEndToneFinish:
        case EVoiceRecognitionClientWorkPlayEndTone:
        {
            [self createRunLogWithStatus:aStatus];
            break;
        }
        case EVoiceRecognitionClientWorkStatusNewRecordData:
        {
            break;
        }
        default:
        {
            [self createRunLogWithStatus:aStatus];
            if ([BDVRSConfig sharedInstance].voiceLevelMeter)
            {
                [self freeVoiceLevelMeterTimerTimer];
            }
            if (self.view.superview)
            {
                [self.view removeFromSuperview];
            }

            break;
        }
    }
}
```
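
The imports in section 2 suggest the final result eventually ends up on another screen (MyViewController). Purely as an illustration of that hand-off, here is a sketch you could place in the EVoiceRecognitionClientWorkStatusFinish branch; the searchText property is hypothetical and not part of the original code:

```objc
// Hypothetical hand-off of the recognized text to the next screen.
MyViewController *resultVC = [[MyViewController alloc] init];
resultVC.searchText = tmpString; // assumed property on MyViewController
[self.navigationController pushViewController:resultVC animated:YES];
```
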

```objc
- (void)VoiceRecognitionClientNetWorkStatus:(int)aStatus
{
    switch (aStatus)
    {
        case EVoiceRecognitionClientNetWorkStatusStart:
        {
            [self createRunLogWithStatus:aStatus];
            [[UIApplication sharedApplication] setNetworkActivityIndicatorVisible:YES];
            break;
        }
        case EVoiceRecognitionClientNetWorkStatusEnd:
        {
            [self createRunLogWithStatus:aStatus];
            [[UIApplication sharedApplication] setNetworkActivityIndicatorVisible:NO];
            break;
        }
    }
}
```
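
Both callbacks belong to the MVoiceRecognitionClientDelegate protocol referenced in section 3, so the view controller has to declare conformance. A minimal sketch (putting it in a class extension is just a common choice; use whatever fits your project):

```objc
// Declare conformance so that startVoiceRecognition:self compiles without warnings.
@interface BDVRCustomRecognitonViewController () <MVoiceRecognitionClientDelegate>
@end
```
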

8. Recording-button handlers


```objc
#pragma mark ------ Button handling -------
- (void)holdDownButtonTouchDown {
    // Start the animation
    _disPlayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(delayAnimation)];
    _disPlayLink.frameInterval = 40;
    [_disPlayLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];

    self.isCancelled = NO;
    self.isRecording = NO;

    // Start voice recognition. VoiceRecognitionClientWorkStatus:obj: of the
    // MVoiceRecognitionClientDelegate protocol must be implemented beforehand.
    int startStatus = -1;
    startStatus = [[BDVoiceRecognitionClient sharedInstance] startVoiceRecognition:self];
    if (startStatus != EVoiceRecognitionStartWorking) // Report an error if starting failed
    {
        NSString *statusString = [NSString stringWithFormat:@"%d", startStatus];
        [self performSelector:@selector(firstStartError:) withObject:statusString afterDelay:0.3]; // Delay 0.3 s so the view can be removed cleanly on error
        return;
    }
    // "Hold to talk - release to search" hint
    [voiceImageStr removeFromSuperview];
    voiceImageStr = [[UIImageView alloc] initWithFrame:CGRectMake(kScreenWidth/2 - 40, kScreenHeight - 153, 80, 33)];
    voiceImageStr.backgroundColor = [UIColor colorWithPatternImage:[UIImage imageNamed:@"searchVoice"]];
    [self.view addSubview:voiceImageStr];
}

- (void)holdDownButtonTouchUpOutside {
    // Stop the animation
    [self.view.layer removeAllAnimations];
    [_disPlayLink invalidate];
    _disPlayLink = nil;

    // Cancel recording
    [[BDVoiceRecognitionClient sharedInstance] stopVoiceRecognition];

    if (self.view.superview)
    {
        [self.view removeFromSuperview];
    }
}

- (void)holdDownButtonTouchUpInside {
    // Stop the animation
    [self.view.layer removeAllAnimations];
    [_disPlayLink invalidate];
    _disPlayLink = nil;

    [[BDVoiceRecognitionClient sharedInstance] speakFinish];
}

- (void)holdDownDragOutside {
    // Only handle the drag-outside action if recording has already started;
    // otherwise just flip isCancelled so recording never starts.
    if (self.isRecording) {
//        if ([self.delegate respondsToSelector:@selector(didDragOutsideAction)]) {
//            [self.delegate didDragOutsideAction];
//        }
    } else {
        self.isCancelled = YES;
    }
}


#pragma mark - layout subViews UI

- (UIButton *)createButtonWithImage:(UIImage *)image HLImage:(UIImage *)hlImage {
    UIButton *button = [[UIButton alloc] initWithFrame:CGRectMake(kScreenWidth/2 - 36, kScreenHeight - 120, 72, 72)];

    if (image)
        [button setBackgroundImage:image forState:UIControlStateNormal];
    if (hlImage)
        [button setBackgroundImage:hlImage forState:UIControlStateHighlighted];

    return button;
}

#pragma mark ----------- Animation -----------
- (void)startAnimation
{
    CALayer *layer = [[CALayer alloc] init];
    layer.cornerRadius = [UIScreen mainScreen].bounds.size.width/2;
    layer.frame = CGRectMake(0, 0, layer.cornerRadius * 2, layer.cornerRadius * 2);
    layer.position = CGPointMake([UIScreen mainScreen].bounds.size.width/2, [UIScreen mainScreen].bounds.size.height - 84);
    //    self.view.layer.position;
    UIColor *color = [UIColor colorWithRed:arc4random()%10*0.1 green:arc4random()%10*0.1 blue:arc4random()%10*0.1 alpha:1];
    layer.backgroundColor = color.CGColor;
    [self.view.layer addSublayer:layer];

    CAMediaTimingFunction *defaultCurve = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionDefault];

    _animaTionGroup = [CAAnimationGroup animation];
    _animaTionGroup.delegate = self;
    _animaTionGroup.duration = 2;
    _animaTionGroup.removedOnCompletion = YES;
    _animaTionGroup.timingFunction = defaultCurve;

    CABasicAnimation *scaleAnimation = [CABasicAnimation animationWithKeyPath:@"transform.scale.xy"];
    scaleAnimation.fromValue = @0.0;
    scaleAnimation.toValue = @1.0;
    scaleAnimation.duration = 2;

    CAKeyframeAnimation *opencityAnimation = [CAKeyframeAnimation animationWithKeyPath:@"opacity"];
    opencityAnimation.duration = 2;
    opencityAnimation.values = @[@0.8, @0.4, @0];
    opencityAnimation.keyTimes = @[@0, @0.5, @1];
    opencityAnimation.removedOnCompletion = YES;

    NSArray *animations = @[scaleAnimation, opencityAnimation];
    _animaTionGroup.animations = animations;
    [layer addAnimation:_animaTionGroup forKey:nil];

    [self performSelector:@selector(removeLayer:) withObject:layer afterDelay:1.5];
}

- (void)removeLayer:(CALayer *)layer
{
    [layer removeFromSuperlayer];
}


- (void)delayAnimation
{
    [self startAnimation];
}
```
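
holdDownButtonTouchDown schedules a firstStartError: call when startVoiceRecognition: fails, but the original post never shows that method. A minimal sketch of what it could do, reusing createErrorViewWithErrorType: from section 5 (this implementation is an assumption, not the author's code):

```objc
// Sketch only -- firstStartError: is referenced above but not shown in the original post.
- (void)firstStartError:(NSString *)statusString
{
    // Show the detailed error in the recognition view, then remove the overlay.
    [self createErrorViewWithErrorType:[statusString intValue]];

    if (self.view.superview)
    {
        [self.view removeFromSuperview];
    }
}
```
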

With the steps above completed, you are done!

A few notes:

1. Speech recognition needs microphone access, so the Simulator will throw 12 errors; running on a real device fixes this (see the permission sketch after these notes).

2. The offline licensing setup is not complicated: the project's Bundle Identifier just has to match the one registered for Baidu's offline license, as shown in the figure below:
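
Regarding note 1: besides running on a device, it can help to request microphone permission explicitly before the first recognition attempt. A short sketch using AVAudioSession (not part of the original post):

```objc
#import <AVFoundation/AVFoundation.h>

// e.g. in viewDidLoad, before the user can press the talk button:
[[AVAudioSession sharedInstance] requestRecordPermission:^(BOOL granted) {
    if (!granted) {
        NSLog(@"Microphone permission denied; speech recognition will not work.");
    }
}];
```
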


The final effect looks like this:

