【Android】iFlytek (科大讯飞) Speech Recognition: Fixing the "SpeechRecognizer.createRecognizer() returns a null SpeechRecognizer object" problem

Contents

1. Reproducing the Problem

2. Cause Analysis

3. Solutions


While integrating the iFlytek speech recognition SDK, I ran into the problem that the SpeechRecognizer object returned by SpeechRecognizer.createRecognizer() was null. Searching online, some answers said the required permissions were not granted, others said the device was offline, and so on, but none of them solved it. After a fair amount of digging I found the actual cause and the fix.

1. Reproducing the Problem

1. After downloading and unzipping the SDK, you get the files shown below. The .so libraries and the Jar package under the SDK's libs directory are what the project needs to integrate, so I simply copied the contents of that libs directory into the project's libs directory.

(Figure: iFlytek SDK package contents)

(Figure: SDK libs directory)

(Figure: project structure)

2. But at runtime, the SpeechRecognizer object obtained was null. The initialization code and the log output are shown below.

    /**
     * Initialize the speech recognizer
     */
    private void initVoiceRecognize() {
        // Get the system default locale
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.N) {
            locale = LocaleList.getDefault().get(0);
        } else {
            locale = Locale.getDefault();
        }
        language = locale.getLanguage() + "-" + locale.getCountry();
        Log.d(TAG, "System default language: " + language);

        // Create the UI-less recognizer; with a SpeechRecognizer object you can
        // build your own UI around the callback messages.
        speechRecognizer = SpeechRecognizer.createRecognizer(MainActivity.this, initListener);
        if (speechRecognizer != null) {
            // Result format: json, xml and plain are supported; plain is the bare dictation text
            speechRecognizer.setParameter(SpeechConstant.RESULT_TYPE, "plain");
            // Beginning-of-speech endpoint: how long (ms) the user may stay silent
            // before the session times out. Range: 1000-10000
            speechRecognizer.setParameter(SpeechConstant.VAD_BOS, "4000");
            // End-of-speech endpoint: how long (ms) of silence after the user stops
            // speaking before input is considered finished and recording stops. Range: 0-10000
            speechRecognizer.setParameter(SpeechConstant.VAD_EOS, "1000");
            // Input language: zh_cn for Simplified Chinese, en_us for US English
            if (language.equalsIgnoreCase("zh-CN")) {
                speechRecognizer.setParameter(SpeechConstant.LANGUAGE, "zh_cn");
            } else {
                speechRecognizer.setParameter(SpeechConstant.LANGUAGE, "en_us");
            }

            Log.d(TAG, "Speech recognizer initialized");
        } else {
            Log.d(TAG, "Speech recognizer == null");
        }
    }

(Figure: log output)
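The method above assumes that initListener, speechRecognizer, locale and language are fields of the Activity and that the SDK has already been initialized with your APPID. For completeness, here is a minimal sketch of that surrounding setup; the APPID string and the layout name are placeholders, not values from the original post.

import android.app.Activity;
import android.os.Bundle;
import android.util.Log;

import com.iflytek.cloud.ErrorCode;
import com.iflytek.cloud.InitListener;
import com.iflytek.cloud.SpeechConstant;
import com.iflytek.cloud.SpeechRecognizer;
import com.iflytek.cloud.SpeechUtility;

import java.util.Locale;

public class MainActivity extends Activity {

    private static final String TAG = "MainActivity";

    private SpeechRecognizer speechRecognizer;
    private Locale locale;
    private String language;

    // Passed to createRecognizer(); onInit reports whether the recognition engine started
    private final InitListener initListener = new InitListener() {
        @Override
        public void onInit(int code) {
            if (code != ErrorCode.SUCCESS) {
                Log.e(TAG, "SpeechRecognizer init failed, error code: " + code);
            }
        }
    };

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main); // placeholder layout

        // Initialize the SDK once with the APPID bound to the downloaded SDK package;
        // "your_app_id" below is a placeholder.
        SpeechUtility.createUtility(this, SpeechConstant.APPID + "=your_app_id");

        initVoiceRecognize(); // the method shown above
    }
}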

2. Cause Analysis

In Android Studio, .so libraries are by default picked up from the jniLibs directory under main. If that directory does not exist, you have to create it yourself and put the .so files into it.

If you want Android Studio to use the libraries under libs instead, you have to point the build at that location manually.

Note: with either approach, the library file names must not be changed.
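Before applying either fix, a quick way to confirm the diagnosis is to try loading the native library yourself. This is only a diagnostic sketch and assumes the SDK's native library is libmsc.so (the name it ships with under libs/<abi>, which per the note above must not be changed):

        // Diagnostic only: try to load the iFlytek native library by hand.
        // If libmsc.so was not packaged into the APK, this throws UnsatisfiedLinkError,
        // the same missing-.so condition that makes createRecognizer() return null.
        try {
            System.loadLibrary("msc"); // loads libmsc.so
            Log.d(TAG, "libmsc.so found and loaded");
        } catch (UnsatisfiedLinkError e) {
            Log.e(TAG, "libmsc.so is missing from the APK", e);
        }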

3. Solutions

1. Option 1: put the .so libraries and the Jar package from the speech recognition SDK directly into the project's libs directory, and manually point the build at that location in build.gradle.

(Figure: project libs directory)

android {
    ......

    // In Android Studio, to use .so libraries under libs, the location must be specified manually
    sourceSets {
        main {
            jniLibs.srcDirs = ['libs']
        }
    }

}
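The sourceSets block above only covers the .so files. The SDK's Msc.jar also has to be on the compile classpath; most project templates already pick up jars in libs via a fileTree dependency, but if yours does not, a typical declaration (an assumption about your build setup, not part of the original fix) looks like this:

dependencies {
    // Compiles against any jars copied into the module's libs directory, including Msc.jar
    implementation fileTree(dir: 'libs', include: ['*.jar'])
}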

2. Option 2: create a jniLibs directory under main, then put the SDK's Jar package into the project's libs directory and the .so libraries into the new jniLibs directory.

(Figure: creating the jniLibs directory)

(Figure: jniLibs directory contents)

(Figure: jar in libs, .so files in jniLibs)

That's it; the problem is finally solved. To close, here is the blog post that pointed me toward the solution.

Android关于libs,jniLibs库的基本使用说明及冲突解决 (basic usage of the libs and jniLibs directories on Android, and resolving conflicts)
