Open-Source Speech Recognition + TTS for iPhone (Part 1): Welcome To OpenEars

OpenEars is an open-source speech recognition + TTS library that several iPhone apps already use. A recent update improved the efficiency of the code and added Xcode 4 support. The Politepix site has an OpenEars tutorial, reposted below; I'll translate it when I find the time.

-------------------------------------------------------------------------------------------------------------------------------------------------

Welcome to OpenEars!

OpenEars is an open-source iOS library for implementing round-trip English-language speech recognition and text-to-speech on the iPhone and iPad, which uses the CMU Pocketsphinx, CMU Flite, and MITLM libraries.

The current version of OpenEars is 0.91.

This version has a number of changes under the hood and two API changes for existing API calls, so if you want to stick with the previous version 0.9.02 for now, you can still download it here, and it contains all of the old support documents as PDFs as well. I'll support 0.9.02 until it's clear that 0.91 is as stable as 0.9.02; please just identify which version you are using when seeking support.

OpenEars 0.91 can:

  • Listen continuously for speech on a background thread, while suspending or resuming speech processing on demand, all while using less than 8% CPU on average on a first-generation iPhone (decoding speech, text-to-speech, updating the UI and other intermittent functions use more CPU),
  • Use any of 8 voices for speech and switch between them on the fly,
  • Know whether headphones are plugged in and continue voice recognition during text-to-speech only when they are plugged in,
  • Support bluetooth audio devices (very experimental in this version),
  • Dispatch information to any part of your app about the results of speech recognition and speech, or changes in the state of the audio session (such as an incoming phone call or headphones being plugged in),
  • Deliver level metering for both speech input and speech output so you can design visual feedback for both states,
  • Support JSGF grammars,
  • Dynamically generate new ARPA language models in-app based on input from an NSArray of NSStrings,
  • Switch between ARPA language models on the fly,
  • Be easily interacted with via standard and simple Objective-C methods,
  • Control all audio functions with text-to-speech and speech recognition in memory instead of writing audio files to disk and then reading them,
  • Drive speech recognition with a low-latency Audio Unit driver for highest responsiveness,
  • Be installed in a Cocoa-standard fashion using static library projects that, after initial configuration, allow you to target or re-target any SDKs or architectures that are supported by the libraries (verified as going back to SDK 3.1.2 at least) by making changes to your main project only.

In addition to its various new features and faster recognition/text-to-speech responsiveness, OpenEars now has improved recognition accuracy.

Before using OpenEars, please note that its new low-latency Audio Unit driver is not compatible with the Simulator, so a fallback Audio Queue driver is provided for the Simulator as a convenience so you can debug your recognition logic. This means that recognition is better on the device, and that I'd appreciate it if bug reports are limited to issues which affect the device.

To use OpenEars:

1. Begin with “Getting Started With OpenEars”, which will explain how to set up the libraries your app will make use of.

2. Then read “Configuring your app for OpenEars”, which will explain how to make the OpenEars libraries available to your app projects, and lastly,

3. You’ll be ready for “Using OpenEars In Your App”, which will explain the objects and methods that will be available to your app and how to use them.

If those steps give you trouble, you can check out the Support and FAQ page.
