Appendix: Apple's official documentation
Speech Recognition
iOS 10 introduces a new API that supports continuous speech recognition and helps you build apps that can recognize speech and transcribe it into text. Using the APIs in the Speech framework (Speech.framework), you can perform speech transcription of both real-time and recorded audio. For example, you can get a speech recognizer and start simple speech recognition using code like this:
let recognizer = SFSpeechRecognizer()
let request = SFSpeechURLRecognitionRequest(url: audioFileURL)
recognizer?.recognitionTask(with: request, resultHandler: { (result, error) in
    print(result?.bestTranscription.formattedString)
})
As with accessing other types of protected data, such as Calendar and Photos data, performing speech recognition requires the user’s permission (for more information about accessing protected data classes, see Security and Privacy Enhancements). In the case of speech recognition, permission is required because data is transmitted and temporarily stored on Apple’s servers to increase the accuracy of recognition. To request the user’s permission, you must add the NSSpeechRecognitionUsageDescription key to your app’s Info.plist file and provide content that describes your app’s usage.
When you adopt speech recognition in your app, be sure to indicate to users when their speech is being recognized so that they can avoid making sensitive utterances at that time.
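Beyond the Info.plist key, the app also has to request authorization at runtime before starting any recognition task, via SFSpeechRecognizer.requestAuthorization. A minimal Swift sketch, assuming an iOS app context (the handling of each status here is only illustrative):

import Foundation
import Speech

// Ask the user for speech-recognition permission before creating any task.
SFSpeechRecognizer.requestAuthorization { status in
    // The callback may arrive on a background queue; hop to main before touching UI.
    DispatchQueue.main.async {
        switch status {
        case .authorized:
            print("Speech recognition authorized")
        default:
            print("Speech recognition denied, restricted, or not yet determined")
        }
    }
}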
While the iPhone 7 / 7 Plus is flying off the shelves, iOS 10 also adds a number of new features, such as a speech recognition API. So how do we use it? Apple's official explanation is quoted above (reading the official documentation is a good habit, even when it is hard to follow).
When trying this API out in Xcode 8, you must first configure the project's Info.plist file, because this release tightens Apple's privacy requirements considerably. We need to declare the following two keys:
NSSpeechRecognitionUsageDescription
NSMicrophoneUsageDescription
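In the Info.plist source (opened as source code), the two entries look roughly like this; the description strings below are placeholders and should describe your app's actual usage:

<key>NSSpeechRecognitionUsageDescription</key>
<string>Your speech is sent to Apple's servers to be transcribed into text.</string>
<key>NSMicrophoneUsageDescription</key>
<string>The microphone captures audio for live transcription.</string>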
Speech offers continuous speech recognition, recognition of both audio files and audio streams, and dictation in many languages (at this point I am breaking a sweat on iFLYTEK's behalf). File-based recognition is shown in the core code below, followed by a streaming sketch.
To try out this powerful native speech recognition, we need to import the framework in the project: #import <Speech/Speech.h>
Core code:
#import <Speech/Speech.h>

// 1. Create a locale identifier (Mandarin Chinese)
NSLocale *local = [[NSLocale alloc] initWithLocaleIdentifier:@"zh_CN"];
// 2. Create a speech recognizer for that locale
SFSpeechRecognizer *sf = [[SFSpeechRecognizer alloc] initWithLocale:local];
// 3. Load the audio resource from the main bundle and get its URL
NSURL *url = [[NSBundle mainBundle] URLForResource:@"斑馬.mp3" withExtension:nil];
// 4. Pass the URL of the recording to a file-based recognition request
SFSpeechURLRecognitionRequest *res = [[SFSpeechURLRecognitionRequest alloc] initWithURL:url];
// 5. Send the request
[sf recognitionTaskWithRequest:res resultHandler:^(SFSpeechRecognitionResult * _Nullable result, NSError * _Nullable error) {
    if (error != nil) {
        NSLog(@"Speech recognition failed: %@", error);
    } else {
        // Parsed successfully
        NSLog(@"---%@", result.bestTranscription.formattedString);
    }
}];
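The core code above recognizes a recorded file. For the audio-stream case mentioned earlier, microphone buffers are fed into an SFSpeechAudioBufferRecognitionRequest via AVAudioEngine. Below is a minimal Swift sketch, assuming authorization has already been granted; the function name is illustrative, and a real app would also configure AVAudioSession for recording and keep the engine, request, and task alive as properties:

import AVFoundation
import Speech

func startLiveTranscription() throws {
    let audioEngine = AVAudioEngine()
    let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "zh_CN"))
    let request = SFSpeechAudioBufferRecognitionRequest()
    request.shouldReportPartialResults = true   // deliver partial transcriptions as they arrive

    // Feed microphone buffers into the request.
    // (On the Xcode 8 SDK, inputNode is optional and needs unwrapping.)
    let inputNode = audioEngine.inputNode
    let format = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        request.append(buffer)
    }

    audioEngine.prepare()
    try audioEngine.start()

    // Keep a reference to the returned task if you need to cancel recognition later.
    _ = recognizer?.recognitionTask(with: request) { result, error in
        if let result = result {
            print(result.bestTranscription.formattedString)
            if result.isFinal {
                audioEngine.stop()
                inputNode.removeTap(onBus: 0)
            }
        } else if let error = error {
            print("Recognition failed: \(error)")
            audioEngine.stop()
            inputNode.removeTap(onBus: 0)
        }
    }
}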
This post will be kept up to date; a demo will follow later.