What is the difference between System.Speech.Recognition and Microsoft.Speech.Recognition?

2023-09-02 10:15:21 Author: 忘东忘西一直忘不了你¢

There are two similar namespaces and assemblies for speech recognition in .NET. I’m trying to understand the differences and when it is appropriate to use one or the other.

There is System.Speech.Recognition from the assembly System.Speech (in System.Speech.dll). System.Speech.dll is a core DLL in the .NET Framework class library, versions 3.0 and later.

There is also Microsoft.Speech.Recognition from the assembly Microsoft.Speech (in microsoft.speech.dll). Microsoft.Speech.dll is part of the UCMA 2.0 SDK.
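To make the parallel concrete, here is a small C# sketch of my own (not from the original post) that lists the recognizers each stack can see on the machine. It assumes references to both System.Speech.dll (ships with .NET Framework 3.0+) and microsoft.speech.dll (installed by the UCMA 2.0 Speech SDK), and that Microsoft.Speech mirrors the System.Speech type names (SpeechRecognitionEngine, RecognizerInfo), which in my experience it largely does.

// Sketch only: enumerate desktop vs. server recognizers installed on this machine.
using System;
using Desktop = System.Speech.Recognition;
using Server = Microsoft.Speech.Recognition;

class ListRecognizers
{
    static void Main()
    {
        Console.WriteLine("Desktop (System.Speech) recognizers:");
        foreach (var info in Desktop.SpeechRecognitionEngine.InstalledRecognizers())
            Console.WriteLine("  " + info.Culture + " - " + info.Description);

        Console.WriteLine("Server (Microsoft.Speech) recognizers:");
        foreach (var info in Server.SpeechRecognitionEngine.InstalledRecognizers())
            Console.WriteLine("  " + info.Culture + " - " + info.Description);
    }
}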

I find the docs confusing and I have the following questions:

System.Speech.Recognition says it is for "The Windows Desktop Speech Technology". Does this mean it cannot be used on a server OS, or cannot be used for high-scale applications?

The UCMA 2.0 Speech SDK (http://msdn.microsoft.com/en-us/library/dd266409%28v=office.13%29.aspx) says that it requires Microsoft Office Communications Server 2007 R2 as a prerequisite. However, I've been told at conferences and meetings that if I do not require OCS features like presence and workflow, I can use the UCMA 2.0 Speech API without OCS. Is this true?

If I’m building a simple recognition app for a server application (say I wanted to automatically transcribe voice mails) and I don’t need features of OCS, what are the differences between the two APIs?
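For reference, here is a rough sketch of that voicemail-transcription idea using the desktop API (System.Speech), since free-form dictation via DictationGrammar lives there. The file path is a made-up placeholder and error handling is omitted; the loop-until-null pattern is a simplification, as a silent segment could also end it early.

using System;
using System.Speech.Recognition;

class VoicemailTranscriber
{
    static void Main()
    {
        using (var recognizer = new SpeechRecognitionEngine())
        {
            // Free-form dictation; a desktop-engine feature (see the answer below).
            recognizer.LoadGrammar(new DictationGrammar());

            // Feed a recorded voicemail instead of the microphone.
            // Hypothetical path for illustration only.
            recognizer.SetInputToWaveFile(@"C:\voicemails\message001.wav");

            // Keep recognizing until Recognize() returns null (end of file).
            RecognitionResult result;
            while ((result = recognizer.Recognize()) != null)
            {
                Console.WriteLine(result.Text);
            }
        }
    }
}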

Accepted Answer

The short answer is that Microsoft.Speech.Recognition uses the Server version of SAPI, while System.Speech.Recognition uses the Desktop version of SAPI.

The APIs are mostly the same, but the underlying engines are different. Typically, the Server engine is designed to accept telephone-quality audio for command & control applications; the Desktop engine is designed to accept higher-quality audio for both command & control and dictation applications.
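As an illustration of how similar the two APIs are, the command & control sketch below should build against either namespace by swapping the using directive; the main exception I'm aware of is DictationGrammar, which exists only in the desktop (System.Speech) API. This is my own example under those assumptions, not code from the original answer.

using System;
using System.Globalization;
using Microsoft.Speech.Recognition;   // or: using System.Speech.Recognition;

class CommandAndControl
{
    static void Main()
    {
        // A tiny command & control grammar built from a fixed word list.
        var commands = new Choices("play", "pause", "stop");
        var grammar = new Grammar(new GrammarBuilder(commands));

        // Requires a matching recognizer/language pack to be installed.
        using (var recognizer = new SpeechRecognitionEngine(new CultureInfo("en-US")))
        {
            recognizer.LoadGrammar(grammar);
            recognizer.SetInputToDefaultAudioDevice();

            recognizer.SpeechRecognized += (sender, e) =>
                Console.WriteLine("Heard: " + e.Result.Text);

            recognizer.RecognizeAsync(RecognizeMode.Multiple);
            Console.WriteLine("Listening... press Enter to quit.");
            Console.ReadLine();
        }
    }
}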

You can use System.Speech.Recognition on a server OS, but it's not designed to scale nearly as well as Microsoft.Speech.Recognition.

The differences are that the Server engine won't need training, and will work with lower-quality audio, but will have a lower recognition quality than the Desktop engine.

 