Windows Speech Recognition App

Open the Control Panel (icons view) and click/tap the Speech Recognition icon, then click/tap the Start Speech Recognition link. You can then close the Speech Recognition control panel if you like.


Sample application for speech recognition in Windows Store apps. The Windows Phone 8 SDK added a speech recognition API that's easy to use and flexible; this article focuses on that API. Two speech components are commonly used in most applications: speech recognition and text-to-speech (TTS).

How to Set Up Speech Recognition in Windows 10

Windows Speech Recognition lets you control your PC with your voice alone, without needing a keyboard or mouse. Using only your voice, you can open menus, click buttons and other objects on the screen, dictate text into documents, and write and send emails.

Use speech recognition to provide input, specify an action or command, and accomplish tasks.

Important APIs: Windows.Media.SpeechRecognition

Speech recognition is made up of a speech runtime, recognition APIs for programming the runtime, ready-to-use grammars for dictation and web search, and a default system UI that helps users discover and use speech recognition features.

Configure speech recognition

To support speech recognition with your app, the user must connect and enable a microphone on their device, and accept the Microsoft Privacy Policy granting permission for your app to use it.

To automatically prompt the user with a system dialog requesting permission to access and use the microphone's audio feed (as in the Speech recognition and speech synthesis sample), just set the Microphone device capability in the App package manifest. For more detail, see App capability declarations.

If the user clicks Yes to grant access to the microphone, your app is added to the list of approved applications on the Settings -> Privacy -> Microphone page. However, as the user can choose to turn this setting off at any time, you should confirm that your app has access to the microphone before attempting to use it.

If you also want to support dictation, Cortana, or other speech recognition services (such as a predefined grammar defined in a topic constraint), you must also confirm that Online speech recognition (Settings -> Privacy -> Speech) is enabled.

This snippet shows how your app can check if a microphone is present and if it has permission to use it.
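As a minimal sketch of that check, following the pattern used in the UWP speech samples (the class and constant names here are illustrative): attempting to initialize an audio-only MediaCapture both confirms a microphone is present and triggers the consent prompt if the user has not yet granted access.

```csharp
using System;
using System.Threading.Tasks;
using Windows.Media.Capture;

public static class AudioCapturePermissions
{
    // HResult reported when no audio capture devices are available.
    private const int NoCaptureDevicesHResult = -1072845856;

    public static async Task<bool> RequestMicrophonePermissionAsync()
    {
        try
        {
            // An audio-only capture session; initializing it prompts
            // for microphone consent if needed.
            var settings = new MediaCaptureInitializationSettings
            {
                StreamingCaptureMode = StreamingCaptureMode.Audio,
                MediaCategory = MediaCategory.Speech,
            };
            var capture = new MediaCapture();
            await capture.InitializeAsync(settings);
        }
        catch (UnauthorizedAccessException)
        {
            // The user denied microphone access for this app.
            return false;
        }
        catch (Exception ex) when (ex.HResult == NoCaptureDevicesHResult)
        {
            // No microphone is present on the device.
            return false;
        }
        return true;
    }
}
```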

Recognize speech input

A constraint defines the words and phrases (vocabulary) that an app recognizes in speech input. Constraints are at the core of speech recognition and give your app greater control over the accuracy of speech recognition.

You can use the following types of constraints for recognizing speech input.

Predefined grammars

Predefined dictation and web-search grammars provide speech recognition for your app without requiring you to author a grammar. When using these grammars, speech recognition is performed by a remote web service and the results are returned to the device.

The default free-text dictation grammar can recognize most words and phrases that a user can say in a particular language, and is optimized to recognize short phrases. The predefined dictation grammar is used if you don't specify any constraints for your SpeechRecognizer object. Free-text dictation is useful when you don't want to limit the kinds of things a user can say. Typical uses include creating notes or dictating the content for a message.

The web-search grammar, like a dictation grammar, contains a large number of words and phrases that a user might say. However, it is optimized to recognize terms that people typically use when searching the web.

Note Because predefined dictation and web-search grammars can be large, and because they are online (not on the device), performance might not be as fast as with a custom grammar installed on the device.

These predefined grammars can be used to recognize up to 10 seconds of speech input and require no authoring effort on your part. However, they do require a connection to a network.
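A predefined grammar is attached as a topic constraint. The sketch below, under the assumption of a web-search scenario, uses the standard Windows.Media.SpeechRecognition types (SpeechRecognitionScenario also offers Dictation and FormFilling):

```csharp
using System;
using System.Threading.Tasks;
using Windows.Media.SpeechRecognition;

public static async Task<string> RecognizeWebSearchAsync()
{
    var recognizer = new SpeechRecognizer();

    // Attach the predefined web-search grammar; "webSearch" is an
    // arbitrary tag used to identify the constraint later.
    var webSearchGrammar = new SpeechRecognitionTopicConstraint(
        SpeechRecognitionScenario.WebSearch, "webSearch");
    recognizer.Constraints.Add(webSearchGrammar);

    // Constraints must be compiled before recognition starts.
    await recognizer.CompileConstraintsAsync();

    SpeechRecognitionResult result = await recognizer.RecognizeWithUIAsync();
    return result.Text;
}
```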

To use web-service constraints, speech input and dictation support must be enabled in Settings by turning on the 'Get to know me' option in Settings -> Privacy -> Speech, inking, and typing.

Here, we show how to test whether speech input is enabled and open the Settings -> Privacy -> Speech, inking, and typing page, if not.

First, we initialize a global variable (HResultPrivacyStatementDeclined) to the HResult value of 0x80045509. See Exception handling (C# or Visual Basic).

We then catch any standard exceptions during recognition and test if the HResult value is equal to the value of the HResultPrivacyStatementDeclined variable. If so, we display a warning and call await Windows.System.Launcher.LaunchUriAsync(new Uri("ms-settings:privacy-accounts")); to open the Settings page.
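Put together, the pattern looks roughly like this (a fragment from a page class; speechRecognizer is assumed to be an already-initialized SpeechRecognizer):

```csharp
using System;
using System.Threading.Tasks;
using Windows.Media.SpeechRecognition;

// HResult returned when the user has not accepted the speech privacy policy.
private static uint HResultPrivacyStatementDeclined = 0x80045509;

public async Task RecognizeSpeechAsync(SpeechRecognizer speechRecognizer)
{
    try
    {
        SpeechRecognitionResult result = await speechRecognizer.RecognizeWithUIAsync();
        // ... use result.Text ...
    }
    catch (Exception exception)
    {
        if ((uint)exception.HResult == HResultPrivacyStatementDeclined)
        {
            // Warn the user, then open the relevant Settings page.
            await Windows.System.Launcher.LaunchUriAsync(
                new Uri("ms-settings:privacy-accounts"));
        }
        else
        {
            throw;
        }
    }
}
```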

See SpeechRecognitionTopicConstraint.

Programmatic list constraints

Programmatic list constraints provide a lightweight approach to creating simple grammars using a list of words or phrases. A list constraint works well for recognizing short, distinct phrases. Explicitly specifying all words in a grammar also improves recognition accuracy, as the speech recognition engine must only process speech to confirm a match. The list can also be programmatically updated.

A list constraint consists of an array of strings that represents speech input that your app will accept for a recognition operation. You can create a list constraint in your app by creating a speech-recognition list-constraint object and passing an array of strings. Then, add that object to the constraints collection of the recognizer. Recognition is successful when the speech recognizer recognizes any one of the strings in the array.
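The steps above can be sketched as follows, assuming a simple yes/no prompt (the phrase list and the "yesOrNo" tag are illustrative):

```csharp
using System;
using System.Threading.Tasks;
using Windows.Media.SpeechRecognition;

public static async Task<string> RecognizeYesNoAsync()
{
    var recognizer = new SpeechRecognizer();

    // The array of strings is the entire vocabulary for this recognizer.
    string[] responses = { "Yes", "No" };
    var listConstraint = new SpeechRecognitionListConstraint(responses, "yesOrNo");
    recognizer.Constraints.Add(listConstraint);

    await recognizer.CompileConstraintsAsync();

    // Recognition succeeds when the user says any one of the listed strings.
    SpeechRecognitionResult result = await recognizer.RecognizeAsync();
    return result.Status == SpeechRecognitionResultStatus.Success ? result.Text : null;
}
```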

See SpeechRecognitionListConstraint.

SRGS grammars

A Speech Recognition Grammar Specification (SRGS) grammar is a static document that, unlike a programmatic list constraint, uses the XML format defined by SRGS Version 1.0. An SRGS grammar provides the greatest control over the speech recognition experience by letting you capture multiple semantic meanings in a single recognition.
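A sketch of loading such a grammar from the app package (the .grxml path here is hypothetical; point it at a grammar file packaged with your app):

```csharp
using System;
using System.Threading.Tasks;
using Windows.Media.SpeechRecognition;
using Windows.Storage;

public static async Task AddSrgsConstraintAsync(SpeechRecognizer recognizer)
{
    // Load the SRGS XML file from the app package.
    StorageFile grammarFile = await StorageFile.GetFileFromApplicationUriAsync(
        new Uri("ms-appx:///Assets/Colors.grxml"));

    // Wrap it in a grammar-file constraint; "colors" is an arbitrary tag.
    var grammarConstraint = new SpeechRecognitionGrammarFileConstraint(grammarFile, "colors");
    recognizer.Constraints.Add(grammarConstraint);

    await recognizer.CompileConstraintsAsync();
}
```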

See SpeechRecognitionGrammarFileConstraint.

Voice command constraints

Use a Voice Command Definition (VCD) XML file to define the commands that the user can say to initiate actions when activating your app. For more detail, see Activate a foreground app with voice commands through Cortana.

See SpeechRecognitionVoiceCommandDefinitionConstraint.

Note The type of constraint you use depends on the complexity of the recognition experience you want to create. Any one could be the best choice for a specific recognition task, and you might find uses for all types of constraints in your app. To get started with constraints, see Define custom recognition constraints.

The predefined Universal Windows app dictation grammar recognizes most words and short phrases in a language. It is activated by default when a speech recognizer object is instantiated without custom constraints.

In this example, we show how to:

  • Create a speech recognizer.
  • Compile the default Universal Windows app constraints (no grammars have been added to the speech recognizer's grammar set).
  • Start listening for speech by using the basic recognition UI and TTS feedback provided by the RecognizeWithUIAsync method. Use the RecognizeAsync method if the default UI is not required.
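A sketch of those steps, wired to a hypothetical button-click handler:

```csharp
private async void StartRecognizing_Click(object sender, RoutedEventArgs e)
{
    // Create a speech recognizer; with no custom constraints added,
    // the default dictation grammar is used.
    var speechRecognizer = new Windows.Media.SpeechRecognition.SpeechRecognizer();

    // Compile the default dictation grammar.
    await speechRecognizer.CompileConstraintsAsync();

    // Start recognition with the built-in UI and TTS read-back;
    // use RecognizeAsync() instead if the default UI is not required.
    Windows.Media.SpeechRecognition.SpeechRecognitionResult speechRecognitionResult =
        await speechRecognizer.RecognizeWithUIAsync();

    // Do something with the recognized text.
    var messageDialog = new Windows.UI.Popups.MessageDialog(
        speechRecognitionResult.Text, "Text spoken");
    await messageDialog.ShowAsync();
}
```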

Customize the recognition UI

When your app attempts speech recognition by calling SpeechRecognizer.RecognizeWithUIAsync, several screens are shown in the following order.

If you're using a constraint based on a predefined grammar (dictation or web search):

  • The Listening screen.
  • The Thinking screen.
  • The Heard you say screen or the error screen.

If you're using a constraint based on a list of words or phrases, or a constraint based on an SRGS grammar file:

  • The Listening screen.
  • The Did you say screen, if what the user said could be interpreted as more than one potential result.
  • The Heard you say screen or the error screen.

The following image shows an example of the flow between screens for a speech recognizer that uses a constraint based on an SRGS grammar file. In this example, speech recognition was successful.

The Listening screen can provide examples of words or phrases that the app can recognize. Here, we show how to use the properties of the SpeechRecognizerUIOptions class (obtained by calling the SpeechRecognizer.UIOptions property) to customize content on the Listening screen.
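A sketch of that customization (the prompt and example strings are illustrative; the property names are from SpeechRecognizerUIOptions):

```csharp
using System.Threading.Tasks;
using Windows.Media.SpeechRecognition;

public static async Task<SpeechRecognitionResult> RecognizeWithCustomUiAsync()
{
    var speechRecognizer = new SpeechRecognizer();

    // Customize the Listening screen via SpeechRecognizer.UIOptions.
    speechRecognizer.UIOptions.AudiblePrompt = "Say what you want to search for...";
    speechRecognizer.UIOptions.ExampleText = "Ex. 'weather for London'";
    speechRecognizer.UIOptions.ShowConfirmation = true;   // show the 'Heard you say' screen
    speechRecognizer.UIOptions.IsReadBackEnabled = true;  // read the result back with TTS

    await speechRecognizer.CompileConstraintsAsync();
    return await speechRecognizer.RecognizeWithUIAsync();
}
```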

Related articles

Developers

  • Speech interactions

Designers

  • Speech design guidelines

Samples

  • Speech recognition and speech synthesis sample

In this post, we will walk you through the process of disabling Speech Recognition in Windows 10 v1809. Speech Recognition is a technology for controlling computers using voice commands. With Speech Recognition you can say commands that the computer will respond to, and you can also dictate text to the computer, which eliminates the need to type the words in a text editor or word-processing software. The Speech Recognition feature lets you communicate with your computer, and you can improve the computer's ability to understand your voice, and thus its dictation accuracy, by 'training' the feature. If you haven't found its performance satisfactory, follow the instructions below to disable it.

Disable Speech Recognition in Windows 10

To disable Speech Recognition in Windows 10, open Settings > Ease of Access > Speech, and toggle on or off Turn on Speech Recognition to enable or disable this feature.

Disable Online Speech Recognition feature

Online Speech Recognition lets you talk to Cortana and apps that use cloud-based speech recognition.

1] Via Settings

To disable Online Speech Recognition in Windows 10:

  1. Click on ‘Start’ and select ‘Settings’.
  2. Navigate to the ‘Privacy’ section.
  3. Switch to ‘Speech’ and from the right pane slide the toggle to turn off the feature under ‘Online Speech Recognition’.

Speech services exist on your device as well as in the cloud, and Microsoft collects essential information from these services to improve the user experience. To stop this, also turn off the 'Getting to know you' option under 'Inking and typing personalization'.

2] Via Registry Editor

Open the 'Run' dialog box by pressing Windows+R. In the empty field of the dialog box, type 'regedit' and hit 'Enter'.


Next, navigate to the following key:

HKEY_CURRENT_USER\Software\Microsoft\Speech_OneCore\Settings\OnlineSpeechPrivacy

Check the default value of HasAccepted in the right pane of the window.

  • HasAccepted = 1 indicates that Online Speech Recognition is enabled; 0 indicates that it is disabled.

To disable the feature permanently, double-click the value and change the DWORD data from 1 to 0.

Bear in mind that even if you are running 64-bit Windows, you should still create a 32-bit DWORD value.
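The same value can be set from code. The sketch below assumes a desktop (.NET) app, since the Microsoft.Win32 registry API is not available to sandboxed UWP apps:

```csharp
using Microsoft.Win32;

class DisableOnlineSpeech
{
    static void Main()
    {
        // Writes HasAccepted = 0 as a 32-bit DWORD (even on 64-bit Windows),
        // disabling Online Speech Recognition for the current user.
        Registry.SetValue(
            @"HKEY_CURRENT_USER\Software\Microsoft\Speech_OneCore\Settings\OnlineSpeechPrivacy",
            "HasAccepted",
            0,
            RegistryValueKind.DWord);
    }
}
```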

Restart your computer to allow the changes to take effect.

Hereafter, you should not find the Windows Speech Recognition feature enabled in Windows 10.


