Google.Cloud.Speech.V1
Holder for reflection information generated from google/cloud/speech/v1/cloud_speech.proto
File descriptor for google/cloud/speech/v1/cloud_speech.proto
The top-level message sent by the client for the `Recognize` method.
Field number for the "config" field.
*Required* Provides information to the recognizer that specifies how to
process the request.
Field number for the "audio" field.
*Required* The audio data to be recognized.
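For reference, here is a minimal synchronous recognition sketch using this request shape. The file path and configuration values below are placeholders; adjust them to match your audio:
using Google.Cloud.Speech.V1;
...
SpeechClient client = SpeechClient.Create();
RecognitionConfig config = new RecognitionConfig
{
    Encoding = RecognitionConfig.Types.AudioEncoding.Linear16,
    SampleRateHertz = 16000,  // must match the actual audio
    LanguageCode = "en-US"
};
// "audio.raw" is a placeholder path to LINEAR16-encoded audio.
RecognitionAudio audio = RecognitionAudio.FromFile("audio.raw");
RecognizeResponse response = client.Recognize(config, audio);
foreach (SpeechRecognitionResult result in response.Results)
{
    Console.WriteLine(result.Alternatives[0].Transcript);
}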
The top-level message sent by the client for the `LongRunningRecognize`
method.
Field number for the "config" field.
*Required* Provides information to the recognizer that specifies how to
process the request.
Field number for the "audio" field.
*Required* The audio data to be recognized.
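As a sketch (the bucket and object names are placeholders), a `LongRunningRecognize` call returns an operation that can be polled to completion:
using Google.Cloud.Speech.V1;
...
SpeechClient client = SpeechClient.Create();
RecognitionConfig config = new RecognitionConfig
{
    // FLAC carries its encoding and sample rate in the file header.
    Encoding = RecognitionConfig.Types.AudioEncoding.Flac,
    LanguageCode = "en-US"
};
RecognitionAudio audio = RecognitionAudio.FromStorageUri("gs://my-bucket/my-audio.flac");
var operation = client.LongRunningRecognize(config, audio);
// Block until the server has finished processing the audio.
var completed = operation.PollUntilCompleted();
LongRunningRecognizeResponse response = completed.Result;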
The top-level message sent by the client for the `StreamingRecognize` method.
Multiple `StreamingRecognizeRequest` messages are sent. The first message
must contain a `streaming_config` message and must not contain `audio` data.
All subsequent messages must contain `audio` data and must not contain a
`streaming_config` message.
Field number for the "streaming_config" field.
Provides information to the recognizer that specifies how to process the
request. The first `StreamingRecognizeRequest` message must contain a
`streaming_config` message.
Field number for the "audio_content" field.
The audio data to be recognized. Sequential chunks of audio data are sent
in sequential `StreamingRecognizeRequest` messages. The first
`StreamingRecognizeRequest` message must not contain `audio_content` data
and all subsequent `StreamingRecognizeRequest` messages must contain
`audio_content` data. The audio bytes must be encoded as specified in
`RecognitionConfig`. Note: as with all bytes fields, protocol buffers use a
pure binary representation (not base64). See
[content limits](/speech-to-text/quotas#content).
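A sketch of that request ordering (configuration first, audio afterwards); the audio source is a placeholder and the GAX streaming call surface is assumed:
using Google.Cloud.Speech.V1;
using Google.Protobuf;
...
SpeechClient client = SpeechClient.Create();
var streamingCall = client.StreamingRecognize();
// First request: streaming_config only, no audio.
await streamingCall.WriteAsync(new StreamingRecognizeRequest
{
    StreamingConfig = new StreamingRecognitionConfig
    {
        Config = new RecognitionConfig
        {
            Encoding = RecognitionConfig.Types.AudioEncoding.Linear16,
            SampleRateHertz = 16000,
            LanguageCode = "en-US"
        },
        InterimResults = true
    }
});
// Subsequent requests: audio_content only (raw bytes, not base64).
byte[] chunk = GetNextAudioChunk();  // placeholder for your audio source
await streamingCall.WriteAsync(new StreamingRecognizeRequest
{
    AudioContent = ByteString.CopyFrom(chunk)
});
await streamingCall.WriteCompleteAsync();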
Enum of possible cases for the "streaming_request" oneof.
Provides information to the recognizer that specifies how to process the
request.
Field number for the "config" field.
*Required* Provides information to the recognizer that specifies how to
process the request.
Field number for the "single_utterance" field.
*Optional* If `false` or omitted, the recognizer will perform continuous
recognition (continuing to wait for and process audio even if the user
pauses speaking) until the client closes the input stream (gRPC API) or
until the maximum time limit has been reached. May return multiple
`StreamingRecognitionResult`s with the `is_final` flag set to `true`.
If `true`, the recognizer will detect a single spoken utterance. When it
detects that the user has paused or stopped speaking, it will return an
`END_OF_SINGLE_UTTERANCE` event and cease recognition. It will return no
more than one `StreamingRecognitionResult` with the `is_final` flag set to
`true`.
Field number for the "interim_results" field.
*Optional* If `true`, interim results (tentative hypotheses) may be
returned as they become available (these interim results are indicated with
the `is_final=false` flag).
If `false` or omitted, only `is_final=true` result(s) are returned.
Provides information to the recognizer that specifies how to process the
request.
Field number for the "encoding" field.
Encoding of audio data sent in all `RecognitionAudio` messages.
This field is optional for `FLAC` and `WAV` audio files and required
for all other audio formats. For details, see [AudioEncoding][google.cloud.speech.v1.RecognitionConfig.AudioEncoding].
Field number for the "sample_rate_hertz" field.
Sample rate in Hertz of the audio data sent in all
`RecognitionAudio` messages. Valid values are: 8000-48000.
16000 is optimal. For best results, set the sampling rate of the audio
source to 16000 Hz. If that's not possible, use the native sample rate of
the audio source (instead of re-sampling).
This field is optional for `FLAC` and `WAV` audio files and required
for all other audio formats. For details, see [AudioEncoding][google.cloud.speech.v1.RecognitionConfig.AudioEncoding].
Field number for the "audio_channel_count" field.
*Optional* The number of channels in the input audio data.
ONLY set this for MULTI-CHANNEL recognition.
Valid values for LINEAR16 and FLAC are `1`-`8`.
Valid values for OGG_OPUS are `1`-`254`.
Valid value for MULAW, AMR, AMR_WB and SPEEX_WITH_HEADER_BYTE is only `1`.
If `0` or omitted, defaults to one channel (mono).
Note: We only recognize the first channel by default.
To perform independent recognition on each channel, set
`enable_separate_recognition_per_channel` to `true`.
Field number for the "enable_separate_recognition_per_channel" field.
This needs to be set to `true` explicitly and `audio_channel_count` > 1
to get each channel recognized separately. The recognition result will
contain a `channel_tag` field to state which channel that result belongs
to. If this is not true, we will only recognize the first channel. The
request is billed cumulatively for all channels recognized:
`audio_channel_count` multiplied by the length of the audio.
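A configuration sketch for separate per-channel recognition of stereo LINEAR16 audio (the values are illustrative):
RecognitionConfig config = new RecognitionConfig
{
    Encoding = RecognitionConfig.Types.AudioEncoding.Linear16,
    SampleRateHertz = 16000,
    LanguageCode = "en-US",
    AudioChannelCount = 2,
    EnableSeparateRecognitionPerChannel = true
};
// Each result then reports its ChannelTag (1..AudioChannelCount),
// and the request is billed for both channels.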
Field number for the "language_code" field.
*Required* The language of the supplied audio as a
[BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag.
Example: "en-US".
See [Language Support](/speech-to-text/docs/languages)
for a list of the currently supported language codes.
Field number for the "max_alternatives" field.
*Optional* Maximum number of recognition hypotheses to be returned.
Specifically, the maximum number of `SpeechRecognitionAlternative` messages
within each `SpeechRecognitionResult`.
The server may return fewer than `max_alternatives`.
Valid values are `0`-`30`. A value of `0` or `1` will return a maximum of
one. If omitted, a maximum of one will be returned.
Field number for the "profanity_filter" field.
*Optional* If set to `true`, the server will attempt to filter out
profanities, replacing all but the initial character in each filtered word
with asterisks, e.g. "f***". If set to `false` or omitted, profanities
won't be filtered out.
Field number for the "speech_contexts" field.
*Optional* array of [SpeechContext][google.cloud.speech.v1.SpeechContext].
A means to provide context to assist the speech recognition. For more
information, see [Phrase Hints](/speech-to-text/docs/basics#phrase-hints).
Field number for the "enable_word_time_offsets" field.
*Optional* If `true`, the top result includes a list of words and
the start and end time offsets (timestamps) for those words. If
`false`, no word-level time offset information is returned. The default is
`false`.
Field number for the "enable_automatic_punctuation" field.
*Optional* If `true`, adds punctuation to recognition result hypotheses.
This feature is only available in select languages. Setting this for
requests in other languages has no effect at all.
The default `false` value does not add punctuation to result hypotheses.
Note: This is currently offered as an experimental service, complimentary
to all users. In the future this may be exclusively available as a
premium feature.
Field number for the "model" field.
*Optional* Which model to select for the given request. Select the model
best suited to your domain to get best results. If a model is not
explicitly specified, then we auto-select a model based on the parameters
in the RecognitionConfig.
<table>
<tr>
<td><b>Model</b></td>
<td><b>Description</b></td>
</tr>
<tr>
<td><code>command_and_search</code></td>
<td>Best for short queries such as voice commands or voice search.</td>
</tr>
<tr>
<td><code>phone_call</code></td>
<td>Best for audio that originated from a phone call (typically
recorded at an 8 kHz sampling rate).</td>
</tr>
<tr>
<td><code>video</code></td>
<td>Best for audio that originated from video or includes multiple
speakers. Ideally the audio is recorded at a 16 kHz or greater
sampling rate. This is a premium model that costs more than the
standard rate.</td>
</tr>
<tr>
<td><code>default</code></td>
<td>Best for audio that is not one of the specific audio models.
For example, long-form audio. Ideally the audio is high-fidelity,
recorded at a 16 kHz or greater sampling rate.</td>
</tr>
</table>
Field number for the "use_enhanced" field.
*Optional* Set to true to use an enhanced model for speech recognition.
If `use_enhanced` is set to true and the `model` field is not set, then
an appropriate enhanced model is chosen if:
1. project is eligible for requesting enhanced models
2. an enhanced model exists for the audio
If `use_enhanced` is true and an enhanced version of the specified model
does not exist, then the speech is recognized using the standard version
of the specified model.
Enhanced speech models require that you opt-in to data logging using
instructions in the
[documentation](/speech-to-text/docs/enable-data-logging). If you set
`use_enhanced` to true and you have not enabled audio logging, then you
will receive an error.
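For example, a configuration selecting the enhanced phone_call model might look like this (the 8000 Hz rate matches typical telephony audio):
RecognitionConfig config = new RecognitionConfig
{
    Encoding = RecognitionConfig.Types.AudioEncoding.Mulaw,
    SampleRateHertz = 8000,
    LanguageCode = "en-US",
    Model = "phone_call",
    UseEnhanced = true  // requires the data-logging opt-in described above
};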
Container for nested types declared in the RecognitionConfig message type.
The encoding of the audio data sent in the request.
All encodings support only 1 channel (mono) audio.
For best results, the audio source should be captured and transmitted using
a lossless encoding (`FLAC` or `LINEAR16`). The accuracy of the speech
recognition can be reduced if lossy codecs are used to capture or transmit
audio, particularly if background noise is present. Lossy codecs include
`MULAW`, `AMR`, `AMR_WB`, `OGG_OPUS`, and `SPEEX_WITH_HEADER_BYTE`.
The `FLAC` and `WAV` audio file formats include a header that describes the
included audio content. You can request recognition for `WAV` files that
contain either `LINEAR16` or `MULAW` encoded audio.
If you send `FLAC` or `WAV` audio file format in
your request, you do not need to specify an `AudioEncoding`; the audio
encoding format is determined from the file header. If you specify
an `AudioEncoding` when you send `FLAC` or `WAV` audio, the
encoding configuration must match the encoding described in the audio
header; otherwise the request returns an
[google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT] error code.
Not specified.
Uncompressed 16-bit signed little-endian samples (Linear PCM).
`FLAC` (Free Lossless Audio Codec) is the recommended encoding because it is
lossless (recognition is not compromised) and requires only about half the
bandwidth of `LINEAR16`. `FLAC` stream encoding supports 16-bit and 24-bit
samples; however, not all fields in `STREAMINFO` are supported.
8-bit samples that compand 14-bit audio samples using G.711 PCMU/mu-law.
Adaptive Multi-Rate Narrowband codec. `sample_rate_hertz` must be 8000.
Adaptive Multi-Rate Wideband codec. `sample_rate_hertz` must be 16000.
Opus encoded audio frames in Ogg container
([OggOpus](https://wiki.xiph.org/OggOpus)).
`sample_rate_hertz` must be one of 8000, 12000, 16000, 24000, or 48000.
Although the use of lossy encodings is not recommended, if a very low
bitrate encoding is required, `OGG_OPUS` is highly preferred over
Speex encoding. The [Speex](https://speex.org/) encoding supported by
Cloud Speech API has a header byte in each block, as in MIME type
`audio/x-speex-with-header-byte`.
It is a variant of the RTP Speex encoding defined in
[RFC 5574](https://tools.ietf.org/html/rfc5574).
The stream is a sequence of blocks, one block per RTP packet. Each block
starts with a byte containing the length of the block, in bytes, followed
by one or more frames of Speex data, padded to an integral number of
bytes (octets) as specified in RFC 5574. In other words, each RTP header
is replaced with a single byte containing the block length. Only Speex
wideband is supported. `sample_rate_hertz` must be 16000.
Provides "hints" to the speech recognizer to favor specific words and phrases
in the results.
Field number for the "phrases" field.
*Optional* A list of strings containing words and phrases "hints" so that
the speech recognition is more likely to recognize them. This can be used
to improve the accuracy for specific words and phrases, for example, if
specific commands are typically spoken by the user. This can also be used
to add additional words to the vocabulary of the recognizer. See
[usage limits](/speech-to-text/quotas#content).
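A sketch of adding phrase hints (the phrases themselves are placeholders):
RecognitionConfig config = new RecognitionConfig
{
    Encoding = RecognitionConfig.Types.AudioEncoding.Linear16,
    SampleRateHertz = 16000,
    LanguageCode = "en-US",
    SpeechContexts = { new SpeechContext { Phrases = { "weather forecast", "Santa Clara" } } }
};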
Contains audio data in the encoding specified in the `RecognitionConfig`.
Either `content` or `uri` must be supplied. Supplying both or neither
returns [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]. See
[content limits](/speech-to-text/quotas#content).
Field number for the "content" field.
The audio data bytes encoded as specified in
`RecognitionConfig`. Note: as with all bytes fields, protobuffers use a
pure binary representation, whereas JSON representations use base64.
Field number for the "uri" field.
URI that points to a file that contains audio data bytes as specified in
`RecognitionConfig`. The file must not be compressed (for example, gzip).
Currently, only Google Cloud Storage URIs are
supported, which must be specified in the following format:
`gs://bucket_name/object_name` (other URI formats return
[google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). For more information, see
[Request URIs](https://cloud.google.com/storage/docs/reference-uris).
Enum of possible cases for the "audio_source" oneof.
Constructs a RecognitionAudio with a Uri property referring to a Google Cloud
Storage URI.
A Google Cloud Storage URI, of the form gs://bucket-name/object-name. Must not be null.
The newly created RecognitionAudio.
Asynchronously constructs a RecognitionAudio by downloading data from the given URI.
The URI to fetch. Must not be null.
The HttpClient to use to fetch the audio, or
null to use a default client.
A task representing the asynchronous operation. The result will be the newly created RecognitionAudio.
Asynchronously constructs a RecognitionAudio by downloading data from the given URI.
The URI to fetch. Must not be null.
The HttpClient to use to fetch the audio, or
null to use a default client.
A task representing the asynchronous operation. The result will be the newly created RecognitionAudio.
Constructs a RecognitionAudio by downloading data from the given URI.
The URI to fetch. Must not be null.
The HttpClient to use to fetch the audio, or
null to use a default client.
The newly created RecognitionAudio.
Constructs a RecognitionAudio by downloading data from the given URI.
The URI to fetch. Must not be null.
The HttpClient to use to fetch the audio, or
null to use a default client.
The newly created RecognitionAudio.
Constructs a RecognitionAudio by loading data from the given file path.
The file path to load RecognitionAudio data from. Must not be null.
The newly created RecognitionAudio.
Asynchronously constructs a RecognitionAudio by loading data from the given file path.
The file path to load RecognitionAudio data from. Must not be null.
The newly created RecognitionAudio.
Constructs a RecognitionAudio by loading data from the given stream.
The stream to load RecognitionAudio data from. Must not be null.
The newly created RecognitionAudio.
Asynchronously constructs a RecognitionAudio by loading data from the given stream.
The stream to load RecognitionAudio data from. Must not be null.
The newly created RecognitionAudio.
Constructs a RecognitionAudio from the given byte array.
This method copies the data from the byte array; modifications to the array
after this method returns will not be reflected in the RecognitionAudio.
The bytes representing the raw RecognitionAudio data.
The newly created RecognitionAudio.
Constructs a RecognitionAudio from a section of the given byte array.
This method copies the data from the byte array; modifications to the array
after this method returns will not be reflected in the RecognitionAudio.
The bytes representing the raw RecognitionAudio data.
The offset into the byte array of the start of the data to include in the RecognitionAudio.
The number of bytes to include in the RecognitionAudio.
The newly created RecognitionAudio.
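A few of these factory methods in use; all paths and URIs below are placeholders:
using Google.Cloud.Speech.V1;
using System.IO;
...
// Inline audio content, loaded from a local file:
RecognitionAudio fromFile = RecognitionAudio.FromFile("speech.flac");
// Inline audio content from a byte array already in memory:
byte[] bytes = File.ReadAllBytes("speech.flac");
RecognitionAudio fromBytes = RecognitionAudio.FromBytes(bytes);
// A Google Cloud Storage reference; no audio bytes are sent in the request:
RecognitionAudio fromStorage = RecognitionAudio.FromStorageUri("gs://my-bucket/speech.flac");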
The only message returned to the client by the `Recognize` method. It
contains the result as zero or more sequential `SpeechRecognitionResult`
messages.
Field number for the "results" field.
Output only. Sequential list of transcription results corresponding to
sequential portions of audio.
The only message returned to the client by the `LongRunningRecognize` method.
It contains the result as zero or more sequential `SpeechRecognitionResult`
messages. It is included in the `result.response` field of the `Operation`
returned by the `GetOperation` call of the `google::longrunning::Operations`
service.
Field number for the "results" field.
Output only. Sequential list of transcription results corresponding to
sequential portions of audio.
Describes the progress of a long-running `LongRunningRecognize` call. It is
included in the `metadata` field of the `Operation` returned by the
`GetOperation` call of the `google::longrunning::Operations` service.
Field number for the "progress_percent" field.
Approximate percentage of audio processed thus far. Guaranteed to be 100
when the audio is fully processed and the results are available.
Field number for the "start_time" field.
Time when the request was received.
Field number for the "last_update_time" field.
Time of the most recent processing update.
`StreamingRecognizeResponse` is the only message returned to the client by
`StreamingRecognize`. A series of zero or more `StreamingRecognizeResponse`
messages are streamed back to the client. If there is no recognizable
audio, and `single_utterance` is set to false, then no messages are streamed
back to the client.
Here's an example of a series of `StreamingRecognizeResponse`s that might
be returned while processing audio:
1. results { alternatives { transcript: "tube" } stability: 0.01 }
2. results { alternatives { transcript: "to be a" } stability: 0.01 }
3. results { alternatives { transcript: "to be" } stability: 0.9 }
results { alternatives { transcript: " or not to be" } stability: 0.01 }
4. results { alternatives { transcript: "to be or not to be"
confidence: 0.92 }
alternatives { transcript: "to bee or not to bee" }
is_final: true }
5. results { alternatives { transcript: " that's" } stability: 0.01 }
6. results { alternatives { transcript: " that is" } stability: 0.9 }
results { alternatives { transcript: " the question" } stability: 0.01 }
7. results { alternatives { transcript: " that is the question"
confidence: 0.98 }
alternatives { transcript: " that was the question" }
is_final: true }
Notes:
- Only two of the above responses, #4 and #7, contain final results; they are
indicated by `is_final: true`. Concatenating these together generates the
full transcript: "to be or not to be that is the question".
- The others contain interim `results`. #3 and #6 contain two interim
`results`: the first portion has a high stability and is less likely to
change; the second portion has a low stability and is very likely to
change. A UI designer might choose to show only high stability `results`.
- The specific `stability` and `confidence` values shown above are only for
illustrative purposes. Actual values may vary.
- In each response, only one of these fields will be set:
`error`,
`speech_event_type`, or
one or more (repeated) `results`.
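A consumption sketch that keeps only final results, assuming `streamingCall` is an open stream as created in the earlier streaming example and that the GAX response-stream surface is available:
while (await streamingCall.ResponseStream.MoveNext(default(CancellationToken)))
{
    StreamingRecognizeResponse response = streamingCall.ResponseStream.Current;
    foreach (StreamingRecognitionResult result in response.Results)
    {
        if (result.IsFinal)
        {
            // A settled portion of the transcript; safe to append.
            Console.WriteLine(result.Alternatives[0].Transcript);
        }
        // Interim results (is_final=false) also arrive here when
        // interim_results=true; result.Stability indicates volatility.
    }
}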
Field number for the "error" field.
Output only. If set, returns a [google.rpc.Status][google.rpc.Status] message that
specifies the error for the operation.
Field number for the "results" field.
Output only. This repeated list contains zero or more results that
correspond to consecutive portions of the audio currently being processed.
It contains zero or one `is_final=true` result (the newly settled portion),
followed by zero or more `is_final=false` results (the interim results).
Field number for the "speech_event_type" field.
Output only. Indicates the type of speech event.
Container for nested types declared in the StreamingRecognizeResponse message type.
Indicates the type of speech event.
No speech event specified.
This event indicates that the server has detected the end of the user's
speech utterance and expects no additional speech. Therefore, the server
will not process additional audio (although it may subsequently return
additional results). The client should stop sending additional audio
data, half-close the gRPC connection, and wait for any additional results
until the server closes the gRPC connection. This event is only sent if
`single_utterance` was set to `true`, and is not used otherwise.
A streaming speech recognition result corresponding to a portion of the audio
that is currently being processed.
Field number for the "alternatives" field.
Output only. May contain one or more recognition hypotheses (up to the
maximum specified in `max_alternatives`).
These alternatives are ordered in terms of accuracy, with the top (first)
alternative being the most probable, as ranked by the recognizer.
Field number for the "is_final" field.
Output only. If `false`, this `StreamingRecognitionResult` represents an
interim result that may change. If `true`, this is the final time the
speech service will return this particular `StreamingRecognitionResult`;
the recognizer will not return any further hypotheses for this portion of
the transcript and corresponding audio.
Field number for the "stability" field.
Output only. An estimate of the likelihood that the recognizer will not
change its guess about this interim result. Values range from 0.0
(completely unstable) to 1.0 (completely stable).
This field is only provided for interim results (`is_final=false`).
The default of 0.0 is a sentinel value indicating `stability` was not set.
Field number for the "channel_tag" field.
For multi-channel audio, this is the channel number corresponding to the
recognized result for the audio from that channel.
For audio_channel_count = N, its output values can range from `1` to `N`.
A speech recognition result corresponding to a portion of the audio.
Field number for the "alternatives" field.
Output only. May contain one or more recognition hypotheses (up to the
maximum specified in `max_alternatives`).
These alternatives are ordered in terms of accuracy, with the top (first)
alternative being the most probable, as ranked by the recognizer.
Field number for the "channel_tag" field.
For multi-channel audio, this is the channel number corresponding to the
recognized result for the audio from that channel.
For audio_channel_count = N, its output values can range from `1` to `N`.
Alternative hypotheses (a.k.a. n-best list).
Field number for the "transcript" field.
Output only. Transcript text representing the words that the user spoke.
Field number for the "confidence" field.
Output only. The confidence estimate between 0.0 and 1.0. A higher number
indicates an estimated greater likelihood that the recognized words are
correct. This field is set only for the top alternative of a non-streaming
result or of a streaming result where `is_final=true`.
This field is not guaranteed to be accurate and users should not rely on it
to be always provided.
The default of 0.0 is a sentinel value indicating `confidence` was not set.
Field number for the "words" field.
Output only. A list of word-specific information for each recognized word.
Note: When `enable_speaker_diarization` is true, you will see all the words
from the beginning of the audio.
Word-specific information for recognized words.
Field number for the "start_time" field.
Output only. Time offset relative to the beginning of the audio,
and corresponding to the start of the spoken word.
This field is only set if `enable_word_time_offsets=true` and only
in the top hypothesis.
This is an experimental feature and the accuracy of the time offset can
vary.
Field number for the "end_time" field.
Output only. Time offset relative to the beginning of the audio,
and corresponding to the end of the spoken word.
This field is only set if `enable_word_time_offsets=true` and only
in the top hypothesis.
This is an experimental feature and the accuracy of the time offset can
vary.
Field number for the "word" field.
Output only. The word corresponding to this set of information.
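A sketch of reading word timings from the top alternative, assuming `enable_word_time_offsets=true` was set on the request and `response` is a RecognizeResponse as in the earlier examples:
foreach (SpeechRecognitionResult result in response.Results)
{
    foreach (WordInfo word in result.Alternatives[0].Words)
    {
        // StartTime and EndTime are protobuf Durations relative to the
        // beginning of the audio.
        Console.WriteLine($"{word.Word}: {word.StartTime} -> {word.EndTime}");
    }
}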
Service that implements the Google Cloud Speech API.
Service descriptor
Base class for server-side implementations of Speech
Performs synchronous speech recognition: receive results after all audio
has been sent and processed.
The request received from the client.
The context of the server-side call handler being invoked.
The response to send back to the client (wrapped by a task).
Performs asynchronous speech recognition: receive results via the
google.longrunning.Operations interface. Returns either an
`Operation.error` or an `Operation.response` which contains
a `LongRunningRecognizeResponse` message.
The request received from the client.
The context of the server-side call handler being invoked.
The response to send back to the client (wrapped by a task).
Performs bidirectional streaming speech recognition: receive results while
sending audio. This method is only available via the gRPC API (not REST).
Used for reading requests from the client.
Used for sending responses back to the client.
The context of the server-side call handler being invoked.
A task indicating completion of the handler.
Client for Speech
Creates a new client for Speech
The channel to use to make remote calls.
Creates a new client for Speech that uses a custom CallInvoker.
The callInvoker to use to make remote calls.
Protected parameterless constructor to allow creation of test doubles.
Protected constructor to allow creation of configured clients.
The client configuration.
Performs synchronous speech recognition: receive results after all audio
has been sent and processed.
The request to send to the server.
The initial metadata to send with the call. This parameter is optional.
An optional deadline for the call. The call will be cancelled if deadline is hit.
An optional token for canceling the call.
The response received from the server.
Performs synchronous speech recognition: receive results after all audio
has been sent and processed.
The request to send to the server.
The options for the call.
The response received from the server.
Performs synchronous speech recognition: receive results after all audio
has been sent and processed.
The request to send to the server.
The initial metadata to send with the call. This parameter is optional.
An optional deadline for the call. The call will be cancelled if deadline is hit.
An optional token for canceling the call.
The call object.
Performs synchronous speech recognition: receive results after all audio
has been sent and processed.
The request to send to the server.
The options for the call.
The call object.
Performs asynchronous speech recognition: receive results via the
google.longrunning.Operations interface. Returns either an
`Operation.error` or an `Operation.response` which contains
a `LongRunningRecognizeResponse` message.
The request to send to the server.
The initial metadata to send with the call. This parameter is optional.
An optional deadline for the call. The call will be cancelled if deadline is hit.
An optional token for canceling the call.
The response received from the server.
Performs asynchronous speech recognition: receive results via the
google.longrunning.Operations interface. Returns either an
`Operation.error` or an `Operation.response` which contains
a `LongRunningRecognizeResponse` message.
The request to send to the server.
The options for the call.
The response received from the server.
Performs asynchronous speech recognition: receive results via the
google.longrunning.Operations interface. Returns either an
`Operation.error` or an `Operation.response` which contains
a `LongRunningRecognizeResponse` message.
The request to send to the server.
The initial metadata to send with the call. This parameter is optional.
An optional deadline for the call. The call will be cancelled if deadline is hit.
An optional token for canceling the call.
The call object.
Performs asynchronous speech recognition: receive results via the
google.longrunning.Operations interface. Returns either an
`Operation.error` or an `Operation.response` which contains
a `LongRunningRecognizeResponse` message.
The request to send to the server.
The options for the call.
The call object.
Performs bidirectional streaming speech recognition: receive results while
sending audio. This method is only available via the gRPC API (not REST).
The initial metadata to send with the call. This parameter is optional.
An optional deadline for the call. The call will be cancelled if deadline is hit.
An optional token for canceling the call.
The call object.
Performs bidirectional streaming speech recognition: receive results while
sending audio. This method is only available via the gRPC API (not REST).
The options for the call.
The call object.
Creates a new instance of client from given ClientBaseConfiguration.
Creates a new instance of Operations.OperationsClient using the same call invoker as this client.
A new Operations client for the same target as this client.
Creates a service definition that can be registered with a server.
An object implementing the server-side handling logic.
A helper class forming a hierarchy of supported language codes, via nested classes.
All language codes are eventually represented as string constants. This is simply
a code-convenient form of the table at https://cloud.google.com/speech/docs/languages.
It is regenerated regularly, but not guaranteed to be complete at any moment in time;
if the language you wish to use is present in the table but not covered here, please use
the listed language code as a hard-coded string until this class catches up.
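For example, instead of hard-coding the tag:
RecognitionConfig config = new RecognitionConfig
{
    // Equivalent to LanguageCode = "fr-CA".
    LanguageCode = LanguageCodes.French.Canada
};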
Language codes for Afrikaans.
Language code for Afrikaans (South Africa)
Language codes for Amharic.
Language code for Amharic (Ethiopia)
Language codes for Arabic.
Language code for Arabic (Algeria)
Language code for Arabic (Bahrain)
Language code for Arabic (Egypt)
Language code for Arabic (Iraq)
Language code for Arabic (Israel)
Language code for Arabic (Jordan)
Language code for Arabic (Kuwait)
Language code for Arabic (Lebanon)
Language code for Arabic (Morocco)
Language code for Arabic (Oman)
Language code for Arabic (Qatar)
Language code for Arabic (Saudi Arabia)
Language code for Arabic (State of Palestine)
Language code for Arabic (Tunisia)
Language code for Arabic (United Arab Emirates)
Language codes for Armenian.
Language code for Armenian (Armenia)
Language codes for Azerbaijani.
Language code for Azerbaijani (Azerbaijan)
Language codes for Basque.
Language code for Basque (Spain)
Language codes for Bengali.
Language code for Bengali (Bangladesh)
Language code for Bengali (India)
Language codes for Bulgarian.
Language code for Bulgarian (Bulgaria)
Language codes for Catalan.
Language code for Catalan (Spain)
Language codes for Chinese, Cantonese.
Language code for Chinese, Cantonese (Traditional, Hong Kong)
Language codes for Chinese, Mandarin.
Language code for Chinese, Mandarin (Simplified, China)
Language code for Chinese, Mandarin (Simplified, Hong Kong)
Language code for Chinese, Mandarin (Traditional, Taiwan)
Language codes for Croatian.
Language code for Croatian (Croatia)
Language codes for Czech.
Language code for Czech (Czech Republic)
Language codes for Danish.
Language code for Danish (Denmark)
Language codes for Dutch.
Language code for Dutch (Netherlands)
Language codes for English.
Language code for English (Australia)
Language code for English (Canada)
Language code for English (Ghana)
Language code for English (India)
Language code for English (Ireland)
Language code for English (Kenya)
Language code for English (New Zealand)
Language code for English (Nigeria)
Language code for English (Philippines)
Language code for English (South Africa)
Language code for English (Tanzania)
Language code for English (United Kingdom)
Language code for English (United States)
Language codes for Filipino.
Language code for Filipino (Philippines)
Language codes for Finnish.
Language code for Finnish (Finland)
Language codes for French.
Language code for French (Canada)
Language code for French (France)
Language codes for Galician.
Language code for Galician (Spain)
Language codes for Georgian.
Language code for Georgian (Georgia)
Language codes for German.
Language code for German (Germany)
Language codes for Greek.
Language code for Greek (Greece)
Language codes for Gujarati.
Language code for Gujarati (India)
Language codes for Hebrew.
Language code for Hebrew (Israel)
Language codes for Hindi.
Language code for Hindi (India)
Language codes for Hungarian.
Language code for Hungarian (Hungary)
Language codes for Icelandic.
Language code for Icelandic (Iceland)
Language codes for Indonesian.
Language code for Indonesian (Indonesia)
Language codes for Italian.
Language code for Italian (Italy)
Language codes for Japanese.
Language code for Japanese (Japan)
Language codes for Javanese.
Language code for Javanese (Indonesia)
Language codes for Kannada.
Language code for Kannada (India)
Language codes for Khmer.
Language code for Khmer (Cambodia)
Language codes for Korean.
Language code for Korean (South Korea)
Language codes for Lao.
Language code for Lao (Laos)
Language codes for Latvian.
Language code for Latvian (Latvia)
Language codes for Lithuanian.
Language code for Lithuanian (Lithuania)
Language codes for Malay.
Language code for Malay (Malaysia)
Language codes for Malayalam.
Language code for Malayalam (India)
Language codes for Marathi.
Language code for Marathi (India)
Language codes for Nepali.
Language code for Nepali (Nepal)
Language codes for Norwegian Bokmål.
Language code for Norwegian Bokmål (Norway)
Language codes for Persian.
Language code for Persian (Iran)
Language codes for Polish.
Language code for Polish (Poland)
Language codes for Portuguese.
Language code for Portuguese (Brazil)
Language code for Portuguese (Portugal)
Language codes for Romanian.
Language code for Romanian (Romania)
Language codes for Russian.
Language code for Russian (Russia)
Language codes for Serbian.
Language code for Serbian (Serbia)
Language codes for Sinhala.
Language code for Sinhala (Sri Lanka)
Language codes for Slovak.
Language code for Slovak (Slovakia)
Language codes for Slovenian.
Language code for Slovenian (Slovenia)
Language codes for Spanish.
Language code for Spanish (Argentina)
Language code for Spanish (Bolivia)
Language code for Spanish (Chile)
Language code for Spanish (Colombia)
Language code for Spanish (Costa Rica)
Language code for Spanish (Dominican Republic)
Language code for Spanish (Ecuador)
Language code for Spanish (El Salvador)
Language code for Spanish (Guatemala)
Language code for Spanish (Honduras)
Language code for Spanish (Mexico)
Language code for Spanish (Nicaragua)
Language code for Spanish (Panama)
Language code for Spanish (Paraguay)
Language code for Spanish (Peru)
Language code for Spanish (Puerto Rico)
Language code for Spanish (Spain)
Language code for Spanish (United States)
Language code for Spanish (Uruguay)
Language code for Spanish (Venezuela)
Language codes for Sundanese.
Language code for Sundanese (Indonesia)
Language codes for Swahili.
Language code for Swahili (Kenya)
Language code for Swahili (Tanzania)
Language codes for Swedish.
Language code for Swedish (Sweden)
Language codes for Tamil.
Language code for Tamil (India)
Language code for Tamil (Malaysia)
Language code for Tamil (Singapore)
Language code for Tamil (Sri Lanka)
Language codes for Telugu.
Language code for Telugu (India)
Language codes for Thai.
Language code for Thai (Thailand)
Language codes for Turkish.
Language code for Turkish (Turkey)
Language codes for Ukrainian.
Language code for Ukrainian (Ukraine)
Language codes for Urdu.
Language code for Urdu (India)
Language code for Urdu (Pakistan)
Language codes for Vietnamese.
Language code for Vietnamese (Vietnam)
Language codes for Zulu.
Language code for Zulu (South Africa)
Settings for a SpeechClient.
Get a new instance of the default SpeechSettings.
A new instance of the default SpeechSettings.
Constructs a new SpeechSettings object with default settings.
The filter specifying which RPC status codes are eligible for retry
for "Idempotent" RPC methods.
The eligible RPC status codes for retry for "Idempotent" RPC methods are:
- DeadlineExceeded
- Unavailable
The filter specifying which RPC status codes are eligible for retry
for "NonIdempotent" RPC methods.
There are no RPC status codes eligible for retry for "NonIdempotent" RPC methods.
"Default" retry backoff for RPC methods.
The "Default" retry backoff for RPC methods.
The "Default" retry backoff for RPC methods is defined as:
- Initial delay: 100 milliseconds
- Maximum delay: 60000 milliseconds
- Delay multiplier: 1.3
"Default" timeout backoff for RPC methods.
The "Default" timeout backoff for RPC methods.
The "Default" timeout backoff for RPC methods is defined as:
- Initial timeout: 1000000 milliseconds
- Timeout multiplier: 1.0
- Maximum timeout: 1000000 milliseconds
CallSettings for synchronous and asynchronous calls to
SpeechClient.Recognize and SpeechClient.RecognizeAsync.
The default SpeechClient.Recognize and
SpeechClient.RecognizeAsync CallSettings are:
- Initial retry delay: 100 milliseconds
- Retry delay multiplier: 1.3
- Retry maximum delay: 60000 milliseconds
- Initial timeout: 1000000 milliseconds
- Timeout multiplier: 1.0
- Timeout maximum delay: 1000000 milliseconds
Retry will be attempted on the following response status codes:
- DeadlineExceeded
- Unavailable
Default RPC expiration is 5000000 milliseconds.
CallSettings for synchronous and asynchronous calls to
SpeechClient.LongRunningRecognize and SpeechClient.LongRunningRecognizeAsync.
The default SpeechClient.LongRunningRecognize and
SpeechClient.LongRunningRecognizeAsync CallSettings are:
- Initial retry delay: 100 milliseconds
- Retry delay multiplier: 1.3
- Retry maximum delay: 60000 milliseconds
- Initial timeout: 1000000 milliseconds
- Timeout multiplier: 1.0
- Timeout maximum delay: 1000000 milliseconds
Retry will be attempted on the following response status codes:
- No status codes
Default RPC expiration is 5000000 milliseconds.
Long Running Operation settings for calls to SpeechClient.LongRunningRecognize.
Uses default PollSettings of:
- Initial delay: 20000 milliseconds
- Delay multiplier: 1.5
- Maximum delay: 45000 milliseconds
- Total timeout: 86400000 milliseconds
CallSettings for calls to SpeechClient.StreamingRecognize.
Default RPC expiration is 5000000 milliseconds.
BidirectionalStreamingSettings for calls to
SpeechClient.StreamingRecognize.
The default local send queue size is 100.
Creates a deep clone of this object, with all the same property values.
A deep clone of this object.
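A sketch of supplying customized settings when creating a client; exactly which CallSettings you replace, and how you construct them, depends on your GAX version:
SpeechSettings settings = SpeechSettings.GetDefault().Clone();
// Replace per-RPC CallSettings as needed, e.g.:
// settings.RecognizeSettings = ...;  // supply your own CallSettings here
SpeechClient client = SpeechClient.Create(settings: settings);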
Speech client wrapper, for convenient use.
The default endpoint for the Speech service, which is a host of "speech.googleapis.com" and a port of 443.
The default Speech scopes.
The default Speech scopes are:
- "https://www.googleapis.com/auth/cloud-platform"
Asynchronously creates a SpeechClient, applying defaults for all unspecified settings,
and creating a channel connecting to the given endpoint with application default credentials where
necessary. See the example for how to use custom credentials.
This sample shows how to create a client using default credentials:
using Google.Cloud.Speech.V1;
...
// When running on Google Cloud Platform this will use the project Compute Credential.
// Or set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of a JSON
// credential file to use that credential.
SpeechClient client = await SpeechClient.CreateAsync();
This sample shows how to create a client using credentials loaded from a JSON file:
using Google.Cloud.Speech.V1;
using Google.Apis.Auth.OAuth2;
using Grpc.Auth;
using Grpc.Core;
...
GoogleCredential cred = GoogleCredential.FromFile("/path/to/credentials.json");
Channel channel = new Channel(
SpeechClient.DefaultEndpoint.Host, SpeechClient.DefaultEndpoint.Port, cred.ToChannelCredentials());
SpeechClient client = SpeechClient.Create(channel);
...
// Shutdown the channel when it is no longer required.
await channel.ShutdownAsync();
Optional ServiceEndpoint.
Optional SpeechSettings.
The task representing the created SpeechClient.
Synchronously creates a SpeechClient, applying defaults for all unspecified settings,
and creating a channel connecting to the given endpoint with application default credentials where
necessary. See the example for how to use custom credentials.
This sample shows how to create a client using default credentials:
using Google.Cloud.Speech.V1;
...
// When running on Google Cloud Platform this will use the project Compute Credential.
// Or set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of a JSON
// credential file to use that credential.
SpeechClient client = SpeechClient.Create();
This sample shows how to create a client using credentials loaded from a JSON file:
using Google.Cloud.Speech.V1;
using Google.Apis.Auth.OAuth2;
using Grpc.Auth;
using Grpc.Core;
...
GoogleCredential cred = GoogleCredential.FromFile("/path/to/credentials.json");
Channel channel = new Channel(
SpeechClient.DefaultEndpoint.Host, SpeechClient.DefaultEndpoint.Port, cred.ToChannelCredentials());
SpeechClient client = SpeechClient.Create(channel);
...
// Shutdown the channel when it is no longer required.
channel.ShutdownAsync().Wait();
Optional ServiceEndpoint.
Optional SpeechSettings.
The created SpeechClient.
Creates a SpeechClient which uses the specified channel for remote operations.
The Channel for remote operations. Must not be null.
Optional SpeechSettings.
The created SpeechClient.
Creates a SpeechClient which uses the specified call invoker for remote operations.
The CallInvoker for remote operations. Must not be null.
Optional SpeechSettings.
The created SpeechClient.
Shuts down any channels automatically created by SpeechClient.Create
and SpeechClient.CreateAsync. Channels which weren't automatically
created are not affected.
After calling this method, further calls to SpeechClient.Create
and SpeechClient.CreateAsync will create new channels, which could
in turn be shut down by another call to this method.
A task representing the asynchronous shutdown operation.
The underlying gRPC Speech client.
Performs synchronous speech recognition: receive results after all audio
has been sent and processed.
*Required* Provides information to the recognizer that specifies how to
process the request.
*Required* The audio data to be recognized.
If not null, applies overrides to this RPC call.
A Task containing the RPC response.
Performs synchronous speech recognition: receive results after all audio
has been sent and processed.
*Required* Provides information to the recognizer that specifies how to
process the request.
*Required* The audio data to be recognized.
A CancellationToken to use for this RPC.
A Task containing the RPC response.
Performs synchronous speech recognition: receive results after all audio
has been sent and processed.
*Required* Provides information to the recognizer that specifies how to
process the request.
*Required* The audio data to be recognized.
If not null, applies overrides to this RPC call.
The RPC response.
Performs synchronous speech recognition: receive results after all audio
has been sent and processed.
The request object containing all of the parameters for the API call.
If not null, applies overrides to this RPC call.
A Task containing the RPC response.
Performs synchronous speech recognition: receive results after all audio
has been sent and processed.
The request object containing all of the parameters for the API call.
A CancellationToken to use for this RPC.
A Task containing the RPC response.
Performs synchronous speech recognition: receive results after all audio
has been sent and processed.
The request object containing all of the parameters for the API call.
If not null, applies overrides to this RPC call.
The RPC response.
Performs asynchronous speech recognition: receive results via the
google.longrunning.Operations interface. Returns either an
`Operation.error` or an `Operation.response` which contains
a `LongRunningRecognizeResponse` message.
*Required* Provides information to the recognizer that specifies how to
process the request.
*Required* The audio data to be recognized.
If not null, applies overrides to this RPC call.
A Task containing the RPC response.
Performs asynchronous speech recognition: receive results via the
google.longrunning.Operations interface. Returns either an
`Operation.error` or an `Operation.response` which contains
a `LongRunningRecognizeResponse` message.
*Required* Provides information to the recognizer that specifies how to
process the request.
*Required* The audio data to be recognized.
A CancellationToken to use for this RPC.
A Task containing the RPC response.
Performs asynchronous speech recognition: receive results via the
google.longrunning.Operations interface. Returns either an
`Operation.error` or an `Operation.response` which contains
a `LongRunningRecognizeResponse` message.
*Required* Provides information to the recognizer that specifies how to
process the request.
*Required* The audio data to be recognized.
If not null, applies overrides to this RPC call.
The RPC response.
Performs asynchronous speech recognition: receive results via the
google.longrunning.Operations interface. Returns either an
`Operation.error` or an `Operation.response` which contains
a `LongRunningRecognizeResponse` message.
The request object containing all of the parameters for the API call.
If not null, applies overrides to this RPC call.
A Task containing the RPC response.
Asynchronously poll an operation once, using an operationName from a previous invocation of LongRunningRecognizeAsync.
The name of a previously invoked operation. Must not be null or empty.
If not null, applies overrides to this RPC call.
A task representing the result of polling the operation.
Performs asynchronous speech recognition: receive results via the
google.longrunning.Operations interface. Returns either an
`Operation.error` or an `Operation.response` which contains
a `LongRunningRecognizeResponse` message.
The request object containing all of the parameters for the API call.
If not null, applies overrides to this RPC call.
The RPC response.
The long-running operations client for LongRunningRecognize.
Poll an operation once, using an operationName from a previous invocation of LongRunningRecognize.
The name of a previously invoked operation. Must not be null or empty.
If not null, applies overrides to this RPC call.
The result of polling the operation.
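A sketch of the detached polling pattern this supports, assuming `config` and `audio` as in the earlier examples; the operation name could be persisted and polled later, possibly from a different process:
var operation = client.LongRunningRecognize(config, audio);
string operationName = operation.Name;  // persist this somewhere
// Later, poll once without blocking until completion:
var polled = client.PollOnceLongRunningRecognize(operationName);
if (polled.IsCompleted)
{
    LongRunningRecognizeResponse response = polled.Result;
}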
Performs bidirectional streaming speech recognition: receive results while
sending audio. This method is only available via the gRPC API (not REST).
If not null, applies overrides to this RPC call.
If not null, applies streaming overrides to this RPC call.
The client-server stream.
Bidirectional streaming methods for StreamingRecognize.
Speech client wrapper implementation, for convenient use.
Constructs a client wrapper for the Speech service, with the specified gRPC client and settings.
The underlying gRPC client.
The base SpeechSettings used within this client.
The underlying gRPC Speech client.
Performs synchronous speech recognition: receive results after all audio
has been sent and processed.
The request object containing all of the parameters for the API call.
If not null, applies overrides to this RPC call.
A Task containing the RPC response.
Performs synchronous speech recognition: receive results after all audio
has been sent and processed.
The request object containing all of the parameters for the API call.
If not null, applies overrides to this RPC call.
The RPC response.
Performs asynchronous speech recognition: receive results via the
google.longrunning.Operations interface. Returns either an
`Operation.error` or an `Operation.response` which contains
a `LongRunningRecognizeResponse` message.
The request object containing all of the parameters for the API call.
If not null, applies overrides to this RPC call.
A Task containing the RPC response.
Performs asynchronous speech recognition: receive results via the
google.longrunning.Operations interface. Returns either an
`Operation.error` or an `Operation.response` which contains
a `LongRunningRecognizeResponse` message.
The request object containing all of the parameters for the API call.
If not null, applies overrides to this RPC call.
The RPC response.
The long-running operations client for LongRunningRecognize.
Performs bidirectional streaming speech recognition: receive results while
sending audio. This method is only available via the gRPC API (not REST).
If not null, applies overrides to this RPC call.
If not null, applies streaming overrides to this RPC call.
The client-server stream.
Construct the bidirectional streaming method for StreamingRecognize.
The service containing this streaming method.
The underlying gRPC duplex streaming call.
The BidirectionalStreamingApiCall instance associated with this streaming call.