Displaying Google Cloud Speech-to-Text

I'm trying to use Google Cloud Speech-to-Text. So far I've got the Python transcribe_streaming_mic sample working, and it's outputting a live speech transcription to my terminal, but how do I get it to output that text live to a text box on a website, like the example on their front page?



I've looked through the documentation for some example code for this, but unless I've been blind and missed it, I can't find any example that outputs to a website.



Thank you!

      javascript python google-cloud-platform audio-streaming speech-to-text

      asked Nov 13 '18 at 21:53
      user3077842

          2 Answers

          The demo featured on Google's Speech-to-Text landing page:



          Speech-to-Text



          uses some JavaScript to handle the uploading of audio files and live recording in order to show off the API:



          <div class="l-showcase">
            <div class="text-center">
              <p class="text-title">Convert your speech to text right now</p>
              <p class="text-body">Select a language and click "Start Now" to begin recording</p>
            </div>
            <!-- DEMO -->
            <div
              id="streaming_demo_section"
              data-embed="sp-app"
              data-force-polling="true"
              data-polyfill-url="https://www.gstatic.com/external_hosted/polymer/v2/webcomponents-lite.js"
              data-url="https://www.gstatic.com/cloud-site-ux/speech/speech.min.html">
            </div>
          </div>


          Google provides some examples of how to record audio from a browser user in their Web Fundamentals document: Recording Audio from the User.



          You would have to 1) record the user's audio, 2) post the audio to the Speech-to-Text API, and 3) display the response back in the user's browser. A rough sketch of the browser side (steps 1 and 3) follows.
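          A minimal sketch of what that could look like, assuming a hypothetical /transcribe endpoint on your own backend that forwards the recorded audio to the Speech-to-Text API and returns the transcript as plain text (the endpoint name and the <textarea id="transcript"> element are illustrative, not part of Google's demo):

          <textarea id="transcript" rows="4" cols="60"></textarea>
          <button onclick="recordAndTranscribe()">Record 5 seconds</button>
          <script>
          async function recordAndTranscribe() {
            // 1) Record the user's audio with getUserMedia + MediaRecorder.
            const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
            const recorder = new MediaRecorder(stream);
            const chunks = [];
            recorder.ondataavailable = (e) => chunks.push(e.data);

            recorder.onstop = async () => {
              // 2) Post the audio to your backend, which calls the Speech-to-Text API.
              //    '/transcribe' is a made-up endpoint for illustration.
              const blob = new Blob(chunks, { type: recorder.mimeType });
              const response = await fetch('/transcribe', { method: 'POST', body: blob });
              // 3) Display the transcript back in the page.
              document.getElementById('transcript').value = await response.text();
            };

            recorder.start();
            setTimeout(() => recorder.stop(), 5000);  // stop after five seconds for this example
          }
          </script>

          Note that MediaRecorder typically produces WebM/Opus audio, so the backend would have to pass the matching encoding to the Speech-to-Text API (or transcode the audio first).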






          answered Nov 14 '18 at 4:39
          lukwam


            For the Python server part, you can follow this code. On the client side, you have to send the audio stream to the server over a WebSocket connection (a rough browser-side sketch follows the server code below).



            For testing the Python server, you can use this code



            import asyncio
            import websockets
            import json
            import threading
            from six.moves import queue
            from google.cloud import speech
            from google.cloud.speech import types

            IP = '0.0.0.0'
            PORT = 8000


            class Transcoder(object):
                """
                Converts audio chunks to text
                """
                def __init__(self, encoding, rate, language):
                    self.buff = queue.Queue()
                    self.encoding = encoding
                    self.language = language
                    self.rate = rate
                    self.closed = True
                    self.transcript = None

                def start(self):
                    """Start up streaming speech call"""
                    threading.Thread(target=self.process).start()

                def response_loop(self, responses):
                    """
                    Pick up the final result of Speech to text conversion
                    """
                    for response in responses:
                        if not response.results:
                            continue
                        result = response.results[0]
                        if not result.alternatives:
                            continue
                        transcript = result.alternatives[0].transcript
                        if result.is_final:
                            self.transcript = transcript

                def process(self):
                    """
                    Audio stream recognition and result parsing
                    """
                    #You can add speech contexts for better recognition
                    cap_speech_context = types.SpeechContext(phrases=["Add your phrases here"])
                    client = speech.SpeechClient()
                    config = types.RecognitionConfig(
                        encoding=self.encoding,
                        sample_rate_hertz=self.rate,
                        language_code=self.language,
                        speech_contexts=[cap_speech_context,],
                        model='command_and_search'
                    )
                    streaming_config = types.StreamingRecognitionConfig(
                        config=config,
                        interim_results=False,
                        single_utterance=False)
                    audio_generator = self.stream_generator()
                    requests = (types.StreamingRecognizeRequest(audio_content=content)
                                for content in audio_generator)

                    responses = client.streaming_recognize(streaming_config, requests)
                    try:
                        self.response_loop(responses)
                    except:
                        self.start()

                def stream_generator(self):
                    while not self.closed:
                        chunk = self.buff.get()
                        if chunk is None:
                            return
                        data = [chunk]
                        while True:
                            try:
                                chunk = self.buff.get(block=False)
                                if chunk is None:
                                    return
                                data.append(chunk)
                            except queue.Empty:
                                break
                        yield b''.join(data)

                def write(self, data):
                    """
                    Writes data to the buffer
                    """
                    self.buff.put(data)


            async def audio_processor(websocket, path):
                """
                Collects audio from the stream, writes it to buffer and return the output of Google speech to text
                """
                config = await websocket.recv()
                if not isinstance(config, str):
                    print("ERROR, no config")
                    return
                config = json.loads(config)
                transcoder = Transcoder(
                    encoding=config["format"],
                    rate=config["rate"],
                    language=config["language"]
                )
                transcoder.start()
                while True:
                    try:
                        data = await websocket.recv()
                    except websockets.ConnectionClosed:
                        print("Connection closed")
                        break
                    transcoder.write(data)
                    transcoder.closed = False
                    if transcoder.transcript:
                        print(transcoder.transcript)
                        await websocket.send(transcoder.transcript)
                        transcoder.transcript = None

            start_server = websockets.serve(audio_processor, IP, PORT)
            asyncio.get_event_loop().run_until_complete(start_server)
            asyncio.get_event_loop().run_forever()
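            The piece the original question asks about is the browser side: open a WebSocket to this server, send the JSON config it expects ("format", "rate", "language"), stream raw audio chunks, and write every transcript the server sends back into a text box. A minimal sketch, assuming the server above is running on localhost:8000 and the page has a <textarea id="transcript">; the element id, the LINEAR16/16000 Hz settings, and the use of ScriptProcessorNode (deprecated but still widely supported) are illustrative choices, not taken from the answer's linked code:

            <textarea id="transcript" rows="4" cols="60"></textarea>
            <script>
            const socket = new WebSocket('ws://localhost:8000/');

            socket.onopen = async () => {
              // First message: the config JSON that audio_processor() expects.
              socket.send(JSON.stringify({ format: 'LINEAR16', rate: 16000, language: 'en-US' }));

              // Capture microphone audio and convert it to 16-bit PCM chunks.
              const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
              const audioContext = new AudioContext({ sampleRate: 16000 });
              const source = audioContext.createMediaStreamSource(stream);
              const processor = audioContext.createScriptProcessor(4096, 1, 1);

              processor.onaudioprocess = (event) => {
                const samples = event.inputBuffer.getChannelData(0);  // Float32 in [-1, 1]
                const pcm = new Int16Array(samples.length);
                for (let i = 0; i < samples.length; i++) {
                  pcm[i] = Math.max(-1, Math.min(1, samples[i])) * 0x7fff;
                }
                if (socket.readyState === WebSocket.OPEN) {
                  socket.send(pcm.buffer);  // raw audio chunk for the server's Transcoder buffer
                }
              };

              source.connect(processor);
              processor.connect(audioContext.destination);
            };

            // Each message from the server is a final transcript: append it to the text box.
            socket.onmessage = (event) => {
              document.getElementById('transcript').value += event.data + ' ';
            };
            </script>

            Because the server only forwards final results (interim_results=False), the text box updates phrase by phrase rather than word by word; setting interim_results to True and relaying those partial results would make it feel more like Google's live demo.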





            answered Nov 15 '18 at 6:56, edited Nov 15 '18 at 7:01
            Dawn T Cherian


            • Thank you for your reply! I've followed your README, but when I enter ws://0.0.0.0:8000/ into my browser I get the error 'This site can’t be reached: ERR_DISALLOWED_URL_SCHEME'. Do you know how I get the webpage to display at that address after running websocket_server.py and websocket_client.py?

              – user3077842
              Nov 15 '18 at 9:47











            • You don't have to open it in the browser; the transcribed audio message will be printed in the terminal in which you are running websocket_server.py (the same message is also passed back to the client).

              – Dawn T Cherian
              Nov 15 '18 at 12:25











            • But my question said I'd already got code that does that. How do I get it to output live onto a website as the audio is being spoken?

              – user3077842
              Nov 15 '18 at 15:21