
ac-voice-ai/rasa-audiocodes 6

AudioCodes Voice.AI Gateway integration for Rasa

bivas6/cognitive-services-speech-sdk-js 0

Microsoft Azure Cognitive Services Speech SDK for JavaScript

bivas6/fast-xml-parser 0

Validate XML, Parse XML to JS/JSON and vise versa, or parse XML to Nimn rapidly without C/C++ based libraries and no callback

bivas6/HanochLevin 0

Social networks in Hanoch Levin Plays

bivas6/NLP-assignment2 0

assignment 2 for NLP course

bivas6/NLP-assignment3 0

part 2 of assignment 3 for NLP course

bivas6/pyshark 0

Python wrapper for tshark, allowing python packet parsing using wireshark dissectors

bivas6/rasa 0

💬 Open source machine learning framework to automate text- and voice-based conversations: NLU, dialogue management, connect to Slack, Facebook, and more - Create chatbots and voice assistants

bivas6/rasa-audiocodes 0

AudioCodes Voice.AI Gateway integration for Rasa

issue comment googleapis/nodejs-dialogflow

StreamingDetectIntent: dialogflow stop listening after long silence

@munkhuushmgl Why did you close it?

bivas6

comment created time in 2 days

issue comment microsoft/cognitive-services-speech-sdk-js

CPU is exploding on load

Hi, we tried the latest version, with @orgads' change that disables perMessageDeflate, but CPU is still high. processed_no_deflate.txt

bivas6

comment created time in 14 days

Pull request review comment ac-voice-ai/rasa-audiocodes

Add support for websocket channel (VAIG Async Bot API).

 def blueprint(
     ) -> Blueprint:
         ac_webhook = Blueprint("ac_webhook", __name__)
 
+        @ac_webhook.websocket('/conversation/<cid>/websocket')
+        async def newClientConnection(request, ws, cid: Text):

new_client_connection

talaviad

comment created time in a month


Pull request review comment ac-voice-ai/rasa-audiocodes

Add support for websocket channel (VAIG Async Bot API).

 class AudiocodesOutput(OutputChannel):
     def name(cls) -> Text:
         return CHANNEL_NAME
 
-    def __init__(self) -> None:
+    def __init__(self, ws, cid) -> None:
+        self.ws = ws
+        self.cid = cid
         self.messages = []
 
+    async def post_message(self, message) -> None:
+        self.messages.append(message)
+        if self.ws is not None:
+            message_to_user = {"conversation": self.cid, "activities": message}

should be an array?

talaviad

comment created time in a month

Pull request review comment ac-voice-ai/rasa-audiocodes

Add support for websocket channel (VAIG Async Bot API).

 class AudiocodesOutput(OutputChannel):
     def name(cls) -> Text:
         return CHANNEL_NAME
 
-    def __init__(self) -> None:
+    def __init__(self, ws, cid) -> None:
+        self.ws = ws
+        self.cid = cid
         self.messages = []
 
+    async def post_message(self, message) -> None:
+        self.messages.append(message)
+        if self.ws is not None:
+            message_to_user = {"conversation": self.cid, "activities": message}
+            await self.ws.send(json.dumps(message_to_user))
+
     async def send_text_message(
         self, recipient_id: Text, text: Text, **kwargs: Any
     ) -> None:
-        self.messages.append(
-            {
-                "type": "message",
-                "text": text,
-                "id": str(uuid.uuid4()),
-                "timestamp": datetime.datetime.utcnow().isoformat("T")[:-3] + "Z",
-            }
-        )
+        new_message =   {
+                            "type": "message",
+                            "text": text,
+                            "id": str(uuid.uuid4()),

Add the id and timestamp in post_message, then you can remove them from the other functions
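
A minimal sketch of that suggestion, assuming post_message stays the single place every outgoing activity passes through (the activities value is also wrapped in a list here, following the earlier review question); the shape below is only illustrative, not the PR's final code:

import datetime
import json
import uuid

# Sketch of a method of AudiocodesOutput, not a standalone function.
async def post_message(self, message) -> None:
    # Stamp id/timestamp once here so send_text_message and the other
    # send_* helpers no longer need to build them individually.
    message.setdefault("id", str(uuid.uuid4()))
    message.setdefault(
        "timestamp", datetime.datetime.utcnow().isoformat("T")[:-3] + "Z"
    )
    self.messages.append(message)
    if self.ws is not None:
        message_to_user = {"conversation": self.cid, "activities": [message]}
        await self.ws.send(json.dumps(message_to_user))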

talaviad

comment created time in a month


issue comment microsoft/cognitive-services-speech-sdk-js

CPU is exploding on load

Hi @glharper, all of the stream functions are on the built-in Node.js Readable stream objects; the import is just for TypeScript notation.

Attached are 2 profile files: processed-azure-ms.txt is a profile using cognitive-services-speech-sdk, and processed-azure.txt is the same test using this package.

bivas6

comment created time in a month

push event bivas6/rasa-audiocodes

Yaakov Bivas

commit sha 877dd6d8ad2dc77908bc6e4634f1018381cc2bfd

Fix intent parameters to be a valid json

view details

push time in a month

issue comment microsoft/cognitive-services-speech-sdk-js

CPU is exploding on load

Hi @glharper, thanks. It is better now, but CPU is still high.

Compare the above code with 20 requests per second to this code:

// pull in the required packages.
const speechService = require('ac-ms-bing-speech-service');
var fs = require("fs");
var Throttle = require("throttle");
const { default: getToken } = require('./getToken');
const stream = require("stream");

var filename = "<<filename>>.wav"; // 16000 Hz, Mono

function stop(reco) {
  reco.stop();
}

function startReco(recognizer) {
  return new Promise((res, rej) =>
    recognizer.start().then(() => {
      recognizer.on('speech.phrase', (obj) => {
        switch (obj.RecognitionStatus) {
          case 'Success': {
            const [mainResult] = obj.NBest;
            if (mainResult.Confidence > 0) {
              console.log('recognized')
              break;
            }
          }
          case 'InitialSilenceTimeout':
            break;
          case 'EndOfDictation':
            break;
          default:
            console.log(`STT speech.phrase: RecognitionStatus=${obj.RecognitionStatus}`);
        }
      });
      recognizer.on('speech.hypothesis', (obj) => {
        console.log('recognizing')
      });
      recognizer.on('turn.end', (err) => {
        if (err && err.Error)
          console.log(`turn.end: ${err.Error}`);
      });
      recognizer.once('close', () => stop(recognizer));

      res();
    }
    ).catch((e) => { console.log(JSON.stringify(e)); rej(e) }))
}

const load = (fileName, rate) => {
  let seq = 0;
  setInterval(async () => {
    const mySeq = seq++;
    const start = Date.now();
    try {
      const request = {
        language: 'en-US',
        serviceUrl: 'wss://<<REGION>>.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?format=detailed&language=en-US',
        accessToken: await getToken() // returns valid access token
      }
      console.log(JSON.stringify(request));
      const recognizer = new speechService(request);
      await startReco(recognizer);
      const audioStream = fs.createReadStream(fileName).pipe(new Throttle(32000));
      const passThrough = new stream.PassThrough();
      audioStream.pipe(passThrough);
      recognizer.sendStream(passThrough)
        .catch((err) => console.log('terminated', err.toString()));
    } catch (err) {
      console.error(`${mySeq}: error ${err.message}`);
    }
  }, 1000 / rate);
};

load(filename, 20);
bivas6

comment created time in a month

issue closedmicrosoft/cognitive-services-speech-sdk-js

Unable to contact server. StatusCode: 1006, undefined Reason: Unexpected server response: 200

Hi,

I'm trying to use the sdk as described here, but I'm getting this weird error: Unable to contact server. StatusCode: 1006, undefined Reason: Unexpected server response: 200

I followed this guide to connect existing Bot with existing Cognitive Service.

code:

const sdk = require('microsoft-cognitiveservices-speech-sdk');
const fs = require('fs');
const Throttle = require('throttle');

async function main() {
  console.log('start')
  const fileName = 'test-message-16k.wav'
  const audioStream = fs.createReadStream(fileName);
  const pushStream = sdk.AudioInputStream.createPushStream();
  const handleData = (buf) => {
    pushStream.write(buf.slice());
    console.log('data')
  };

  audioStream.pipe(new Throttle(16384))
    .on('data', handleData)
    .once('close', () => {
      console.log('close')
      pushStream.close();
      audioStream.removeListener('data', handleData);
    });
  const audioConfig = sdk.AudioConfig.fromStreamInput(pushStream)
  const botConfig = sdk.BotFrameworkConfig.fromSubscription('subscription-key', 'region')
  const reco = new sdk.DialogServiceConnector(botConfig, audioConfig)
  reco.recognizing = (_s, event) => {
    console.log('hypothesis', {
      text: event.result.text
    });
  };

  reco.recognized = (_s, event) => {
    ... <some code here>
    }
  };

  reco.canceled = (_s, event) => {
    ... <some code here>
  };

  // Signals that a new session has started with the speech service
  reco.sessionStarted = (_s, event) => {
    ... <some code here>
  };

  // Signals the end of a session with the speech service.
  reco.sessionStopped = (_s, event) => {
    ... <some code here>
  };

  // Signals that the speech service has started to detect speech.
  reco.speechStartDetected = (_s, event) => {
    ... <some code here>
  };

  // Signals that the speech service has detected that speech has stopped.
  reco.speechEndDetected = (_s, event) => {
    ... <some code here>
  };
  console.log('connect')
  reco.connect()
  reco.listenOnceAsync(undefined,
    (e) => {
      console.error('error')
      console.error(e)
    }
  );
  console.log('end')
}
main().catch((r) => console.error(r))

The output is something like:

data
data
...
reco
connect
end
data
data
....
error
Unable to contact server. StatusCode: 1006, undefined Reason:  Unexpected server response: 200
data
data

These 'subscription-key' and 'region' values are used in other parts of our code to connect to the Azure Speech SDK and Direct Line, and there it works as expected.

Please advise.

Thanks

cc: @orgads

closed time in a month

bivas6

issue comment microsoft/cognitive-services-speech-sdk-js

Unable to contact server. StatusCode: 1006, undefined Reason: Unexpected server response: 200

Hi @glharper, we have changed the Cognitive Services account in the Direct Line Speech channel, and now it works.

Thanks for the help!

bivas6

comment created time in a month

issue comment microsoft/cognitive-services-speech-sdk-js

CPU is exploding on load

import * as sdk from 'microsoft-cognitiveservices-speech-sdk';
import { Readable } from 'stream';
import getToken from './getToken';
import * as fs from 'fs'
import * as Throttle from 'throttle'
import * as path from 'path';

async function sendStream(stream: Readable) {
  const pushStream = sdk.AudioInputStream.createPushStream();
  const handleData = (buf: Buffer) => pushStream.write(buf.slice().buffer);

  stream
    .on('data', handleData)
    .once('close', () => {
      pushStream.close();
      stream.removeListener('data', handleData);
    });

  const audioConfig = sdk.AudioConfig.fromStreamInput(pushStream);
  const authToken = await getToken(); // getToken creates and updates valid authorization tokens
  const url = new URL('wss://westus2.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1');
  const speechConfig = sdk.SpeechConfig.fromEndpoint(url, '');
  speechConfig.authorizationToken = authToken;
  const recognizer = new sdk.SpeechRecognizer(speechConfig, audioConfig);
  _registerEvents(recognizer);
  recognizer.startContinuousRecognitionAsync(
    undefined,
    (err) => {
      console.error('err - ' + err);
      recognizer.close();
    }
  );
}

function _registerEvents(reco: sdk.SpeechRecognizer) {
  reco.recognizing = (_s: sdk.Recognizer, event: sdk.SpeechRecognitionEventArgs) => {
  };

  reco.recognized = (_s: sdk.Recognizer, event: sdk.SpeechRecognitionEventArgs) => {
  };

  reco.canceled = (_s: sdk.Recognizer, event: sdk.SpeechRecognitionCanceledEventArgs) => {
  };

  // Signals the end of a session with the speech service.
  reco.sessionStopped = (_s: sdk.Recognizer, event: sdk.SessionEventArgs) => {
  }
}


const load = (fileName: string, rate = 20) => {
  let seq = 0;
  setInterval(() => {
    const mySeq = seq++;
    const start = Date.now();
    try {
      const onEnd = () => console.log(`${mySeq}: done [${Date.now() - start}]`);
      const audioStream = fs.createReadStream(fileName).pipe(new Throttle(32000), { end: false });
      sendStream(audioStream);
      setTimeout(() => {
        onEnd();
      }, 5000);
    } catch (err) {
      console.error(`${mySeq}: ${err}`);
    }
  }, 1000 / rate);
};

const filename = path.resolve(__dirname, '../test-message-16k.wav');
load(filename, 7);
bivas6

comment created time in a month

issue comment microsoft/cognitive-services-speech-sdk-js

Unable to contact server. StatusCode: 1006, undefined Reason: Unexpected server response: 200

@glharper It gives Unable to contact server. StatusCode: 1006, undefined Reason: Unexpected server response: 200.

Thanks for trying.

bivas6

comment created time in a month

issue comment microsoft/cognitive-services-speech-sdk-js

Unable to contact server. StatusCode: 1006, undefined Reason: Unexpected server response: 200

Thanks @glharper, but it didn't work.

There is no 'Unable to contact server' error, but there is no other response either.

BTW, if this is the way to connect to the server, DialogServiceConnector.d.ts needs an update:

    /**
     * Starts a connection to the service.
     * Users can optionally call connect() to manually set up a connection in advance, before starting interactions.
     *
     * Note: On return, the connection might not be ready yet. Please subscribe to the Connected event to
     * be notified when the connection is established.
     * @member DialogServiceConnector.prototype.connect
     * @function
     * @public
     */
    connect(): void;

Thanks!

bivas6

comment created time in 2 months

issue opened microsoft/cognitive-services-speech-sdk-js

CPU is exploding on load

Hi,

We are trying to run multiple simultaneous startContinuousRecognitionAsync requests, but CPU usage is exploding.

We ran about 7 requests/second, and in about 3 minutes the machine crashed. Note that we are able to make 20 requests/second with this package, with CPU usage at about 40-50%.

chrome inspector cpuprofile file is attached. azure-ms-cpu.zip

cc: @orgads

Thanks!

created time in 2 months

issue opened microsoft/cognitive-services-speech-sdk-js

Unable to contact server. StatusCode: 1006, undefined Reason: Unexpected server response: 200

Hi,

I'm trying to use the sdk as described here, but I'm getting this weird error: Unable to contact server. StatusCode: 1006, undefined Reason: Unexpected server response: 200

I followed this guide to connect existing Bot with existing Cognitive Service.

code:

const sdk = require('microsoft-cognitiveservices-speech-sdk');
const fs = require('fs');
const Throttle = require('throttle');

async function main() {
  console.log('start')
  const fileName = 'test-message-16k.wav'
  const audioStream = fs.createReadStream(fileName);
  const pushStream = sdk.AudioInputStream.createPushStream();
  const handleData = (buf) => {
    pushStream.write(buf.slice());
    console.log('data')
  };

  audioStream.pipe(new Throttle(16384))
    .on('data', handleData)
    .once('close', () => {
      console.log('close')
      pushStream.close();
      audioStream.removeListener('data', handleData);
    });
  const audioConfig = sdk.AudioConfig.fromStreamInput(pushStream)
  const botConfig = sdk.BotFrameworkConfig.fromSubscription('subscription-key', 'region')
  const reco = new sdk.DialogServiceConnector(botConfig, audioConfig)
  reco.recognizing = (_s, event) => {
    console.log('hypothesis', {
      text: event.result.text
    });
  };

  reco.recognized = (_s, event) => {
    ... <some code here>
    }
  };

  reco.canceled = (_s, event) => {
    ... <some code here>
  };

  // Signals that a new session has started with the speech service
  reco.sessionStarted = (_s, event) => {
    ... <some code here>
  };

  // Signals the end of a session with the speech service.
  reco.sessionStopped = (_s, event) => {
    ... <some code here>
  };

  // Signals that the speech service has started to detect speech.
  reco.speechStartDetected = (_s, event) => {
    ... <some code here>
  };

  // Signals that the speech service has detected that speech has stopped.
  reco.speechEndDetected = (_s, event) => {
    ... <some code here>
  };
  console.log('connect')
  reco.connect()
  reco.listenOnceAsync(undefined,
    (e) => {
      console.error('error')
      console.error(e)
    }
  );
  console.log('end')
}
main().catch((r) => console.error(r))

The output is something like:

data
data
...
reco
connect
end
data
data
....
error
Unable to contact server. StatusCode: 1006, undefined Reason:  Unexpected server response: 200
data
data

These 'subscription-key' and 'region' values are used in other parts of our code to connect to the Azure Speech SDK and Direct Line, and there it works as expected.

Please advise.

Thanks

created time in 2 months

push event bivas6/pyshark

Yaakov Bivas

commit sha d7d197301636f0208674241ec20df7514ce85369

get_interfaces: reject USB interfaces

Sometimes there are interfaces that cannot be captured; their name starts with '\\.\'. For example: `8. \\.\USBPcap1 (USBPcap1)`
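
A minimal sketch of the filter described in this commit; the helper name is illustrative and it assumes the interface names have already been read from tshark:

def reject_uncapturable_interfaces(interface_names):
    # Interfaces whose names start with '\\.\' (USBPcap devices, for
    # example) cannot be captured, so drop them from the list.
    # reject_uncapturable_interfaces(['eth0', '\\\\.\\USBPcap1']) -> ['eth0']
    return [name for name in interface_names
            if not name.startswith('\\\\.\\')]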

view details

push time in 2 months

push event bivas6/pyshark

Yaakov Bivas

commit sha bd264489be7a7aa050c969d02b2171cce683bf4e

Update tshark.py

view details

push time in 2 months

push event bivas6/pyshark

Yaakov Bivas

commit sha 4a51357337db0e88485a2b15cf60e1b8773357ea

get_interfaces: reject USB interfaces

Sometimes there are interfaces that cannot be captured; their name starts with '\\.\'. For example: `8. \\.\USBPcap1 (USBPcap1)`

view details

push time in 2 months

pull request comment KimiNewt/pyshark

Fix wrong get_interfaces implementation

cc: @orgads

bivas6

comment created time in 2 months

PR opened KimiNewt/pyshark

Fix wrong get_interfaces implementation

Output example (Unix):

  1. veth44bb9fb
  2. veth00863cf
  3. eth0
  4. veth4ff8cd1
  5. veth2ecda2a
  6. br-cb423efa22d6 ...

Windows:

  1. \Device\NPF_{2D2C765C-35AC-4DB6-7KJd-4211E322ED6D} (Local Area Connection* 8)
  2. \Device\NPF_{9105521D-80EC-4C2A-95K9-C100FCA7EC8B} (Local Area Connection* 7)
  3. \Device\NPF_{AFD838D6-D3D9-47D8-ABE7-B1824198D3EA} (VirtualBox Host-Only Network)
  4. \Device\NPF_{94CACD78-2EDD-4F65-8788-354803DF9BD7} (Ethernet 2)
  5. \Device\NPF_{D40E026A-BE8F-4490-WYZ7-8CD232CFEC43} (Local Area Connection* 6)
  6. \Device\NPF_{A7A2D260-7445-4EB4-840D-F82819AEE871} (Ethernet)
  7. \Device\NPF_Loopback (Adapter for loopback traffic capture) ...

The previous implementation returns [1, 2, 3, ...] instead of a list of interface names.
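
A minimal sketch of the parsing this PR describes, assuming get_interfaces starts from the numbered `tshark -D` style output shown above; the function below is only an illustration, not the actual pyshark change:

def parse_interface_names(tshark_output):
    # Each line looks like "3. eth0" on Unix or
    # "4. \Device\NPF_{...} (Ethernet 2)" on Windows; keep everything
    # after the leading "N. " index instead of returning the index itself.
    names = []
    for line in tshark_output.splitlines():
        _, sep, name = line.strip().partition('. ')
        if sep and name:
            names.append(name)
    return names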

+1 -1

0 comment

1 changed file

pr created time in 2 months

create branch bivas6/pyshark

branch : my

created branch time in 2 months

fork bivas6/pyshark

Python wrapper for tshark, allowing python packet parsing using wireshark dissectors

fork in 2 months

issue comment npm/cli

[BUG] <cb() never called!>

Same for me, trying to install microsoft-cognitive-services-speech-sdk.

NPM: 6.14.4 NODE: 12.18.0 OS: WIN 10

skyfall17

comment created time in 3 months

create branch bivas6/cognitive-services-speech-sdk-js

branch : my

created branch time in 3 months

fork bivas6/cognitive-services-speech-sdk-js

Microsoft Azure Cognitive Services Speech SDK for JavaScript

fork in 3 months
