
System & Method to detect and display who's speaking in an on-line meeting Disclosure Number: IPCOM000186155D
Original Publication Date: 2009-Aug-11
Included in the Prior Art Database: 2009-Aug-11
Document File: 2 page(s) / 48K


Disclosed is a method to detect and display who is speaking in an on-line meeting

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 52% of the total text.




System & Method to detect and display who's speaking in an on-line meeting

It is highly valuable to know who is speaking in an on-line meeting or conference call. Current systems typically detect audio activity and use it to trigger the visualization. This usually requires high-cost telephony integration systems or full-featured VoIP. A very common problem occurs when integrating with telephony systems that lack mechanisms for detecting audio activity (e.g., when using a customer's conferencing bridge). VoIP simplifies this and can work in a decentralized system, but it makes it hard to integrate a traditional PSTN phone.

A different problem occurs when several people are in the same room attending an on-line meeting or conference call. Typically a single phone is shared, and if that single phone is associated with one user in the meeting/call, then it becomes confusing to remote attendees when others in the room speak.

There are ways to work around this problem, but most come with undesirable barriers. For example, all co-located users in the same room could each join the call via VoIP, but they would then either introduce feedback, be forced to invoke complex echo cancellation, or pay the price of added network bandwidth (even if everyone's speakers are muted). This would be a misuse of VoIP merely to achieve the desired visual effect. These sorts of workarounds are not desirable because of the barriers listed.

The concept described in this article is as follows:

For the local speaking state, the local device's microphone is used, but only to detect the audio level.

- In one mode of operation, local technology on the device watches for the audio level to pass a threshold; in that case the local device communicates a speaking state of "speaking" to the server.
- Likewise, when the audio level is below the threshold, the speaking state is "not speaking".
- Each user's speaking state is communicated to a centralized server, along with data identifying which meeting/call the user is in.
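The local threshold check described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the names `update_speaking_state`, `read_audio_level`, and the `send` callback, as well as the threshold value, are assumptions for the sake of the example.

```python
# Sketch of the local threshold-based speaking detection (hypothetical names).
THRESHOLD = 0.2  # assumed normalized audio level threshold

def update_speaking_state(level, last_state, send):
    """Map a microphone audio level to a speaking state.

    Reports the new state to the server (via `send`) only when it changes,
    so the server is not flooded with redundant updates.
    """
    state = "speaking" if level > THRESHOLD else "not speaking"
    if state != last_state:
        send(state)  # communicate the state change to the centralized server
    return state
```

In a real client, this would be driven by a loop that periodically samples the microphone level (e.g., a hypothetical `read_audio_level()`) and calls `update_speaking_state` with the previous result.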

For each meeting/call, the attendee's speaking state is maintained for each...
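The per-meeting bookkeeping described above could look something like the following. The class and method names and the data shapes are assumptions; the disclosure only specifies that the server maintains each attendee's speaking state per meeting/call.

```python
# Hypothetical server-side state: per meeting/call, per attendee speaking state.
from collections import defaultdict

class MeetingStates:
    def __init__(self):
        # meeting_id -> {user_id: "speaking" | "not speaking"}
        self._meetings = defaultdict(dict)

    def update(self, meeting_id, user_id, state):
        """Record the latest speaking state reported by a client."""
        self._meetings[meeting_id][user_id] = state

    def speakers(self, meeting_id):
        """Attendees currently marked as speaking, for the meeting display."""
        return [user for user, state in self._meetings[meeting_id].items()
                if state == "speaking"]
```

Connected meeting clients would then poll or be pushed the result of `speakers()` to drive the "who's speaking" visualization.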