
Intelligent Line Muting during Conference Call using Cognitive & Contextual Disturbance Techniques

IP.com Disclosure Number: IPCOM000246023D
Publication Date: 2016-Apr-26

Publishing Venue

The IP.com Prior Art Database

Abstract

Disclosed is a system and method to utilize contextual disturbance techniques to intelligently mute lines during a conference audio or video call. The core novelty is in the user's process of interacting with the conference system to indicate the type of bothersome noise and the system's response to such a request.

Intelligent Line Muting during Conference Call using Cognitive & Contextual Disturbance Techniques

During a conference call with multiple users, background noise and the ensuing requests for a user to mute the line often interrupt the meeting. A method is needed to mute the specific lines that are disrupting the conversation based on the context of the request.

Disclosed is a system and method to utilize contextual disturbance techniques to intelligently mute lines during a conference audio or video call.

The purpose of the novel solution is to enable users of a collaboration tool to indicate frustration with a particular type of background noise without interrupting the flow of the ongoing collaboration with other users. The solution does not claim uniqueness in the analysis of sounds or in mapping a sound to a particular conference line; it leverages existing art to identify sounds and offending lines. The claimed novelty lies in the user's process of interacting with the conference system to indicate the type of bothersome noise and in the system's response to such a request.

The components and process for implementing the novel system and method follow:

1. A teleconferencing system is enabled with the proposed system

2. Each line dials into a call per current processes

3. Call participants interact with the call per current processes (i.e., manually putting a line on mute as needed)

4. The system leverages existing art to analyze information about the noises on each line, determining:

A. Volume of the noise

B. Type of noise (e.g., baby crying, dog barking, bird chirping, coworkers talking, coffee shop, someone eating, someone typing, heavy breathing, dishwashing, ambulance sirens, etc.)

C. Voice identification (e.g., background conversation would be detected as two voices that are not conference participants, or as two voices coming from the same line)

5. The system analyzes the noise information from step 4 and compares it to known noises to deep tag the conference line with that particular noise type (an illustrative sketch of this tagging and the resulting mute handling appears after this list). This could be done:

A. At all times, deep tagging the content so that a future request to mute that type of noise can be evaluated against the likelihood of each user being the potential culprit

B. Only when the user unmutes the associated line

6. The background noise identified in step 4 becomes overwhelming, and the moderator or another participant desires that whoever is making the noise eit...
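
To make steps 4 through 6 concrete, the following is a minimal sketch, not part of the original disclosure, of how a conferencing system might represent noise observations, deep tag each line, and act on a contextual mute request. Python, the class and function names (NoiseObservation, ConferenceLine, handle_mute_request), and the noise categories are illustrative assumptions; the audio classification itself is assumed to come from existing art, as the disclosure states, and is therefore stubbed out.

    from dataclasses import dataclass, field

    # Hypothetical noise categories; the disclosure relies on existing art for
    # the actual audio classification (step 4), so that part is assumed here.
    KNOWN_NOISES = {
        "baby crying", "dog barking", "bird chirping", "background conversation",
        "coffee shop", "eating", "typing", "heavy breathing", "dishwashing", "siren",
    }

    @dataclass
    class NoiseObservation:
        """One analysis result for a line (step 4): type, volume, voices heard."""
        noise_type: str                                 # e.g., "dog barking"
        volume_db: float                                # loudness of the noise
        voice_ids: list = field(default_factory=list)   # speakers detected on the line

    @dataclass
    class ConferenceLine:
        """A dialed-in line, deep tagged with the noises observed on it (step 5)."""
        line_id: str
        participant: str
        muted: bool = False
        noise_tags: dict = field(default_factory=dict)  # noise_type -> peak volume

        def tag_noise(self, obs: NoiseObservation) -> None:
            # Keep the loudest observation per noise type so a later request can
            # rank lines by how likely each is to be the culprit.
            current = self.noise_tags.get(obs.noise_type, float("-inf"))
            self.noise_tags[obs.noise_type] = max(current, obs.volume_db)

    def handle_mute_request(lines, requested_noise):
        """Step 6 (sketch): a participant asks that a particular noise be muted.

        The requested noise type is matched against each line's deep tags and
        the most likely culprits are muted instead of interrupting the meeting."""
        if requested_noise not in KNOWN_NOISES:
            return []   # unknown noise type; fall back to current manual processes
        culprits = [ln for ln in lines
                    if requested_noise in ln.noise_tags and not ln.muted]
        culprits.sort(key=lambda ln: ln.noise_tags[requested_noise], reverse=True)
        for ln in culprits:
            ln.muted = True
        return culprits

    if __name__ == "__main__":
        lines = [ConferenceLine("line-1", "Alice"), ConferenceLine("line-2", "Bob")]
        # Steps 4-5: the (assumed) classifier reports a barking dog on Bob's line.
        lines[1].tag_noise(NoiseObservation("dog barking", volume_db=72.0))
        # Step 6: the moderator asks that whoever has the barking dog be muted.
        muted = handle_mute_request(lines, "dog barking")
        print([ln.line_id for ln in muted])   # prints ['line-2']

In this sketch, keeping only the peak volume per noise tag (corresponding to option 5A, tagging at all times) keeps the matching in step 6 simple while still letting the system mute the loudest offender first; option 5B would simply restrict when tag_noise is invoked.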