A framework for Sign Language to Speech and Speech to Sign Language translation, based on Recommender System Techniques.

IP.com Disclosure Number: IPCOM000250456D
Publication Date: 2017-Jul-19
Document File: 9 page(s) / 278K

Publishing Venue

The IP.com Prior Art Database

Abstract

A framework for integrating a sign-language-to-speech and speech-to-sign-language translation system with recommender system techniques. The latter are adopted to address the highly inflected nature of sign languages, supplementing the recognition algorithms with contextual information.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 13% of the total text.


1. Background:

According to the World Health Organization, 360 million people worldwide (5% of the world population) have some form of disabling hearing loss.

Presently, a person with hearing loss faces challenges when communicating with people who do not know their sign language. To access services from restaurants, fast-food chains, hospitals, government offices, etc., they usually require the aid of an interpreter, either an acquaintance or someone appointed by these facilities.

Our solution focuses on easing communication between a sign language speaker and a non-sign-language speaker by offering real-time sign language translation.

Also, the 'Americans with Disabilities Act of 1990' encourages business owners to provide appropriate auxiliary aids and services to all consumers. Our solution supports this need.

On a more technical note, sign languages have been observed to be highly inflected (i.e., one sign can be modulated, with very small changes, to carry different meanings according to the context) [1][2]. Users can modulate signs through diverse means: pauses in the movements, changes in direction, repetitions, etc. Furthermore, modulation can be accompanied by posture and facial expressions that act as contextual cues to facilitate the transmission of the message. Such a diversity of signs, with hard-to-disambiguate candidates, creates a challenge for automating sign language recognition. In our solution we address this challenge by adopting recommender system techniques, such as (user/item) collaborative filtering, to enhance recognition algorithms with contextual information.
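The idea of blending recognizer confidence with contextual evidence can be sketched as follows. This is a minimal, hypothetical illustration of item-based collaborative filtering for re-ranking ambiguous sign candidates: the co-occurrence counts, the sign labels, the blending factor ALPHA, and all function names are invented for this sketch and are not part of the disclosure.

```python
# Hypothetical sketch: re-ranking ambiguous sign candidates with
# item-item collaborative filtering over sign co-occurrence data.
# All data and parameters below are illustrative assumptions.

ALPHA = 0.5  # assumed blend between recognizer and context scores

# Toy co-occurrence counts: how often sign pairs appeared together in
# past conversations (stands in for user/item interaction data).
cooccur = {
    ("EAT", "FOOD"): 12, ("EAT", "BOOK"): 1,
    ("READ", "BOOK"): 10, ("READ", "FOOD"): 1,
}

def context_score(candidate, context_signs):
    """Average co-occurrence of a candidate sign with the recent context."""
    if not context_signs:
        return 0.0
    total = sum(cooccur.get((c, candidate), 0) + cooccur.get((candidate, c), 0)
                for c in context_signs)
    return total / len(context_signs)

def rerank(recognizer_scores, context_signs):
    """Blend raw recognizer confidence with normalized contextual evidence."""
    max_ctx = max(context_score(c, context_signs) for c in recognizer_scores) or 1.0
    blended = {
        sign: (1 - ALPHA) * p + ALPHA * context_score(sign, context_signs) / max_ctx
        for sign, p in recognizer_scores.items()
    }
    return max(blended, key=blended.get)

# The recognizer is unsure between two visually similar signs...
scores = {"FOOD": 0.48, "BOOK": 0.52}
# ...but the preceding sign "EAT" tips the decision toward "FOOD".
print(rerank(scores, ["EAT"]))  # → FOOD
```

With no context the raw recognizer score wins; with context, co-occurrence evidence can override a marginally higher visual score.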

Our solution also aims to ease communication through a Gesture-ahead functionality. By this means, our solution suggests possible sentences (and sentence completions) to users, based on what has been uttered so far in the conversation.
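A Gesture-ahead suggestion step could be sketched as prefix matching over a history of past utterances, ranked by frequency. The corpus, the ranking criterion, and the function names here are illustrative assumptions, not the disclosure's actual method.

```python
# Minimal "gesture-ahead" sketch: suggest sentence completions from a
# history of past utterances, matched by signed prefix.
from collections import Counter

# Toy utterance history (illustrative data only).
history = Counter([
    "I WANT COFFEE", "I WANT COFFEE", "I WANT WATER",
    "WHERE IS RESTROOM",
])

def suggest(prefix, k=2):
    """Return up to k past sentences starting with the signed prefix,
    most frequent first."""
    matches = [(s, n) for s, n in history.items() if s.startswith(prefix)]
    matches.sort(key=lambda sn: -sn[1])
    return [s for s, _ in matches[:k]]

print(suggest("I WANT"))  # → ['I WANT COFFEE', 'I WANT WATER']
```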

2. Related Work/Prior Art:

In this section we present related work, first by considering work in the three stages of gesture recognition, then by presenting closely related work on hand gesture recognition. Finally, we discuss the drawbacks of existing solutions.

The three stages of gesture recognition

Work in sign language translation is embedded in the broader context of work in gesture recognition [3][4]. Within this domain, the recognition problem is divided into stages:

1. gesture acquisition,
2. pre-processing/feature extraction,
3. recognition.
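The three stages above compose naturally into a pipeline. The skeleton below is a placeholder sketch under assumed names and trivial stand-in logic; none of the functions or labels come from the disclosure.

```python
# Skeleton of the three gesture-recognition stages (illustrative only).

def acquire(frame):
    """Stage 1: gesture acquisition, e.g. a camera frame or sensor reading."""
    return frame  # pass-through placeholder

def extract_features(raw):
    """Stage 2: pre-processing/feature extraction, e.g. hand keypoints."""
    return [len(raw)]  # trivial stand-in feature vector

def recognize(features):
    """Stage 3: recognition, mapping features to a sign label."""
    return "HELLO" if features[0] > 0 else "UNKNOWN"

def pipeline(frame):
    return recognize(extract_features(acquire(frame)))

print(pipeline("video-frame-bytes"))  # → HELLO
```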

Gesture acquisition stage: Acquisition can be performed through images, video, invasive/wearable motion sensors (e...