
A method to scrape demonstration information from a video Disclosure Number: IPCOM000253937D
Publication Date: 2018-May-16
Document File: 2 page(s) / 139K

Publishing Venue

The Prior Art Database

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 53% of the total text.

Background:
1. When users watch a video that includes a demonstration, it is difficult to capture the speaker's keyboard actions, such as shortcuts, when the action is not displayed on screen and the speaker does not announce it.
2. When users practice the command lines from a demonstration locally, they can only retype the command lines word by word, which takes considerable time.
3. When users want to locate a specific point of the demonstration within the whole video, it costs time and tries users' patience.
4. Problems #1 and #2 also arise in live meetings with demonstrations: users cannot clearly follow the speaker's actions, especially when the speaker moves at high speed.
Claim Points:
Disclosed is an integration of a UI and a backend application layer implementation:
1. Users can get the speaker's keyboard actions in context, such as shortcuts.
2. Command lines typed in the demonstration can be transcribed into text, from which users can copy directly.
3. During the transcription, users can choose to save only the correct command lines, or the command lines together with their returned output.
4. The speaker's typed raw data can be shown beside the video, and users can jump to the corresponding video frame by clicking the raw data, reducing the time needed to locate it.

Descriptions:
The detailed process is shown in Figure 1.

Figure 1. Workflow of the backend application layer

1. When recording a video or sharing the screen in a meeting, a software-based keystroke logger runs on the speaker's computer and captures all the keystrokes in a text file where the timestamp is...
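The timestamped log in step 1 and the jump-to-frame behavior in claim 4 can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the log format (tab-separated epoch seconds and key name) and the helper names `log_keystroke` and `offset_in_video` are assumptions for the example, and a real logger would hook OS keyboard events rather than being called directly.

```python
def log_keystroke(log, key, timestamp):
    """Append one 'timestamp<TAB>key' entry to the keystroke log.

    In the disclosed workflow a keystroke logger on the speaker's
    computer would produce entries like these in a text file.
    """
    log.append(f"{timestamp:.3f}\t{key}")

def offset_in_video(keystroke_ts, recording_start_ts):
    """Seconds into the video at which this keystroke occurred.

    Clicking a raw-data entry can then seek the player to this offset,
    which is how claim 4's jump-to-frame lookup can work.
    """
    return keystroke_ts - recording_start_ts

# Example: the speaker presses Ctrl+S two seconds after recording starts.
start = 1000.0                      # recording start time (epoch seconds)
log = []
log_keystroke(log, "Ctrl+S", start + 2.0)
ts, key = log[0].split("\t")
print(key, offset_in_video(float(ts), start))  # Ctrl+S 2.0
```

Keeping the raw timestamp (rather than a video offset) in the log keeps the capture side independent of when recording actually started; the subtraction is deferred to playback time.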