
Taking notes when watching a video on mobile devices

IP.com Disclosure Number: IPCOM000248687D
Publication Date: 2016-Dec-27
Document File: 4 page(s) / 83K

Publishing Venue

The IP.com Prior Art Database

Abstract

This disclosure claims a method that enables note-taking while a user is watching a video on a mobile device. The major claim points are:

- Triggering the note-taking function without having to pause the video
- Automatic background processing of text, figures, and video clips
- Generating organized and readable notes



Taking notes when watching a video on mobile devices

People often watch videos such as lectures or open courses on mobile devices. These videos are rich in knowledge, so people would like to take notes for future learning. However, it is not easy to take notes while watching a video on a mobile device, for the following reasons:

- The screen of a mobile device can be too small to contain both the video frame and a notebook.
- The typing experience is not as convenient as that of a computer with a keyboard.
- One-hand operation is needed in some cases, for example, when watching a lecture video on a crowded subway.

Compared with the existing solutions, this disclosure has the following advantages:

- Uninterrupted watching experience
- A simple user experience suitable for mobile devices and one-hand operation
- Accurate notes based on both AI semantic analysis and the user's own logic

Detailed description:

1. Triggering the note-taking function without having to pause the video

By hand or by facial expression

By hand - on the front of the screen

When watching the video and seeing a point of interest, users can drag backward on the screen to capture a semantic block before the current timestamp, drag forward to capture a semantic block around the current timestamp, or zigzag to capture the semantic blocks both before and after the current timestamp. Users can also draw around the text or graphic of interest to select it. The selected text and graphics are captured and sent to the background for processing, as in the sketch below.
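A minimal sketch of how these drag gestures could map to capture windows follows. It assumes the names Point, GestureKind, CaptureWindow, classifyDrag, and windowFor, a 15-second semantic block length, and that a backward drag moves right to left; none of these details are specified in the disclosure.

// Hypothetical sketch: mapping on-screen drag gestures to capture windows.
// All names (GestureKind, CaptureWindow, classifyDrag, windowFor) and the
// 15-second block length are illustrative assumptions.

data class Point(val x: Float, val y: Float)

enum class GestureKind { DRAG_BACKWARD, DRAG_FORWARD, ZIGZAG, UNKNOWN }

/** A time range of the video (in seconds) to be captured for note-taking. */
data class CaptureWindow(val startSec: Double, val endSec: Double)

/** Classify a touch path by counting horizontal direction changes. */
fun classifyDrag(path: List<Point>): GestureKind {
    if (path.size < 2) return GestureKind.UNKNOWN
    var directionChanges = 0
    var lastSign = 0
    for (i in 1 until path.size) {
        val dx = path[i].x - path[i - 1].x
        val sign = if (dx > 0) 1 else if (dx < 0) -1 else 0
        if (sign != 0 && lastSign != 0 && sign != lastSign) directionChanges++
        if (sign != 0) lastSign = sign
    }
    val netDx = path.last().x - path.first().x
    return when {
        directionChanges >= 2 -> GestureKind.ZIGZAG      // back-and-forth strokes
        netDx < 0 -> GestureKind.DRAG_BACKWARD           // right-to-left drag (assumed)
        netDx > 0 -> GestureKind.DRAG_FORWARD            // left-to-right drag (assumed)
        else -> GestureKind.UNKNOWN
    }
}

/** Map a gesture to the semantic block(s) around the current timestamp. */
fun windowFor(kind: GestureKind, currentSec: Double, blockSec: Double = 15.0): CaptureWindow? =
    when (kind) {
        GestureKind.DRAG_BACKWARD -> CaptureWindow(currentSec - blockSec, currentSec)                      // block before now
        GestureKind.DRAG_FORWARD -> CaptureWindow(currentSec - blockSec / 2, currentSec + blockSec / 2)    // block around now
        GestureKind.ZIGZAG -> CaptureWindow(currentSec - blockSec, currentSec + blockSec)                  // both sides
        GestureKind.UNKNOWN -> null
    }

fun main() {
    val zigzag = listOf(Point(0f, 0f), Point(40f, 5f), Point(10f, 10f), Point(50f, 15f))
    val window = windowFor(classifyDrag(zigzag), currentSec = 120.0)
    println(window) // CaptureWindow(startSec=105.0, endSec=135.0)
}

Keeping gesture classification separate from the capture-window mapping lets the other trigger channels described below (back of the phone, in-air gestures, facial expression) reuse the same windowFor logic.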

By hand - on the back of the phone

When users have only one hand free, for example in environments like the metro, they can perform the same operations on the back of the phone.
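The disclosure does not specify how back-of-phone operations are sensed. One plausible approach on Android is to treat a sharp spike on the linear-acceleration z-axis as a back tap; the sketch below assumes this mechanism, and the class name BackTapTrigger and the threshold values are illustrative.

// Hypothetical sketch: detecting a tap on the back of the phone as a trigger
// via a spike on the gravity-compensated z-axis. Sensing mechanism, name, and
// thresholds are assumptions, not part of the original disclosure.

import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager

class BackTapTrigger(context: Context, private val onBackTap: () -> Unit) : SensorEventListener {
    private val sensorManager = context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
    private val accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_LINEAR_ACCELERATION)
    private var lastTriggerMs = 0L

    fun start() = sensorManager.registerListener(this, accelerometer, SensorManager.SENSOR_DELAY_GAME)
    fun stop() = sensorManager.unregisterListener(this)

    override fun onSensorChanged(event: SensorEvent) {
        // A sharp spike perpendicular to the screen suggests a tap on the back.
        val z = event.values[2]
        val now = System.currentTimeMillis()
        if (kotlin.math.abs(z) > TAP_THRESHOLD && now - lastTriggerMs > DEBOUNCE_MS) {
            lastTriggerMs = now
            onBackTap() // start the same capture flow as the front-screen gesture
        }
    }

    override fun onAccuracyChanged(sensor: Sensor?, accuracy: Int) = Unit

    companion object {
        private const val TAP_THRESHOLD = 12f // assumed; tune per device
        private const val DEBOUNCE_MS = 500L
    }
}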


By hand - around the phone screen

Users can also make gestures in the air around the phone screen to trigger the function. The camera captures and parses gestures such as dragging, and wearable devices can also help recognize the gestures.
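One way to accommodate several trigger channels is to feed them all into a common capture pipeline. The sketch below assumes a hypothetical GestureSource interface for camera-based or wearable recognizers and reuses GestureKind, CaptureWindow, and windowFor from the earlier sketch; none of these names appear in the disclosure.

// Hypothetical sketch: in-air (camera-based) and wearable channels feeding the
// same capture pipeline as on-screen gestures. GestureSource and
// NoteCaptureController are illustrative names.

interface GestureSource {
    /** Register a callback that receives recognized gestures (e.g., DRAG_BACKWARD). */
    fun onGesture(listener: (GestureKind) -> Unit)
}

class NoteCaptureController(
    private val currentPositionSec: () -> Double, // current playback position in seconds
) {
    fun attach(source: GestureSource) {
        source.onGesture { kind ->
            // Reuse the same gesture-to-window mapping defined for on-screen drags.
            windowFor(kind, currentSec = currentPositionSec())?.let { enqueueForProcessing(it) }
        }
    }

    private fun enqueueForProcessing(window: CaptureWindow) {
        // Hand the captured range to the background processing step (claim point 2).
        println("Queued capture ${window.startSec}..${window.endSec} s for background processing")
    }
}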

By facial expression

The note-taking mode can also be triggered by identifying the user's facial expression in real time. When users nod while watching the video, the mode captures the video for x +/- 1 seconds based on semantic analysis. The camera is opened to monitor facial expressions...
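A rough sketch of the nod trigger follows. It assumes a hypothetical FaceTracker that reports head pitch angles, a callback supplying the block length x from semantic analysis, and illustrative thresholds; none of these are taken from the disclosure.

// Hypothetical sketch: detect a nod from head-pitch angles and capture a window
// around it. FaceTracker, NodTrigger, and the thresholds are assumptions.

interface FaceTracker {
    /** Register a callback receiving head pitch in degrees (positive = looking down) and a timestamp. */
    fun onPitch(listener: (pitchDeg: Float, timestampSec: Double) -> Unit)
}

class NodTrigger(
    private val tracker: FaceTracker,
    private val semanticBlockSec: () -> Double, // x: block length suggested by semantic analysis (assumed callback)
    private val capture: (startSec: Double, endSec: Double) -> Unit,
) {
    private var downSince: Double? = null

    fun start() = tracker.onPitch { pitch, t ->
        if (pitch > NOD_PITCH_DEG) {
            if (downSince == null) downSince = t
        } else {
            // Head came back up: a short down-up motion counts as a nod.
            val started = downSince
            if (started != null && t - started < MAX_NOD_SEC) {
                val x = semanticBlockSec()
                // Capture roughly x (+/- 1 s, per the disclosure) around the nod.
                capture(t - x / 2, t + x / 2)
            }
            downSince = null
        }
    }

    companion object {
        private const val NOD_PITCH_DEG = 15f // assumed threshold
        private const val MAX_NOD_SEC = 0.8   // assumed maximum duration of a nod
    }
}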