
Displaying large models in augmented reality
Disclosure Number: IPCOM000236647D
Publication Date: 2014-May-07
Document File: 4 page(s) / 121K

Publishing Venue

The Prior Art Database


One current limitation of Augmented Reality (AR) technology is that when a QR code is scanned to display a model, the device must keep the QR code in view. This is necessary because the device requires a point of origin to display the model in relation to. It makes it difficult to view large models using this type of technology, as the device cannot be moved away from the QR code to take in the whole scope of the object. This article describes a method that enables large models to be viewed despite this limitation.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 51% of the total text.



When a QR code is scanned in order to view an Augmented Reality (AR) overlay, the QR code cannot be moved out of the device's viewpoint, otherwise the model is removed from view. This is a drawback for origin-reliant AR technology because it limits the size of models that can be viewed on a device: if the model is too big to fit on the device's screen, it must be scaled down. If the model is not scaled to the screen size of the device, the user will not be able to pan around it, because once the QR code leaves the camera's view the model no longer has a point of origin to be displayed in relation to.

    The solution to this problem is to map the device's camera frustum, and all QR codes it sees, in virtual space, allowing algorithms commonly used in computer-games programming to be applied. When the camera detects a QR code, a marker is placed in virtual space so that the camera has a reference to where the model should be drawn. Mapping the QR code in this way allows those commonly used algorithms to detect whether the model can be seen by the camera. Using the accelerometer and gyroscope available within the device, the QR code's absolute position in space can be calculated relative to the position of the camera.
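
The marker-placement step above can be sketched as follows. This is a minimal sketch that assumes the platform already provides a fused orientation estimate from the accelerometer and gyroscope; the function and variable names (`rotation_from_orientation`, `marker_world_position`) are illustrative, not part of the original disclosure.

```python
import numpy as np

def rotation_from_orientation(yaw, pitch, roll):
    """Build a world-space rotation matrix from device orientation angles
    (radians).  The sensor-fusion details (accelerometer + gyroscope) are
    abstracted away; a real implementation would use the platform's fused
    pose estimate."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def marker_world_position(camera_pos, camera_rot, marker_offset_cam):
    """Place a QR marker in virtual space: transform the marker's offset,
    measured in camera coordinates, into world coordinates."""
    return camera_pos + camera_rot @ marker_offset_cam
```

Once a marker has been placed this way, it keeps its world position even after the physical QR code leaves the camera's view.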

    With the positions of the camera and the QR code marker known relative to each other, it is possible to calculate whether the AR model intersects the camera's frustum using bounding-object tests. Taking the position of the QR code marker and the dimensions of the model it represents, a bounding-object test can be applied against the camera's frustum: if any part of the model is visible to the camera, it is rendered to the screen. This improves on current technology, which shows either all of the model or none of it.
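
One common bounding-object test of this kind is a bounding-sphere-vs-frustum check, sketched below under the assumption that the frustum is given as six inward-facing planes; the name `sphere_intersects_frustum` is illustrative.

```python
import numpy as np

def sphere_intersects_frustum(center, radius, planes):
    """Bounding-sphere-vs-frustum test.  `planes` holds (unit_normal, d)
    pairs with inward-facing normals, so a point p is on the visible side
    of a plane when dot(unit_normal, p) + d >= 0.  The sphere is at least
    partly visible unless it lies entirely outside some plane."""
    for normal, d in planes:
        if np.dot(normal, center) + d < -radius:
            return False  # wholly outside this plane: cull the model
    return True  # potentially visible: submit the model for rendering
```

A model's bounding sphere can be centred on its QR marker with a radius covering the model's dimensions, so a partly visible model is still rendered rather than dropped outright.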

    This idea solves the problem of positioning the camera and QR codes in 3D space by using three stages:

1. Generating and maintaining the virtual camera, listed under the section 'Camera'.

2. Calculating the distance of the QR codes from the camera and placing them in virtual space, listed under the section 'QR codes'.

3. The rendering of the models to the device's screen, listed under the section 'Rendering'.

The combination of data from these stages allows the system to identify where it exists in space, which QR code models are nearby, and to display any that it can see. It is assumed the device has an on-board camera and some means of detecting motion.
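
Taken together, the three stages can be sketched as a single per-frame pass. All names and data shapes here are illustrative assumptions, not from the disclosure:

```python
import numpy as np

def process_frame(camera_pose, detections, models, frustum_test):
    """One pass through the three stages (all names illustrative).
    camera_pose:  (position, rotation) from the motion sensors (Camera).
    detections:   {code_id: offset_in_camera_space} from the scanner (QR codes).
    models:       {code_id: bounding_radius} of the registered AR models.
    frustum_test: callable(center, radius) -> bool (Rendering).
    Returns the world positions of the models that should be drawn."""
    position, rotation = camera_pose
    to_draw = {}
    for code_id, offset in detections.items():
        world = position + rotation @ offset          # place marker in space
        if code_id in models and frustum_test(world, models[code_id]):
            to_draw[code_id] = world                  # visible: draw it
    return to_draw
```

In a real system the marker positions would persist between frames, so models remain drawable after their QR codes leave view.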


    When the device is initialised, it positions a virtual camera in a scene at coordinates (0,0,0). The variables required to initialise the virtual camera's frustum, such as field of view and clipping planes, are taken from the properties of the physical camera on the device. The camera frustum denotes the total area the camera can see from its position.
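
A minimal sketch of that initialisation, assuming the camera sits at the origin looking down the negative z-axis and the frustum is expressed as six inward-facing planes (the function name and plane representation are assumptions for illustration):

```python
import numpy as np

def init_virtual_camera(fov_y_deg, aspect, near, far):
    """Initialise the virtual camera's frustum at (0,0,0) from the physical
    camera's properties: vertical field of view (degrees), aspect ratio,
    and near/far clipping distances.  Returns six inward-facing frustum
    planes as (unit_normal, d) pairs, where a point p is inside a plane
    when dot(unit_normal, p) + d >= 0."""
    half_v = np.radians(fov_y_deg) / 2.0          # half vertical FOV
    half_h = np.arctan(np.tan(half_v) * aspect)   # half horizontal FOV
    cv, sv = np.cos(half_v), np.sin(half_v)
    ch, sh = np.cos(half_h), np.sin(half_h)
    return [
        (np.array([0.0, 0.0, -1.0]), -near),  # near plane: z <= -near
        (np.array([0.0, 0.0, 1.0]), far),     # far plane:  z >= -far
        (np.array([ch, 0.0, -sh]), 0.0),      # left
        (np.array([-ch, 0.0, -sh]), 0.0),     # right
        (np.array([0.0, cv, -sv]), 0.0),      # bottom
        (np.array([0.0, -cv, -sv]), 0.0),     # top
    ]
```

A point directly in front of the camera, between the clipping planes, then satisfies all six plane inequalities, while a point behind the camera fails the near-plane test.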
