
Sharing the location of AR markers between multiple users

IP.com Disclosure Number: IPCOM000242399D
Publication Date: 2015-Jul-13
Document File: 7 page(s) / 3M

Publishing Venue

The IP.com Prior Art Database

Abstract

AR markers are often smaller than the overlay they relate to, so it is possible for a user to want an overlay to still be shown even when the marker isn't visible. This publication describes a method of sharing the location of markers so that the information from the marker can still be shown when the marker is not visible.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 44% of the total text.



Augmented Reality (AR) overlays depend on the AR device recognising a marker, querying this marker, and displaying the information received relative to the marker and potential other anchor points. There is a fundamental issue with the fact that the AR marker is often significantly smaller than the overlay itself, and therefore there are situations in which an AR marker can be obscured by real world objects, meaning that their corresponding overlay is not displayed, even though the areas in which large amounts of the overlay would be displayed are not obscured. This is unfortunate for users as it means they either miss out on the information that the marker would give them, or they have to move in order to see the marker to get the information. The user might not even know that the marker was there, and so would not know that they needed to move to see it - this means they could miss information completely.

    The solution is to use networked AR devices to share information on the markers they can see in a scene, and their locations, so that other users can display the overlays on their devices without seeing the marker themselves. This would allow devices to show the parts of overlays that the user should be able to see, without seeing the marker itself. The advantage of this is that the user is not missing out on information unnecessarily.

    For the purpose of explanation we will assume there are two devices (A and B) in the scene, but this could be applied to circumstances with any number of devices.

Step 1, categorise all markers in the scene and broadcast to networked devices

    Both devices log all of the markers they can see in the scene and send the list of markers to each other.

    Step 2, devices compare incoming set of scene objects against their own recognised scene objects

    Both devices compare the incoming lists of what other devices can see to the internal lists they have of what they can see.
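    Steps 1 and 2 can be sketched as a simple set comparison. The following is a minimal illustration, not the disclosure's implementation; the function name, marker ids, and data shapes are all assumptions made for clarity:

```python
# Each device logs the marker ids it recognises, broadcasts that set to its
# networked peers, and diffs the incoming sets against its own to find
# markers that other devices can see but it cannot.

def markers_unseen_locally(own_markers, remote_markers):
    """Return the marker ids a remote device reports that this device lacks."""
    return set(remote_markers) - set(own_markers)

# Illustrative scene: each device sees two of three posters.
device_a = {"poster_1", "poster_2"}
device_b = {"poster_2", "poster_3"}

# After the broadcast, B learns that A can see poster_1, which B cannot.
assert markers_unseen_locally(device_b, device_a) == {"poster_1"}
```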

    Step 3, if a marker is found that isn't in its own dictionary, query origin device for image anchor points

    If B finds a marker in the comparison (i.e. in A's view) that doesn't exist in its local dictionary, it queries A for the position of the hidden marker, and sends A a captured image of its point of view, along with the positions of other markers in its scene.
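    The query in Step 3 carries three things: the id of the unknown marker, B's captured point-of-view image, and the positions of the markers B can see. A minimal sketch of such a message follows; the class name, field names, and 2D coordinate representation are illustrative assumptions, not from the disclosure:

```python
# Sketch of the Step 3 query that B sends to A after discovering a marker
# that appears only in A's broadcast list.

from dataclasses import dataclass, field


@dataclass
class MarkerQuery:
    requester: str        # id of the querying device (B)
    unknown_marker: str   # marker id found in A's list but not B's dictionary
    view_capture: bytes   # B's captured point-of-view image
    # Marker id -> (x, y) anchor point in B's view, for A to match against.
    visible_markers: dict = field(default_factory=dict)


query = MarkerQuery(
    requester="device_B",
    unknown_marker="poster_1",
    view_capture=b"<jpeg bytes>",
    visible_markers={"poster_2": (120, 80)},
)
assert query.unknown_marker == "poster_1"
```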

    Step 4, process scene anchor points from origin device and compare to own scene to match to points.

    A compares the incoming markers from B's scene to the markers in its own scene. If it gets a match, it positions the anchor points against each other to calculate the relation of the unknown marker's anchor point to the markers that it is able to match - to calculate the unknown marker's anchor point in B's point of view.
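    A minimal sketch of Step 4's calculation follows. It assumes, for clarity only, a translation-only relation between the two 2D views; a real system would estimate a full homography or 3D pose from the matched anchor points. All names and the averaging scheme are illustrative assumptions:

```python
# Device A estimates where a marker hidden from B would appear in B's view,
# using the anchor points of markers both devices can see.

def locate_hidden_marker(a_scene, b_scene, hidden_id):
    """Estimate the hidden marker's 2D anchor point in B's view.

    a_scene, b_scene: dicts mapping marker id -> (x, y) anchor point
    hidden_id: a marker id present in a_scene but absent from b_scene
    """
    shared = set(a_scene) & set(b_scene)
    if not shared:
        return None  # no common markers: the two views cannot be related
    # Average displacement between the two views over the matched markers.
    dx = sum(b_scene[m][0] - a_scene[m][0] for m in shared) / len(shared)
    dy = sum(b_scene[m][1] - a_scene[m][1] for m in shared) / len(shared)
    hx, hy = a_scene[hidden_id]
    return (hx + dx, hy + dy)


# A sees two posters plus the hidden marker; B sees only the two posters.
a_view = {"poster_1": (0, 0), "poster_2": (10, 0), "hidden": (5, 5)}
b_view = {"poster_1": (2, 3), "poster_2": (12, 3)}
assert locate_hidden_marker(a_view, b_view, "hidden") == (7.0, 8.0)
```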

    By observing where the other markers are in the captured image from the first device we can use object co-detection ( http://cvgl.stanford.edu/papers/bao_eccv12_codetection.pdf) to compare every known...