Apple’s VR or AR headsets could move a user’s avatar by tracking the user’s body movements, while battery life could be extended with some clever data transmission techniques.
Apple is believed to be developing a virtual reality or augmented reality headset, as well as AR smart glasses known as “Apple Glass.” Entering a steadily growing human-computer interaction field, Apple is also working out solutions to various problems in order to distinguish its products from other head-mounted systems.
In a pair of patents granted Tuesday by the U.S. Patent and Trademark Office, Apple proposes ways to improve what its headsets can offer, both in how users interact with them and in how they communicate with a host device.
The first patent, “Generating Body Position Information,” covers the ability of the system to track a user’s movements and then use that data to perform other related actions.
Apple believes that some immersive computer-generated reality experiences require knowing the user’s body posture. In some experiences, the VR or AR application may make changes to what it presents, depending on the user’s position or movements, such as a guard in a game that responds differently to the user’s posture.
More explicitly, Apple suggests that knowing body posture could be used to control a user’s avatar. This could be useful in situations like the popular online social experience VRChat, which uses the movement of held controllers and other hardware to alter the movements of a user’s avatar.
According to the patent, posture could be determined using multiple cameras and neural networks, with several neural networks working together to model the individual joints of the body. Each network works individually, but the results feed together to create the whole-body model.
Training would be carried out in various ways, covering not only how the networks interpret the camera data, but also how the networks interact with each other. This includes “determining respective topologies of the branched plurality of neural network systems.”
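As a rough illustration of that branched arrangement, the sketch below gives each joint its own small network over shared camera features, with the per-branch outputs combined into one whole-body pose. This is not Apple’s implementation: the joint names are paraphrased from the patent’s wording, and the tiny numpy networks and feature vector are purely illustrative assumptions.

```python
import numpy as np

# Hypothetical joint list, paraphrased from the patent's wording.
JOINTS = ["neck", "shoulder", "elbow", "wrist", "pelvis", "knee", "ankle", "knuckle"]

rng = np.random.default_rng(0)

class JointNet:
    """A tiny per-joint branch: shared image features -> (x, y, z) position."""
    def __init__(self, in_dim, hidden=16):
        self.w1 = rng.standard_normal((in_dim, hidden)) * 0.1
        self.w2 = rng.standard_normal((hidden, 3)) * 0.1

    def predict(self, features):
        h = np.maximum(0.0, features @ self.w1)  # ReLU hidden layer
        return h @ self.w2                       # 3D joint position

def body_model(features, nets):
    """Each branch predicts one joint; the results feed together into a pose."""
    return {name: net.predict(features) for name, net in nets.items()}

# Stand-in for features extracted from multiple camera views.
features = rng.standard_normal(64)
nets = {name: JointNet(64) for name in JOINTS}
pose = body_model(features, nets)
print(sorted(pose))
```

In a trained system the branch topologies themselves would also be learned, per the patent’s language; here they are fixed for simplicity.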
As for how much information the network can glean about a person’s posture, the tracked joints are said to cover the “neck region, shoulder joints, elbow joints, wrist joints, pelvic joint, knee joints, ankle joints, and knuckles.” The edges, elements connecting the joints, are also determined by the system.
The list covers all the essentials needed to create a model of a user’s body, even if not at a high level of detail. For example, while “knuckles” are mentioned, probably as a key element for gestures, toes and foot movements are apparently not covered to the same degree.
The patent lists its inventors as Andreas N. Bigontina, Behrooz Mahasseni, Gutemberg B. Guerra Filho, Saumil B. Patel and Stefan Auer. It was originally filed on September 23, 2019.
The second virtual reality-related patent, “Adaptive Wireless Transmission Schemes,” aims to manage communications between a headset and a host computer.
While all-in-one systems like the Oculus Quest exist, they can add weight to a setup, and while tethered configurations can reduce that weight, the cable itself can be a problem. One answer may be a wireless communications system, which reduces weight and eliminates the cable.
However, even wireless systems have drawbacks. Transmitting data consumes power, and wireless links typically offer less bandwidth than a wired connection. There are also inherent problems with interference and other disruptions.
In its patent, Apple suggests a wireless system could be used, but with the amount of video data transmitted at any one time reduced. Instead of sending two full streams, one for each eye, Apple wants to halve the amount of bandwidth used.
It plans to do this by interlacing frame transmissions for left-eye and right-eye data. Once the frame data for both eyes is received, the system displays it to the user.
To further reduce the data transmitted, not every frame needs to be complete. Apple suggests partial frame transmissions could be sent, covering only elements that urgently need updating while reusing elements of previous frames for sections that have not changed.
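The interlaced and partial transmissions described above can be sketched roughly as follows. This is a hypothetical simulation, not code from the patent; the tiny frame size, the row-level “partial update,” and the `Receiver` class are illustrative assumptions.

```python
import numpy as np

H, W = 4, 6  # tiny stand-in frame dimensions

def make_stream(n_frames):
    """Interlace left- and right-eye frames so only one eye's data is sent per slot."""
    for i in range(n_frames):
        eye = "left" if i % 2 == 0 else "right"
        yield eye, np.full((H, W), i, dtype=np.int32)

class Receiver:
    """Holds the most recent frame per eye; displays once both eyes have data."""
    def __init__(self):
        self.frames = {"left": None, "right": None}

    def receive(self, eye, frame):
        self.frames[eye] = frame

    def receive_partial(self, eye, row, new_row):
        # Partial transmission: update only the region that changed,
        # reusing the rest of the previously received frame.
        self.frames[eye][row] = new_row

rx = Receiver()
for eye, frame in make_stream(2):
    rx.receive(eye, frame)
# Both eyes are now populated; a partial update refreshes one row of the left eye.
rx.receive_partial("left", 0, np.full(W, 99, dtype=np.int32))
print(rx.frames["left"][0][0], rx.frames["right"][0][0])  # 99 1
```

The bandwidth saving comes from the sender: per time slot it transmits one eye’s frame, or only the changed rows, rather than two full frames.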
It also mentions eye tracking, which could play into the system. By knowing where the user is looking, those areas can be prioritized for updates, while rendering peripheral regions at lower detail could cut the amount of data needed for parts of the screen that are not being looked at, again saving bandwidth.
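A gaze-prioritized scheme of that kind could be sketched as below, assigning full detail only to tiles near the gaze point. This is an illustrative assumption about how such prioritization might work, not the patent’s method; the tile grid, radii, and detail levels are invented for the example.

```python
import numpy as np

def detail_map(h, w, gaze, full_radius):
    """Assign a detail level to each tile: full detail near the gaze point,
    progressively less toward the periphery, shrinking the data to transmit."""
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(ys - gaze[0], xs - gaze[1])
    return np.where(dist <= full_radius, 2,               # foveal: full detail
           np.where(dist <= 2 * full_radius, 1, 0))       # mid ring / periphery

levels = detail_map(8, 8, gaze=(2, 2), full_radius=2)
print(levels[2, 2], levels[2, 5], levels[7, 7])  # 2 1 0
```

In practice the gaze point would come from the headset’s eye tracker each frame, and the per-tile level would drive how much image data is encoded and sent.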
Such a system could offer extremely high effective frame rates of “at least 100 frames per second using a shutter,” though it may not be as demanding as it sounds. One claim has frames received at “less than 60 frames per second,” but because frames are displayed to alternating eyes, the perceived rate can exceed 100 frames per second.
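The arithmetic behind that claim is simple; assuming, purely for illustration, 55 frames per second delivered to each eye:

```python
# Each eye receives frames at just under 60 fps ("less than 60 frames per
# second" in the claim); alternating the shutter between eyes doubles the
# effective display rate.
per_eye_fps = 55                 # illustrative assumption, not a patent figure
combined_fps = per_eye_fps * 2   # left and right eyes alternate
print(combined_fps)  # 110
```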
Other elements include monitoring the wireless links and using multiple connections to maintain high bandwidth and low transmission times.
The patent was originally filed on May 14, 2019, and lists as inventors Aleksandr M. Movshovich, Arthur Y. Zhang, Hao Pan, Holly E. Gerhard, Jim C. Chou, Moinul H. Khan, Paul V. Johnson, Sorin C. Cismas, Sreeraman Anantharaman and William W. Sprague.
Apple files numerous patent applications on a weekly basis, and while the existence of a patent indicates areas of interest for its research and development teams, it does not guarantee the ideas will appear in a future product or service.
Previous patent applications
Apple has numerous patent filings in the fields of virtual reality and augmented reality, and there is some crossover with these two patents.
For example, it has looked at more sophisticated displays in the past, including a 2019 patent for a system that could offer high headset refresh rates. In 2020, it explored a similar system that used eye tracking to decide which sections of the display to devote rendering resources to.
Gesture recognition has come up a few times before, with a 2018 patent explaining how cameras under a headset could track a user’s hands to manipulate 3D documents. A 2015 filing suggested machine vision could recognize human hand gestures at a distance.
Stay up to date with all things Apple in the weekly AppleInsider Podcast, and get a quick news update from AppleInsider Daily. Just say “Hey, Siri” to your HomePod mini and ask for these podcasts, as well as our latest HomeKit Insider episode.
If you want an ad-free AppleInsider Podcast experience, you can support the AppleInsider podcast by subscribing for $5 a month through the Apple Podcasts app, or via Patreon if you prefer any other podcast player.