Software, hardware and the real world: how an AR system is built

The quality of an AR application depends largely on the individual components and their interaction. So that you too can use augmented reality successfully, we explain the most important basics about the components of an AR system.


What components make up an AR system? Since an AR system is a computer-based system, the answer seems obvious: it consists of various hardware and software building blocks. But in addition to these obvious components, the real world can also be understood as part of an AR system. After all, in an augmented reality application, real and virtual content are inseparably linked.

To make the interrelationships easier to understand, the graphic below shows the abstracted structure of an AR system:

The central hardware components of an AR system are sensors and cameras, processors, and displays:

Sensors and cameras – the sensory organs

Sensors and cameras can be understood as the sensory organs of the AR system. They capture reality and provide the data needed for tracking and pattern recognition. The more precise and high-quality this capture is, the better the basis on which the virtual content is later integrated.

Sensors can also provide further information about the environment or the state of a machine, for example data on temperature, pH, or lighting conditions. Especially for service and maintenance in the context of Industry 4.0, including and evaluating such additional sensor data offers decisive added value.

The collected data is transferred to an AR application installed on the device.
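
To make this data flow more concrete, the following minimal sketch shows how camera frames and additional sensor readings could be bundled before being handed to the AR application. It assumes OpenCV for camera access; the SensorFrame structure and the read_temperature()/read_ph() helpers are purely hypothetical placeholders, not part of any specific AR product.

```python
# Minimal sketch: capture one camera frame plus additional sensor readings and
# bundle them for the AR application. Assumes OpenCV (opencv-python); the
# read_temperature()/read_ph() helpers are hypothetical placeholders for real sensors.
import time
from dataclasses import dataclass, field
from typing import Optional

import cv2
import numpy as np


@dataclass
class SensorFrame:
    """One bundle of perception data handed over to the AR application."""
    image: np.ndarray               # camera frame used for tracking
    temperature_c: Optional[float]  # optional machine/environment data
    ph_value: Optional[float]
    timestamp: float = field(default_factory=time.time)


def read_temperature() -> Optional[float]:
    return None  # placeholder: would query a real temperature sensor


def read_ph() -> Optional[float]:
    return None  # placeholder: would query a real pH sensor


def capture_sensor_frame(camera: cv2.VideoCapture) -> Optional[SensorFrame]:
    ok, frame = camera.read()
    if not ok:
        return None
    return SensorFrame(image=frame,
                       temperature_c=read_temperature(),
                       ph_value=read_ph())


if __name__ == "__main__":
    cam = cv2.VideoCapture(0)  # device index 0 is an assumption
    data = capture_sensor_frame(cam)
    if data is not None:
        print(data.image.shape, data.temperature_c, data.ph_value)
    cam.release()
```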

Tracking module

The tracking module checks whether a known pattern is contained in the transmitted data. If a target object is detected, this information is passed to the scene generator.
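
As an illustration, the sketch below shows one possible way such a check could look when the "known pattern" is a fiducial marker. It assumes OpenCV 4.7 or later with the ArUco module (opencv-contrib-python); the marker dictionary, the track() function, and the returned structure are assumptions for this example, not the implementation of any particular AR product.

```python
# Sketch of a tracking module based on fiducial markers (OpenCV >= 4.7 with the
# aruco module). It checks whether a known pattern (here: a 4x4 ArUco marker)
# appears in the frame and, if so, returns the information the scene generator needs.
from typing import Optional

import cv2
import numpy as np

ARUCO_DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
DETECTOR = cv2.aruco.ArucoDetector(ARUCO_DICT, cv2.aruco.DetectorParameters())


def track(frame: np.ndarray) -> Optional[dict]:
    """Return the marker id and corner coordinates if a known pattern is found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = DETECTOR.detectMarkers(gray)
    if ids is None or len(ids) == 0:
        return None  # no known target object in this frame
    # Pass the first detected target on to the scene generator.
    return {"target_id": int(ids[0][0]), "corners": corners[0].reshape(4, 2)}
```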

Scene generator

The scene generator combines the real and virtual world with the help of a render engine. To do this, it accesses a repository that contains all virtual content. Based on the information from the tracking module and the repository, a combined scene is generated, which is then shown on a display.
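
The following sketch illustrates the principle with simple 2D compositing: virtual content for the detected target is looked up in a repository and warped onto the marker position reported by the tracking module. The repository (a plain dictionary with one overlay image), the file name, and the generate_scene() function are assumptions for this example; a real render engine does considerably more than this.

```python
# Sketch of a scene generator: look up virtual content for the detected target in
# a repository and warp it onto the marker's position in the camera frame.
import cv2
import numpy as np

# Hypothetical repository: target id 0 maps to an overlay image on disk.
REPOSITORY = {0: cv2.imread("overlay_0.png")}


def generate_scene(frame: np.ndarray, tracking_result: dict) -> np.ndarray:
    """Combine the real camera frame with the virtual overlay for the target."""
    overlay = REPOSITORY.get(tracking_result["target_id"])
    if overlay is None:
        return frame  # no virtual content registered for this target
    h, w = overlay.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32(tracking_result["corners"])
    # Warp the virtual content onto the detected marker and blend it into the frame.
    matrix = cv2.getPerspectiveTransform(src, dst)
    warped = cv2.warpPerspective(overlay, matrix, (frame.shape[1], frame.shape[0]))
    mask = warped.sum(axis=2) > 0
    combined = frame.copy()
    combined[mask] = warped[mask]
    return combined
```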

Displays – the appearance

Output can take place via conventional visual displays such as screens, AR glasses, or projectors, but also via multisensory displays that address other senses, such as touch or smell.

Processor and graphics unit – the brain

The processor and graphics unit play a central role in all processing of data, execution of actions and interactions, and rendering of the display. In many mobile devices, these two components are combined on a single chip as an APU (accelerated processing unit). The APU interacts with the AR application and carries the workload. Because of this role, it can be understood as the brain of the system.

The capacity of this brain is crucial so that all processes and interactions can be handled and evaluated quickly. The quality of the output also depends on it: if the processor and graphics unit are overloaded, even an excellent display cannot compensate, and the rendered content appears jerky and delayed.

Which end devices are an option?

Augmented reality can be used on a variety of devices – from AR glasses to head-up displays (HUDs), e.g. in windshields or helmets, to smartphones and tablets. Currently, AR is most commonly used on smartphones and tablets.

Basically, end devices can be divided into three categories based on their design and functionality: optical see-through, video see-through, and projection-based AR.

Optical see-through

Optical see-through displays allow the user to look through to the real environment: the user sees the surroundings through a semi-transparent projection surface, and the supplementary AR content is integrated directly into the field of vision via this surface. Typical output devices of the optical see-through type are the Microsoft HoloLens and other AR glasses such as the Epson Moverio BT-200 or the Meta 2.

Video see-through

Unlike optical see-through displays, video see-through displays do not allow the user to look through the output device. Instead, the outside world is recorded as video and the AR content is integrated into this video stream. The display of the output device therefore shows not just the AR content but the entire augmented video stream, which creates the impression of being able to see through the device. Typical end devices of the video see-through type are smartphones and tablets, but also AR glasses such as Google Glass or the Vuzix M4000.
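
A compact capture-augment-display loop makes the video see-through principle tangible: every camera frame is augmented and the complete combined stream is shown on the display. The sketch below assumes OpenCV and reuses the hypothetical track() and generate_scene() functions from the sketches above; the camera index and the 'q' quit key are likewise assumptions.

```python
# Minimal video see-through loop: capture a frame, augment it, and show the
# combined video stream (not just the AR content) on the display.
import cv2

camera = cv2.VideoCapture(0)  # device index 0 is an assumption
while True:
    ok, frame = camera.read()
    if not ok:
        break
    result = track(frame)                         # tracking module
    shown = generate_scene(frame, result) if result else frame
    cv2.imshow("video see-through", shown)        # augmented stream on the display
    if cv2.waitKey(1) & 0xFF == ord("q"):         # press 'q' to quit
        break
camera.release()
cv2.destroyAllWindows()
```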

Projection-based AR

In projection-based AR applications, in contrast to video see-through and optical see-through displays, the AR content is projected directly onto a real, physical object. There is no display medium between the user’s eye and the real environment.

Which hardware is suitable?

Which hardware is suitable for an augmented reality project depends largely on the target group, the application context, and the available budget. The overview below shows the pros and cons of the most common AR end devices:


Smartphone/tablet (video see-through)

Pro

  • Widespread end devices
    A large number of potential users already own or can use such devices and are experienced in handling them.
  • Good price-performance ratio

Contra

  • No hands-free use without a special mount


Smart glasses (video see-through)

Pro

  • Hands-free use

Contra

  • Cost
  • Less widely available and less familiar to users
  • Current state of development is insufficient for many application scenarios


Smart glasses (optical see-through)

Pro

  • Hands-free use
  • Content is faded in directly into the field of view

Contra

  • Cost
  • Less widely available and less familiar to users
  • Current state of development is insufficient for many application scenarios
  • Weight

In addition to the hardware and the basic components of an AR application, the authoring software is also of crucial importance.

The different types of authoring software

There are a large number of different providers of authoring software. The offering ranges from comprehensive all-in-one solutions to "open" development environments in which the individual software components can be combined with each other as desired.

Complete solutions such as Reflect One or Vuforia Studio offer a defined development environment in which frequently required functionalities are available in a standardized manner. AR applications can be implemented using such systems without programming skills and with little time required. In addition, they are specially designed for industrial use and offer functionalities for linking to other systems such as CCMS, SAP and PIM. However, the existing functionalities are limited and can usually only be extended by the software manufacturer.

Other development environments such as Unity 3D or Unreal Engine offer significantly more freedom. Different SDKs can be integrated flexibly, there are hardly any restrictions with regard to the available functionalities, and everything can be defined by the user. However, everything must also be defined by the user, which has two disadvantages: on the one hand, creating the application requires significantly more effort; on the other hand, programming skills are mandatory for implementing all functions and interaction options.

Which solution is the most suitable?

Many technological decisions have to be made when designing an AR system. The design of the AR system and the choice of a development environment are key milestones that significantly influence the final result and the way of working. With regard to hardware and software, there are some generally applicable guideline values that can be used for orientation: the processor and graphics unit should be powerful, and the sensors and cameras should capture the real environment as precisely as possible. But what does that mean in concrete terms?

Moreover, such general recommendations do not exist for every aspect. Which setup and development environment is optimal for an AR project depends on many factors, so no blanket answer is possible.

Are you wondering how to build the optimal AR system for your use case and how to combine the individual components? We are happy to support you as a consulting and implementation partner.
