Neuroergonomics and brain-computer interfaces: what are they, and what can you do with them?

Mathias Vukelic, 18 November 2021

Nowadays, it’s hard to imagine everyday life without the help of smart technology. We increasingly encounter technical systems that, within a certain framework, make decisions on their own and perform actions without human intervention, e.g. service and care robots, self-driving vehicles, or chatbots. Sensors, advancing digitalization, and artificial intelligence make an enormous contribution here and can be seen as enabling technologies on the way to ever more "autonomous and intelligent technology". Through such innovations, activities and processes can be increasingly automated. Neuroergonomics ensures that the human does not have to adapt to the technology, but that the technology is adaptive and human-centered. When technology is more responsive to people, its acceptance increases, and with it its effectiveness and success, regardless of industry or use case.

The merging of smart technology into existing living and working environments also brings new challenges and potential for conflict. The value of technology will not lie primarily in taking over human work; rather, the future will be characterized by close cooperation with it. Technology must be designed so that it is generally comprehensible and socially accepted. Neuroergonomics and brain-computer interfaces could become the crucial "translation aids" between humans and machines here. Questions they can help answer include:

  • How can stress levels in different work activities be measured so that they can be counteracted?
  • How can I create a positive user experience for my product or service among customers?
  • How can a robot adapt individually to the person working with it?

As a team, we address these and many other questions around human-technology interaction through neuroergonomics.

What can neuroergonomics achieve?

To address the challenges, problems, and questions of a rapidly changing and increasingly technologized world of work and life, neuroscientific methods are needed, in addition to psychological ones, to measure mental factors of user experience (UX), usability, and technology acceptance. Voice-, gesture-, and even thought-driven technologies are gradually changing not only how we interact with technology but also our skills and capabilities. The practical transfer of neuroscience methods will be of increasing importance to companies seeking to improve product development, process management, and work steps in terms of a positive, human-centered experience.

Against this backdrop, neuroergonomics is a research discipline with great potential, including for commercial practice. The field lies at the interface of psychology, cognitive neuroscience, ergonomics, and artificial intelligence. It pursues two basic goals:

  1. to gain a better understanding of human performance and decision-making abilities and the underlying brain functions,
  2. to promote physical and mental well-being at work and in everyday life.

To this end, our team researches the automatic recognition of cognitive (i.e. thought) and emotional processes in order to use them to improve technology and thus set a new milestone in the interaction between humans and technology. We apply theories and methods of neuroergonomics to understand the underlying psychological processes of everyday life, for example using knowledge of how the human brain works to develop safe and convenient interfaces to technical systems. Application areas include aviation, control and monitoring tasks, (partially) automated driving, marketing and product evaluation, human-computer interaction, and autonomous and adaptive machines or robots. Our research spectrum ranges from human perception and information processing to decision making, mental workload, and emotional experience.

Product, acceptance, and technology evaluation via a neuro-methods box

A central component of our research is mobile, neurophysiological, and camera- or sensor-based measurement methods that allow conclusions to be drawn from the recorded signals about a user's attention, emotions, cognitive load, product effect, acceptance, and general well-being. On this basis, we have developed a flexibly applicable method box, combining methods of neuroergonomics and positive UX, for product and technology evaluation. This allows us to offer evaluations that are as objective as possible, empirically based observations, tailored neuro-user tests, and, if desired, training on the methods, all of which can be used to improve products, services, and workflows. Our approach and results can already be used in numerous application areas, including:

  • User experience testing – e.g. of digital services and user interfaces,
  • Product impact – e.g. of food and consumer products,
  • Health – e.g. mind and body states in patients,
  • Workload – e.g. of operators of safety-critical infrastructures,
  • Human-technology interaction in general – e.g. driver state recognition and collaboration with robots.

Table 1: Overview of the features and application areas of the most common recognition technologies. (Figure from the study "Sensitive Technology")

Table 1 provides an overview of the key technical issues for the practical application of neurotechnologies: How well can each technology be used in a mobile way in everyday life? How much effort is required in preparation? How good is the temporal resolution of measured signals? Advances in measurement and recognition technologies increasingly allow us to make easily manageable measurements. Neurophysiological signals can also be measured outside the laboratory – under real conditions.

Thinking outside the box – brain-computer interfaces

Computers and machines are increasingly able to learn and make decisions through machine learning (ML). Speech, gesture, and facial expression recognition are replacing earlier input devices such as the mouse and keyboard. Advances in neurotechnologies, combined with improved ML algorithms, make it possible to automatically infer actions from recorded neural signals. A key invention here is the brain-computer interface (BCI). A BCI uses sensors to measure brain signals, e.g. via electroencephalography (EEG). ML algorithms process the EEG signals and interpret them so that a computer can derive an action from them. Since its beginnings in the 1970s, BCI research has focused primarily on clinical and medical applications. The main goal was to provide users with physical or sensory impairments with a communication or assistance tool, e.g. computer-based communication or locomotion aids for locked-in or stroke patients.
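To make the decoding idea concrete, here is a deliberately minimal sketch (not our actual pipeline): one second of a simulated EEG channel is reduced to spectral band power, and a trivial rule maps the features to a state. The 10 Hz test signal, the band limits, and the alpha/beta rule are all simplifying assumptions for illustration.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Average spectral power of `signal` in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

# Simulate one second of a single EEG channel at 256 Hz:
# a 10 Hz alpha rhythm (typical of relaxed wakefulness) plus noise.
fs = 256
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(fs)

alpha = band_power(eeg, fs, 8, 13)   # 8-13 Hz alpha band
beta = band_power(eeg, fs, 13, 30)   # 13-30 Hz beta band

# A toy "decoder": dominant alpha suggests a relaxed state.
state = "relaxed" if alpha > beta else "engaged"
```

Real BCIs replace the threshold rule with a trained classifier and use many channels and artifact handling, but the pipeline shape (signal, features, decision) is the same.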

The introduction of further concepts in BCI research paves the way for non-clinical applications. Here, brain signals are recorded continuously and used to detect users' cognitive or emotional processes in real time, so that the system can respond appropriately. The system thus learns from the human what is needed for optimal collaboration and can adapt to the user's abilities and needs. Such BCIs enable direct communication between humans and systems without any special effort on the human's part. They facilitate and improve human-technology interaction and allow the task at hand to be performed as well as possible with the support of a technical system.
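A continuously adapting system of this kind can be sketched as a simple feedback loop; the sliding-window length, the 0.7 workload threshold, and the "simplify_ui" action below are hypothetical choices for illustration, not part of any system described here:

```python
import collections

class PassiveBCILoop:
    """Sketch of a passive BCI loop: collect a sliding window of
    decoded workload estimates and recommend an adaptation when
    the load stays high (all parameters are illustrative)."""

    def __init__(self, window_len=5, threshold=0.7):
        self.window = collections.deque(maxlen=window_len)
        self.threshold = threshold

    def update(self, workload_index):
        """Feed one new workload estimate in [0, 1] from the decoder
        and return the current adaptation recommendation."""
        self.window.append(workload_index)
        if len(self.window) < self.window.maxlen:
            return "keep"                  # not enough evidence yet
        if sum(self.window) / len(self.window) > self.threshold:
            return "simplify_ui"           # sustained high load
        return "keep"

# Feed a stream of decoded workload values into the loop.
bci = PassiveBCILoop()
for w in [0.4, 0.8, 0.9, 0.85, 0.9]:
    action = bci.update(w)
```

The sliding window smooths out momentary spikes in the decoded signal, so the system only adapts to sustained states rather than single noisy estimates.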

Using practical example scenarios and visions of the future, we are researching as a team what potential and added value this offers for the development of adaptive and autonomous assistance systems. Application examples are:

  • physiological monitoring systems for driver condition detection,
  • neuro-adaptive learning assistants and programs,
  • the use of BCI to improve training of robotic behavior during interactions with humans.

And don’t worry: BCIs definitely cannot read minds. The content of a thought and the physiological signal it produces are two entirely different things. So we are still far from science fiction :-).

  • Applied neuroscience: why future technologies will come from brain research
  • NeuroLab of the Fraunhofer IAO
  • Neuroscience for a better UX
  • Services around sensitive technology
  • Blog post: Neuroadaptive technologies – when technology becomes more sensitive
  • Video: "Lab-Live: The NeuroLab in Practice Check."

Mathias Vukelic

Dr. Mathias Vukelic is head of the Applied Neurocognitive Systems team. In his research, he addresses the question of how digital technologies and intelligent human-machine interfaces must be designed so that users can deal with information better, i.e. learn better or make better decisions. Specific questions for him here are how high the cognitive load is and what role emotions play in dealing with technology.
