“Human experience is what makes AI solutions valuable”
Dr Matthias Peissner, Head of the Human-Technology Interaction Research Unit at the Fraunhofer Institute for Industrial Engineering IAO, explains the potential that Artificial Intelligence (AI) offers the working world and how the development of good AI solutions can succeed.
Interview with Dr Peissner (text version):
Policy Lab: The Fraunhofer Institute for Industrial Engineering IAO is supporting the Civic Innovation Platform. Why is the project of interest to your institute and you personally, and what can you contribute as a cooperation partner?
Dr Peissner: Fraunhofer IAO is a leading institution in the field of the future of work and work structures, and we have been exploring AI for a long time. This means we research solutions ourselves and support companies in introducing AI technologies. In doing so, we always look at the bigger picture: we don’t confine ourselves to economic factors; rather, we see things from the perspective of people and society. For example, I myself have already worked on a platform as part of a large international project concerned with the development of barrier-free IT solutions. We also have our own networks, such as our AI progress centre in Baden-Württemberg. So we are contributing not only knowledge, but also many contacts. After all, platforms thrive on the people who use them and on a high level of interactivity.
Policy Lab: You develop strategies for the digital working world not only with companies of all sizes, but also with institutions and other organisations in the public sector. What role does AI play in this regard? Are there any differences in terms of the requirements?
Dr Peissner: We are currently in a phase of exploration. Companies and institutions want to know what they can do with AI. In most cases, this involves rationalisation and automation. The question of how capacity utilisation and resource requirements can be predicted as reliably as possible also plays an important role – particularly since the outbreak of the coronavirus pandemic. Last year, many companies saw extreme fluctuations in capacity utilisation and, hence, in personnel requirements. This sort of flexible planning is increasingly being sought and, in the long term, can only be tackled through intelligent automation. In all of this, I see hardly any differences between the private sector and public institutions. In the future, however, it will not only be a question of continuously improving or speeding up the way we currently do things, but also, to a far greater extent, of identifying the potential that AI offers for disruptive innovation: what can we achieve by intelligently integrating different data sources, and can we offer interesting services or develop new business models on that basis? Questions such as these will become important.
Policy Lab: Do you have anything concrete in mind?
Dr Peissner: I can offer an analogy rather than an example. Take smartphones and the development that we saw around 10 to 15 years ago. By adding an app store to the phone, Apple suddenly had a completely different product with new business opportunities. That’s the sort of thing I mean. Until now, however, we have had to look across the Atlantic for such innovations.
Policy Lab: How can Germany be reinforced as a technological hub?
Dr Peissner: Fundamentally, I think a lot of good things are already being done. For example, a great deal of effort is going into making German cities more attractive for start-ups. What’s missing is a certain cultural transformation: failure, or simply trying something out, should not be seen as a blot on your curriculum vitae. There’s room for further improvement here.
“We must include the people who will ultimately be using the AI solution in the development and design process. As well as this, the solutions must be explainable. People should understand, at least to some extent, what is ultimately happening. They must also be able to see what data is being used.”
Dr Matthias Peissner
Policy Lab: What hopes are you pinning on AI in terms of a fairer and more social working world?
Dr Peissner: Until now, technology has tended to be an obstacle in the employment market, because many tasks required certain technical skills and knowledge. AI can help to make technology more accessible, for example through gesture recognition or automated translation. This can open up forms of interaction that allow people who speak a different language or have physical limitations to work without restriction, which will reinforce opportunities for employment. AI solutions can also help to put HR decisions on a more neutral, more objective, and fairer basis.
Policy Lab: When it comes to developing AI, what is important for making it work well for people?
Dr Peissner: It’s first and foremost a question of participation. We must include the people who will ultimately be using the AI solution in the development and design process. As well as this, the solutions must be explainable. People should understand, at least to some extent, what is ultimately happening. They must also be able to see what data is being used.
Policy Lab: How can the people concerned help shape AI solutions? Take, for example, an application for greater accessibility – how can people with physical or cognitive impairments be integrated during development?
Dr Peissner: Ideally, a technological development always starts with a precise analysis of the context in which it’s going to be used. What prior knowledge do the people I want to reach have? What desires, goals, and weaknesses do they have? Through interviews and observations, we can better understand what really matters to these people. That’s the first step. But even after that, developers should not withdraw to their ivory towers but provide the target groups with preliminary prototypes and drafts at an early stage and seek their feedback. The idea underlying human-centric design is that future users are an integral part of the design team. There are also ISO standards that define such processes.
Policy Lab: Can you provide any specific examples from your day-to-day work of socially innovative AI applications being used today to systematically optimise the interface between people, organisations, and technology?
Dr Peissner: Gaze control and gesture recognition are already helping people with motor disabilities to use IT solutions. Another important example is knowledge transfer in companies, particularly in industry. Highly knowledgeable employees are an important competitive factor. However, many of them will be retiring over the next few years. Until now, the only option has been to ask these people to share their knowledge so that everyone in the company can benefit and a knowledge database can be built up. Such systems already existed in the 2000s, but they did not work very well because people did not see writing instructions as their primary task. At the same time, there was also some reluctance to share knowledge – you didn’t really want to make yourself dispensable. With AI solutions, we can lower that barrier significantly.
Policy Lab: In what way?
Dr Peissner: Using sensor and software systems, we can track what these experts do to solve problems. Then we can give them the opportunity to add their own comments in voice, video, or text form, for example. The small solution strategies that we capture in this way ultimately feed into a knowledge database, where they are checked for functionality, evaluated, and perhaps even processed on a partially automated basis. This is a good example of what people need to contribute, namely knowledge. Without human knowledge and experience, we will not get very far with even the best AI solutions. However, this calls for a trusted environment. Employees must not be given the feeling that they are handing over their knowledge only to be subsequently shown the door. As in many other cases, this shows that it is not so much a question of technical implementation as of having the right culture.
Policy Lab: You have a degree in psychology. How do you assess the potential of AI applications for helping people by offering them psychological assistance?
Dr Peissner: We are currently working on tracking emotional states in real time. In a work context, such insights can help to structure the working day more effectively or to organise it more consciously. You could, for example, determine the best time for performing a demanding task.
Policy Lab: Are machines able to show “empathy” and do such simulated emotions harbour risks for vulnerable people?
Dr Peissner: I definitely think it would be dangerous to replace a therapy session with an AI conversation. That said, AI could be a helpful addition to support the therapy process between sessions, for example by offering certain exercises and ways of thinking. The question of whether machines will replace people also arises in the nursing field. My opinion is that such discussions are important but should not be allowed to dominate. To repeat: it is not a question of substituting people but of finding ways to use AI systems to give individuals greater autonomy and empowerment. That’s what we should be concentrating on.
Policy Lab: Many thanks for talking to us, Dr Peissner.