AI in the Netherlands: Talent, data and responsibility

“The Netherlands Organisation for Applied Scientific Research [TNO] is the innovation engine of the Netherlands,” said Peter Werkhoven, chief scientist at TNO and professor at Utrecht University. “We turn good science into good application. We were founded by law in 1932, and we solve social and economic challenges through innovation. Our role in the value chain is to take good academic research and bring it to a level where it can be used by industry and government.”

A case in point: TNO is part of a small consortium that began developing a national artificial intelligence (AI) strategy for the Netherlands about six years ago. The goal is to put the country on a path to benefiting from AI. “Like many new technologies, AI falls into an innovation paradox,” Werkhoven said. “Universities have a lot of knowledge, but industry cannot turn that knowledge into value without help. This is where we come in with applied research.”

The consortium created a plan to fill three gaps in the Netherlands. The first two directly concern industry players: a lack of AI talent and a lack of data to train AI. The third concerns governments, up to the EU level: AI must be used more responsibly. “We wrote the plan, and it was funded,” Werkhoven said.

As part of implementing that strategy, TNO not only connects universities, industry and government, but also brings the technology to a level where it can be used. One technology developed in this context is called hybrid AI, which combines machine learning with machine reasoning. Hybrid AI uses symbolic reasoning in addition to deep learning. This means it learns patterns, as today’s deep learning systems do, but it can also reason and explain why it does what it does.
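
To make the idea concrete, here is a minimal sketch of how a hybrid system can pair a learned model with a symbolic layer that produces explanations. The braking scenario, the toy risk function and the thresholds are illustrative assumptions, not TNO’s actual system:

```python
# A minimal sketch of the hybrid AI idea: a learned component produces a
# score, and an explicit symbolic layer turns it into an explainable
# decision. The risk function and thresholds are illustrative assumptions.

def learned_risk_score(distance_m: float, speed_kmh: float) -> float:
    """Stand-in for a pattern-learning model: collision risk in [0, 1].
    A real system would run a trained neural network here."""
    return max(0.0, min(1.0, speed_kmh / (distance_m * 10 + 1)))

def decide_brake(distance_m: float, speed_kmh: float) -> tuple[bool, str]:
    """Combine the learned score with symbolic rules that yield a reason."""
    risk = learned_risk_score(distance_m, speed_kmh)
    # Symbolic layer: explicit, human-readable rules over the learned output.
    if risk > 0.8:
        return True, f"Brake: learned risk {risk:.2f} exceeds hard limit 0.8"
    if risk > 0.5 and distance_m < 20:
        return True, f"Brake: moderate risk {risk:.2f} and obstacle within 20 m"
    return False, f"No braking: risk {risk:.2f} is within tolerated bounds"

brake, explanation = decide_brake(distance_m=15, speed_kmh=80)
print(brake, "-", explanation)
```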

“Autonomous cars and autonomous weapon systems won’t reach their full potential until they incorporate human behavioral criteria into their decision-making – and until they can explain what they’re doing,” Werkhoven said. “First, we need to give AI the same goals and an adequate set of human values. Second, we want machines to explain themselves. We need to close the gaps in accountability and responsibility. This is why we need hybrid AI.”

TNO, for example, is working on hybrid AI for autonomous cars and for mobility as a service, which can be personalized for citizens using more complex information systems. TNO also works on predictive maintenance, using digital twins to predict a bridge’s defects – and its overall life expectancy – from the sensor data streaming in from the bridge in real time. This allows maintenance to be scheduled at the optimal time – not too late and not too early.
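
As a rough illustration of how such a twin could time maintenance, the sketch below extrapolates a degradation trend from sensor readings and estimates when a failure threshold will be reached. The linear-trend model, the threshold and the synthetic readings are illustrative assumptions, not TNO’s actual twin:

```python
# A minimal sketch of trend-based predictive maintenance, assuming a single
# degradation indicator (e.g. strain drift) tracked daily by bridge sensors.

from statistics import mean

def remaining_days(readings: list[float], failure_level: float) -> float:
    """Fit a least-squares linear trend to daily readings and extrapolate
    how many days remain until the failure threshold is reached."""
    days = list(range(len(readings)))
    d_mean, r_mean = mean(days), mean(readings)
    slope = sum((d - d_mean) * (r - r_mean) for d, r in zip(days, readings)) \
        / sum((d - d_mean) ** 2 for d in days)
    if slope <= 0:
        return float("inf")  # no measurable degradation trend
    return (failure_level - readings[-1]) / slope

# Daily strain-drift readings from the bridge's sensors (synthetic data).
history = [0.10, 0.12, 0.15, 0.19, 0.24, 0.30]
days_left = remaining_days(history, failure_level=1.0)
print(f"About {days_left:.0f} days to threshold; schedule maintenance before then.")
```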

In the energy domain, TNO is working on smart energy grids, which match supply with demand. In healthcare, it is using AI to provide personalized lifestyle interventions. Many diseases are related to lifestyle and can be treated or prevented by helping people adjust it. A system cannot suggest the same thing to every person: advice should be personalized, based on a combination of lifestyle and health data. AI identifies patterns in the combined data sets, while protecting privacy through secure data-sharing technology.
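
To illustrate the supply-demand matching idea in the simplest possible terms, the sketch below shifts a flexible load to the hour with the largest supply surplus. The hourly figures and the greedy strategy are illustrative assumptions, not TNO’s grid technology:

```python
# A minimal sketch of smart-grid load shifting: run a flexible load in the
# hour where forecast supply exceeds inflexible demand by the most.

def best_hour_for_load(supply_kw: list[float], base_load_kw: list[float]) -> int:
    """Return the hour with the largest supply surplus."""
    surplus = [s - b for s, b in zip(supply_kw, base_load_kw)]
    return max(range(len(surplus)), key=surplus.__getitem__)

solar = [3.0, 5.5, 8.0, 6.0]   # forecast supply per hour, in kW
homes = [2.5, 3.0, 3.5, 4.0]   # inflexible household demand per hour, in kW
print(f"Run the flexible load at hour {best_hour_for_load(solar, homes)}")  # hour 2
```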

Morality and reason

The work in the Netherlands shines a light on two of the most pressing issues around AI. These are not just matters of technology, but questions of morality and explanation. In the healthcare sector, many experiments are using AI to diagnose diseases and recommend treatments. But AI does not yet understand ethical considerations, and it cannot explain itself.

These two factors are also critical in domains outside of healthcare, including self-driving cars and autonomous weapon systems. “If we want to see the full potential of AI in all these application domains, these issues must be resolved,” Werkhoven said. “AI applications must reflect societal values.”

The first question is how to express human moral values in ways that can be interpreted by machines. In other words, how can we create mathematical models that reflect the morality of society? The second big question is at least as important – and perhaps even more difficult: what exactly are our values as a society? People disagree on the answer.

“In the medical world, there is more agreement than in other domains,” Werkhoven said. “They create moral and ethical frameworks. During the pandemic, we came close to having to apply those values when maximum care capacity was reached. These moral frameworks must represent the moral values of society in relation to a given situation.”

Beyond morality is explanation. AI systems go beyond traditional rules-based programming, where coders explicitly encode the decisions a program makes. Anyone who wants to know why a traditional application makes a certain decision can look at the source code and find out. It can be complicated, but the answer is in the program.
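
The contrast is easy to see in a hypothetical rules-based example: the criterion behind the decision is right there in the source. The loan rule below is an illustrative assumption, not a real system:

```python
# In rules-based code, the reason for a decision is readable in the program
# itself. This hypothetical loan rule makes the criterion explicit.

def approve_loan(income: float, debt: float) -> bool:
    # The decision criterion is visible: approve when the debt-to-income
    # ratio is below 0.4. Anyone can read why a case passes or fails.
    return income > 0 and debt / income < 0.4

print(approve_loan(income=50_000, debt=15_000))  # True: ratio 0.3 is below 0.4
```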

In contrast, the neural network type of AI learns from large data sets, which must be carefully curated so that the algorithm learns the right things. The neural networks created during the learning phase are then deployed in the field, where they make decisions based on the patterns they learned from the training data. It is almost impossible to look at a neural network and know how it makes a given decision.

Explainable AI aims to close this gap, providing ways for algorithms to explain their decision-making. A major challenge is to develop a way to communicate the explanation so that people can understand it. AI may have reasons that are logically correct but too complex for humans to understand.
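
One common explainable-AI technique is perturbation-based attribution: probe the black box by nudging one input at a time and measuring how the output moves. The sketch below uses that approach; the stand-in scoring function and the feature names are illustrative assumptions, not a real clinical model:

```python
# A minimal sketch of perturbation-based attribution for a black-box model.
# The tiny scoring function stands in for an opaque trained network.

def black_box(features: dict[str, float]) -> float:
    """Stand-in for an opaque model (e.g. a trained neural network)."""
    return 0.7 * features["blood_pressure"] + 0.3 * features["age"]

def explain(model, features: dict[str, float], delta: float = 1.0) -> dict[str, float]:
    """Estimate each feature's influence by nudging it by `delta`
    and measuring how much the model's output changes."""
    base = model(features)
    influence = {}
    for name in features:
        perturbed = dict(features, **{name: features[name] + delta})
        influence[name] = model(perturbed) - base
    return influence

patient = {"blood_pressure": 1.4, "age": 0.6}
print(explain(black_box, patient))  # roughly {'blood_pressure': 0.7, 'age': 0.3}
```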

“We now have AIs like ChatGPT that can explain things to us and give us advice,” Werkhoven said. “If we can no longer understand its explanations but we still follow the advice, we may be entering a new stage of human evolution. We may begin to design our environments without the slightest idea of why or how.”
