AISEC

Safe and controllable use of AI.

Background and Project Description

Artificial intelligence methods are no longer used only by experts; they are increasingly integrated into everyday applications and consumer products, often by developers with limited AI expertise or through externally sourced components. As the number of AI applications has grown, so too has the number of guidelines and legal regulations governing their use (e.g. the AI Act and the Cyber Resilience Act). These regulations call for transparent processes and require systems to be secure enough to withstand cyberattacks. Although theoretical work on how to meet such requirements already exists, no easy-to-use toolkit is yet available for making modern AI systems controllable and resilient (i.e. resistant to attacks).

Current AI research focuses primarily on explainability and trustworthiness. This includes issues such as understanding how AI systems make decisions, ensuring transparency and comprehensibility, and improving overall system safety. However, academic discussions in these areas often remain theoretical and rarely result in practical tools for detecting attacks or making AI systems controllable. This leaves a significant gap between research and real-world security needs, particularly in critical domains such as medicine, as well as in AI services that support research, information retrieval, and data integration. Accordingly, methods that ensure higher levels of security are urgently needed.

Objectives

The main goal of this project is to explore methods that can make AI systems controllable and resilient. This involves pursuing the following sub-objectives:

  • Conducting a literature review and developing an intelligent, easily extendable knowledge base for risk identification and assessment
  • Analysing potential future scenarios that may emerge from the current state of the art
  • Investigating the measures required for secure data integration
  • Examining how controllable AI can be implemented and applied in real-world environments
  • Considering the associated social, legal, and political risks

Results and Innovation

AISEC gives AI security a new twist by bundling multiple methods into a practical, real-world toolkit that uses modern knowledge structures and analytical approaches. Core elements include knowledge graphs and retrieval-augmented generation (RAG). The latter draws information from external sources and supports AI systems by providing structured and verifiable knowledge.
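
As a rough illustration of the RAG element, the Python sketch below retrieves the best-matching entries from a small in-memory knowledge base and prepends them to a prompt before it is handed to a language model. The knowledge base, the word-overlap ranking, and all names (KNOWLEDGE_BASE, retrieve, build_prompt) are assumptions made for this example; none of them are part of the AISEC toolkit.

```python
# Minimal RAG sketch: query a toy in-memory knowledge base and ground
# the prompt in the retrieved, verifiable facts. All contents and names
# here are illustrative assumptions, not AISEC components.

KNOWLEDGE_BASE = {
    "prompt injection": "Attackers embed instructions in retrieved documents "
                        "to override the system prompt.",
    "data poisoning": "Manipulated training or retrieval data degrades or "
                      "redirects model behaviour.",
    "model inversion": "Queries are crafted to reconstruct sensitive "
                       "training data from model outputs.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank knowledge-base entries by word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE.items(),
        key=lambda item: len(q_words & set(item[0].split())),
        reverse=True,
    )
    return [fact for _, fact in scored[:k]]

def build_prompt(query: str) -> str:
    """Combine retrieved facts with the user question into one prompt."""
    context = "\n".join(f"- {fact}" for fact in retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("How does prompt injection work?"))
```

In a realistic deployment, the word-overlap ranking would be replaced by embedding-based similarity search over a curated knowledge graph, which is what makes the retrieved context structured and verifiable rather than ad hoc.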

Use-case analyses, forensic investigations, and penetration tests (controlled security tests in which experts simulate targeted attacks on a system) help to systematically identify risks, improve understanding of security-critical incidents, and address vulnerabilities in a targeted manner.

Another central focus of the project is the robustness and security of data pipelines, which are often error-prone and vulnerable to attacks. Here, the concept of controllable AI plays a key role by framing AI systems in a way that enables them to detect deviations, respond appropriately, and make these responses transparent and traceable.
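
The sketch below shows, under simplifying assumptions, what such a controllable pipeline stage could look like: incoming values are compared against a reference sample, strong deviations are rejected, and every decision is logged so the response stays transparent and traceable. The z-score check, the threshold, and all names are illustrative choices for this example, not AISEC specifications.

```python
# Illustrative controllable pipeline stage: detect deviations from a
# reference distribution, respond by rejecting them, and log each
# decision for traceability. Threshold and names are assumptions.

import logging
import statistics

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("pipeline")

def check_batch(values: list[float], reference: list[float],
                z_threshold: float = 3.0) -> list[float]:
    """Pass values through; flag and drop those far from the reference."""
    mean = statistics.mean(reference)
    stdev = statistics.stdev(reference)
    accepted = []
    for v in values:
        z = abs(v - mean) / stdev
        if z > z_threshold:
            log.warning("deviation detected: value=%.2f z=%.2f -> rejected",
                        v, z)
        else:
            accepted.append(v)
    log.info("batch processed: %d accepted, %d rejected",
             len(accepted), len(values) - len(accepted))
    return accepted

reference = [10.1, 9.8, 10.3, 9.9, 10.0, 10.2]
print(check_batch([10.0, 9.7, 42.0], reference))
```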

Beyond technical aspects, the project also addresses the social and legal implications of increased AI control. It examines potential side effects of built-in controllability features, such as unintended deviations from a system’s normative behaviour or limitations on its decision-making autonomy.

Want to know more? Feel free to ask.

Senior Researcher, Institute of IT Security Research
Department of Computer Science and Security
Location: B - Campus-Platz 1
P: +43/2742/313 228 696
External project manager
Gerd Brunner
Benjamin Böck
Partners
  • XSEC
Funding
FFG (Bridge)
Runtime
01/01/2026 – 01/31/2028
Status
current
Involved Institutes, Groups and Centers
Research Group Secure Societies
Institute of Creative\Media/Technologies