
February 11, 2026
AI Development and Risk Management: Navigating Technology and Law
Implementing AI in an organization today is not merely a technical challenge but just as much a legal and security challenge. In a conversation between Lantero and expert Joakim Karlén (held in Swedish), we highlight the complex issues that arise when Large Language Models (LLMs) encounter European legislation such as the GDPR and the new AI Act.
### Innovation in the US, Regulation in the EU
Technological development is largely driven by American companies, but for Swedish and European organizations, local legislation sets the boundaries. Joakim Karlén notes that the current dynamic is challenging because the pace of innovation is lightning-fast while regulation is brand new. There is still a lack of clear legal precedent and court rulings, which places high demands on an organization’s internal capacity for risk analysis.
### The Clash Between GDPR and AI Dynamics
One of the most central questions is how AI systems, which are by nature dynamic and non-deterministic, can live up to the GDPR's requirement for accuracy. Traditional IT systems are static: given a certain input, you know what output you will get. An LLM works differently. Because it simulates human behavior with a degree of randomness, its output is not always predictable. This creates fundamental uncertainty regarding individual rights and the accuracy of the data being processed.
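As a minimal sketch of the non-determinism described above (not taken from the conversation), the toy sampler below shows why the same prompt can yield different answers: the model produces a probability distribution over next tokens, and sampling with a temperature above zero draws from that distribution. The vocabulary and scores are made-up illustrations.
```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float, rng: random.Random) -> str:
    if temperature == 0.0:
        # Greedy decoding: always pick the highest-scoring token (deterministic).
        return max(logits, key=logits.get)
    # Softmax with temperature, then draw a token according to the probabilities.
    scaled = {tok: math.exp(score / temperature) for tok, score in logits.items()}
    total = sum(scaled.values())
    probs = {tok: value / total for tok, value in scaled.items()}
    return rng.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Hypothetical next-token scores for a prompt asking for a customer's address.
logits = {"Storgatan": 2.1, "Kungsgatan": 1.9, "unknown": 1.5}

rng = random.Random()
print("greedy :", [sample_next_token(logits, 0.0, rng) for _ in range(3)])  # always the same
print("sampled:", [sample_next_token(logits, 1.0, rng) for _ in range(3)])  # may vary between runs
```
With greedy decoding the system behaves like a traditional, repeatable IT system; with sampling, two identical requests about the same person can produce different, and possibly inaccurate, statements, which is exactly the accuracy problem the GDPR discussion points to.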
### From Chatbots to Autonomous Agents
We are seeing a clear shift from simple chatbots to autonomous agents capable of performing tasks independently, and this introduces new risk vectors. Joakim emphasizes that an organization deploying an AI system is considered a "deployer" under the AI Act and thus bears the legal responsibility. This becomes particularly critical when agents are given a mandate to act without human intervention. The risk of incorrect decisions or random behavior means that traceability, the ability to explain why a machine acted in a certain way, becomes both a technical and a legal challenge, not least when it comes to cybersecurity.
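One practical way to approach that traceability requirement is an append-only audit trail of every action an agent takes. The sketch below is an illustrative assumption, not a method described in the conversation; the field names and tool identifiers are hypothetical.
```python
import json
import datetime
from dataclasses import dataclass, asdict

@dataclass
class AgentActionRecord:
    timestamp: str        # when the agent acted
    user: str             # who initiated the task
    prompt: str           # the instruction the agent received
    tool: str             # which tool or system the agent called
    arguments: dict       # the exact arguments passed to that tool
    model_rationale: str  # the model's own stated reason, kept for later review

def log_action(record: AgentActionRecord, path: str = "agent_audit.log") -> None:
    # Append-only, one JSON object per line, so every action can be reconstructed afterwards.
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(record), ensure_ascii=False) + "\n")

log_action(AgentActionRecord(
    timestamp=datetime.datetime.now(datetime.timezone.utc).isoformat(),
    user="anna.svensson",
    prompt="Cancel the duplicate invoice for order 1042",
    tool="invoicing.cancel",
    arguments={"invoice_id": "1042-B"},
    model_rationale="Two invoices reference the same order; the newer one is a duplicate.",
))
```
A log like this does not remove the deployer's responsibility, but it makes it possible to answer, after the fact, why the machine acted as it did.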
### Internal Risks and "Oversharing"
While many focus on external hackers, one of the greatest risks is internal. The concept of "oversharing" describes when an AI agent, due to a lack of permission management or classification, gives employees access to sensitive information they are not authorized to see. Protecting the "machine" itself and its access to internal data sources is therefore just as important as protecting the raw data.
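To make the oversharing risk concrete, the sketch below shows one possible permission check applied before any document reaches the agent as context: the employee's clearance is compared with the document's classification. The clearance levels, documents, and helper functions are assumptions for illustration, not an implementation discussed in the conversation.
```python
from dataclasses import dataclass

CLEARANCE_ORDER = ["public", "internal", "confidential", "restricted"]

@dataclass
class Document:
    title: str
    classification: str
    content: str

def allowed(user_clearance: str, doc: Document) -> bool:
    # A user may only see documents at or below their own clearance level.
    return CLEARANCE_ORDER.index(doc.classification) <= CLEARANCE_ORDER.index(user_clearance)

def retrieve_for_agent(query: str, user_clearance: str, corpus: list[Document]) -> list[Document]:
    # Filter on permissions *before* relevance matching, so the agent never
    # sees content the employee is not authorized to see.
    return [doc for doc in corpus
            if allowed(user_clearance, doc) and query.lower() in doc.content.lower()]

corpus = [
    Document("Lunch menu", "public", "Friday salary lunch at noon."),
    Document("Salary review 2026", "restricted", "Individual salary adjustments per employee."),
]
print([d.title for d in retrieve_for_agent("salary", "internal", corpus)])
# Only "Lunch menu" is returned; the restricted salary file is filtered out.
```
The point is that classification and permission management must sit in front of the agent's data access, not only around the raw data stores.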
### Methodology Wins in the Long Run
To succeed, Joakim recommends a methodical approach. Instead of relying on trial and error, organizations should begin with a holistic analysis based on the AI Act, the GDPR, and cybersecurity legislation (NIS2). By understanding the purpose of the technology and maintaining control over the information structure, you can build correctly from the start.
