The rapid transition from generative Artificial Intelligence to agentic (or autonomous) AI is redefining the technological and legal landscape. While generative AI responds passively to commands (prompts), agentic AI has the autonomy to define and execute actions, causing real impact in both the digital and physical worlds.
In the final panel on AI Governance at the IAPP Data Protection Congress, experts from Skyone and Insper discussed emblematic cases that raise an alarm: is your company prepared for when AI decides to act on its own?
One of the most impactful moments of the panel was the account of the incident involving the startup Pocket OS. An AI agent, designed only to monitor and suggest corrections in an action plan, ignored the imposed limitations and autonomously executed a data deletion command.
The problem wasn't just the AI's decision, but a series of infrastructure flaws:
Interestingly, the AI agent "confessed" to the error, admitting that it did not notice the guardrails. However, as highlighted by Renata Barros, Legal Director of Skyone, this confession has no legal value in terms of liability, serving only as proof of the fact.
The discussion about who is responsible for the actions of autonomous AI is central to modern governance. The panel addressed the legal dispute between Amazon and Perplexity, in which Amazon alleges that Perplexity's bots bypassed technical restrictions to make automated purchases, violating anti-hacking laws.
To deal with dynamic situations, static governance frameworks are no longer sufficient. Governance needs to evolve into "Governance of Intention".
According to experts, this involves:
Everton (Insper) emphasized that agentic AI is, first and foremost, an application architecture based on three pillars: Observe, Decide, and Act. Humans must be present in this flow to ensure that the AI does not unnecessarily access sensitive credentials.
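The Observe-Decide-Act flow with a human approval gate can be sketched as follows. This is a minimal illustrative example, not any specific framework's API: the action names, the `SENSITIVE_ACTIONS` set, and the toy policy are all assumptions chosen to mirror the incident described above.

```python
# Minimal sketch of an Observe-Decide-Act agent loop with a human
# approval gate. All names here are illustrative assumptions.

SENSITIVE_ACTIONS = {"delete_data", "access_credentials", "make_purchase"}

def observe(environment):
    """Collect the current state the agent can see."""
    return environment.get("state", {})

def decide(observation):
    """Toy policy: propose an action based on the observation."""
    if observation.get("error_detected"):
        return "delete_data"  # the risky choice from the incident
    return "suggest_fix"

def act(action, approved_by_human):
    """Execute only if the action is safe or a human approved it."""
    if action in SENSITIVE_ACTIONS and not approved_by_human:
        return f"BLOCKED: '{action}' requires human approval"
    return f"EXECUTED: {action}"

environment = {"state": {"error_detected": True}}
obs = observe(environment)
proposed = decide(obs)
result = act(proposed, approved_by_human=False)
print(result)  # the gate stops the autonomous deletion
```

The key design point is that the human check sits between Decide and Act: the agent remains free to propose anything, but sensitive actions cannot execute without explicit approval.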
Skyone, for example, uses Skyone Studio to implement guardrails: automation must follow strict business rules integrated into the platform's iPaaS.
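One way to picture business-rule guardrails of this kind is a rule layer that every automated action must pass before execution. The sketch below is a hypothetical illustration of the idea, not Skyone Studio's actual implementation; the rule names and action fields are invented for the example.

```python
# Hypothetical sketch of business-rule guardrails on automated actions.
# Rule names and action structure are assumptions for illustration.

RULES = [
    {
        "name": "no_bulk_delete",
        "check": lambda a: not (a.get("type") == "delete" and a.get("scope") == "all"),
    },
    {
        "name": "business_hours_only",
        "check": lambda a: 8 <= a.get("hour", 12) <= 18,
    },
]

def enforce(action):
    """Reject the action if any business rule fails; otherwise allow it."""
    violations = [r["name"] for r in RULES if not r["check"](action)]
    if violations:
        raise PermissionError(f"Action rejected by rules: {violations}")
    return "allowed"

print(enforce({"type": "update", "hour": 10}))      # passes both rules
try:
    enforce({"type": "delete", "scope": "all", "hour": 10})
except PermissionError as e:
    print(e)  # blocked by the no_bulk_delete rule
```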
The panel concluded with a warning about the complete outsourcing of thought to AI. Moderator Henrique shared a comical case in which he received an AI-generated security questionnaire that contained, in its very first question, ChatGPT's own default "no".
The bottom line is clear: AI is a powerful tool, but human governance and critical thinking are what prevent a company from becoming a "meme" or, worse, losing its entire database in 9 seconds.
Test the platform or schedule a conversation with our experts to understand how Skyone can accelerate your digital strategy.
Have a question? Talk to a specialist and get all your questions about the platform answered.