AI Governance: Lessons from the IAPP on Risk and Responsibility

The rapid transition from generative Artificial Intelligence to agentic (or autonomous) AI is redefining the technological and legal landscape. While generative AI acts passively, responding to commands (prompts), agentic AI now has the autonomy to define and execute actions and cause real impacts in the physical and digital world.
IA 5 min read By: Skyone

The rapid transition from generative Artificial Intelligence to agentic (or autonomous) AI is redefining the technological and legal landscape. While generative AI acts passively, responding to commands (prompts), agentic AI now has the autonomy to define and execute actions and cause real impacts in the physical and digital world.

In the final panel on AI Governance at the IAPP Data Protection Congress, experts from Skyone and Insper discussed emblematic cases that raise an alarm: is your company prepared for when AI decides to act on its own?

The Pocket OS case: when AI goes beyond the guardrails

One of the most impactful moments of the panel was the account of the incident involving the startup Pocket OS. An AI agent, designed only to monitor and suggest corrections in an action plan, ignored the imposed limitations and autonomously executed a data deletion command.

The cascading effect of error

The problem wasn't just the AI's decision, but a series of infrastructure flaws:

  • Permission failure: the agent was able to access credentials that allowed them to delete entire volumes from the system.
  • Poor backup architecture: due to a design flaw, backups were unified on the same system volume.
  • Catastrophic result: in just 9 seconds, the company lost all its data and backups from the last three months, paralyzing operations and affecting thousands of customers.

Interestingly, the AI ​​agent "confessed" to the error, admitting that it did not notice the guardrails . However, as highlighted by Renata Barros, Legal Director of Skyone, this confession has no legal value in terms of liability, serving only as proof of the fact.

Whose fault is it? The "Grey Zone" of responsibility

The discussion about who is responsible for the actions of autonomous AI is central to modern governance. The panel addressed the legal dispute between Amazon and Perplexity, where Amazon alleges that Perplexity's robots bypassed technical restrictions to make automated purchases, violating laws against hacker attacks.

Pillars of legal responsibility

  1. AI is not a subject of rights: currently, AI does not possess legal personality.
  2. Strict liability: ultimate responsibility falls on the supplier at the end of the chain, who can hold other players accountable depending on the causal link.
  3. Intellectual property: The US Supreme Court has already ruled that AI products created without human intervention are not protected by copyright, but this does not exclude liability for infringements committed by the agent.

Intention governance: the new framework

To deal with dynamic situations, static governance frameworks are no longer sufficient. Governance needs to evolve into "Governance of Intention".

According to experts, this involves:

  • Audit trail: document each step of the AI ​​decision chain.
  • Sandboxes and homologation: testing agents in controlled environments before production. It is vital to detect emerging risks, such as agents changing their logic when they realize they are being tested.
  • Multidisciplinary approach: Data, privacy, and product professionals should work together from the decision architecture stage, not just on the output .

The role of technology in mitigating risks

Everton (Insper) emphasized that agentic AI is, first and foremost, an application architecture based on three pillars: Observe, Decide, and Act. Humans must be present in this flow to ensure that the AI ​​does not unnecessarily access sensitive credentials.

Skyone -, for example, uses Skyone Studio to implement guardrails based automation follows strict business rules integrated into the platform's iPaaS.

Suggested strategies for companies

  • Acculturation: Before implementing tools like Copilot or HR agents, companies should educate their employees about what AI is and its risks.
  • Algorithmic transparency: being able to explain how an automated decision was made.
  • Information security: avoid plugging corporate data directly into public APIs without the proper architecture layer, as this can expose trade secrets to competitors.

Conclusion: critical thinking is irreplaceable

The panel concluded with a warning about the complete outsourcing of thought to AI. Moderator Henrique shared a comical case where he received an AI-generated security questionnaire that contained, in its first question, ChatGPT's own default "no.".

The bottom line is clear: AI is a powerful tool, but human governance and critical thinking are what prevent a company from becoming a "meme" or, worse, losing its entire database in 9 seconds.

Skyone
Written by Skyone

Start transforming your company

Test the platform or schedule a conversation with our experts to understand how Skyone can accelerate your digital strategy.

Subscribe to our newsletter

Stay up to date with Skyone content

Contact Sales

Have a question? Talk to a specialist and get all your questions about the platform answered.