A director sits in an airport lounge, reviewing the deck for tomorrow’s board meeting. The financial projections are dense. She opens ChatGPT and pastes the materials in.
I get it. AI tools are powerful, accessible, and remarkably good at distilling complex information. But when you use them with confidential board materials, you create risks that many directors don’t fully understand - and that your fiduciary duty doesn’t allow you to ignore.
What Actually Happens to That Data
When you paste board materials into a public AI tool, you’re creating an exposure chain:
Training data risk. Unless you’ve specifically opted out or are using an enterprise version, your input may be used to improve the model. That strategic acquisition plan you asked the AI to summarize? It could theoretically become part of the model’s training data. For directors with fiduciary responsibilities to protect confidential information, “theoretically” isn’t good enough.
Storage on systems you don’t control. Your conversation history - including any board materials you shared - lives on servers managed by someone else; the code sketch after this list shows what that data flow looks like mechanically. If your account is compromised through phishing, an attacker could access everything you’ve shared.
Compliance complications. Most organizations have clear policies prohibiting sharing confidential information with unauthorized third parties. Public AI services, despite their utility, are third parties. For public companies, there are additional securities law considerations around material nonpublic information.
Multi-board conflicts. Directors serving on multiple boards who use the same AI tool for different organizations now have confidential information from different companies commingled in a single account. That’s a confidentiality and conflict-of-interest landmine.
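To make that exposure chain concrete: mechanically, pasting text into a chatbot is just an HTTP request carrying the full document to someone else’s servers. The sketch below is illustrative only - the endpoint, request fields, and retention behavior are hypothetical stand-ins, not any specific vendor’s API - but the data flow is the point:

```python
import requests

# Read the confidential text exactly as a director would paste it.
board_deck = open("q3_board_materials.txt").read()

# "Pasting into a chatbot" is, under the hood, a request like this one:
# the full text leaves your device for servers you don't control.
resp = requests.post(
    "https://api.example-ai.com/v1/chat",  # hypothetical third-party endpoint
    json={
        "messages": [{"role": "user", "content": board_deck}],
        # On many consumer tiers, inputs may be retained and reused for
        # model improvement unless you have actively opted out.
    },
    timeout=30,
)
print(resp.json())
```

Once that request is sent, everything downstream - retention, training use, who can see the logs - is governed by the provider’s policies, not yours.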
The Fiduciary Dimension
This isn’t just a technical security issue. It’s a governance failure.
Board materials are among the most sensitive documents a company produces - strategic plans, M&A discussions, executive compensation details, financial projections not yet public. A director who handles these materials carelessly isn’t just creating a data breach risk. They’re violating the trust that makes board governance work.
If management discovers that directors are sharing confidential materials with public AI tools, the board-management relationship erodes. Executives become less candid. Information flows get restricted. The board becomes less effective - exactly the opposite of what the director was trying to achieve by using AI in the first place.
And the personal stakes are real. A director known to have mishandled confidential information faces reputational damage that extends well beyond one board seat.
The Irony: AI Could Help, If Used Right
Here’s what makes this frustrating. The directors using ChatGPT with board materials are doing it because AI genuinely helps with preparation. HBR research found that directors who use AI report significantly improved preparation quality with reduced workload.
The problem isn’t AI. It’s using the wrong AI - consumer tools built for general-purpose use, not the specific security requirements of board governance.
What board-appropriate AI looks like:
Purpose-built architecture. Data isn’t used for training. Information is encrypted at rest and in transit. Access controls are granular. Board-level confidentiality is a core design requirement, not a feature bolted on afterward.
Data isolation. Board materials for different organizations remain completely isolated. No possibility of cross-contamination between board positions.
Audit trails. Comprehensive logging of who accessed what information and when - the governance transparency boards need. One possible shape of such a record is sketched after this list.
Understanding, not analyzing. AI that helps directors comprehend what management has prepared, not tools that invite a parallel, independent analysis. This respects governance boundaries while enhancing preparation quality.
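For concreteness, here is one shape the audit-trail and data-isolation requirements above might take. This is a minimal sketch under assumed names - AuditEvent, assert_same_board, and the field layout are hypothetical, not Aureclar’s or any vendor’s actual schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditEvent:
    """One audit-trail record: who accessed what, for which board, when."""
    director_id: str
    board_id: str       # which organization's materials
    document_id: str
    action: str         # e.g. "viewed_summary", "asked_question"
    timestamp: datetime

def assert_same_board(session_board_id: str, document_board_id: str) -> None:
    """Data isolation: a session bound to one board may never touch
    another board's documents, even for the same director."""
    if session_board_id != document_board_id:
        raise PermissionError("cross-board access blocked")

# A director who sits on two boards gets two fully separate contexts.
assert_same_board("board-acme", "board-acme")  # same board: allowed
event = AuditEvent(
    director_id="dir-041",
    board_id="board-acme",
    document_id="doc-9",
    action="viewed_summary",
    timestamp=datetime.now(timezone.utc),
)
print(event)  # every access leaves a record like this
```

The design point: isolation is enforced structurally, by the board a session is bound to, rather than by asking directors to be careful.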
What to Do Instead
If you’re a director currently using public AI tools with board materials, stop. The convenience isn’t worth the risk.
If you’re a board secretary or chair and you suspect directors are doing this, address it directly. It’s not a technology question - it’s a governance and risk management question.
And if your board needs the preparation benefits that AI provides - which, based on the research, most boards do - invest in tools designed specifically for this use case. Tools where security isn’t a feature but a foundation. Where the AI helps directors understand materials within appropriate governance boundaries. Where confidential information stays confidential.
Your directors deserve better preparation tools. Your organization deserves confidence that its most sensitive information is protected. Both are possible - just not with ChatGPT.
Aureclar provides AI-powered board preparation with governance-grade security. No training on your data. No cross-contamination. No compromise.