Governing AI and Using AI for Governance

Over the last year, directors and board advisors have told us there is plenty of noise about AI - but far less meaningful guidance that speaks to the realities of the boardroom and helps directors get comfortable with the unique risks AI introduces.

Most “AI for directors” sessions stop at generic prompting tips, while boards still look for concrete, board-specific approaches to accountability, risk, and opportunity. Here, commercial lawyers Vladimir Kravchenko and Grace Stewart explore current uses and emerging themes, and offer guidance on what we consider a sensible approach to both governing AI and using AI for governance.

From Governing AI to Using AI for Governance

Until recently, directors viewed AI purely as something to be governed: a source of risk to be controlled through policies, assurance, and oversight. Increasingly, it is also a tool for governing - used by directors to interrogate information, pressure-test scenarios, and enrich deliberations.

That duality creates two tasks for boards: governing AI across the enterprise, and using AI for governance (which in turn calls for its own governance).

Current practice reflects this shift. Many directors privately experiment with AI to synthesise papers, surface anomalies, and recall institutional memory, even where formal board adoption lags due to privacy, policy, or AI-specific concerns (explainability, bias, model drift and the like).

The result poses a dilemma: not using AI risks missing opportunities for much better decision making (which is perhaps the greatest risk of all), while using AI without guardrails creates risks including regulatory breaches, over-reliance or improper reliance, and data leakage. The answer is neither blanket adoption nor avoidance, but disciplined, proportionate use supported by policy, education, and assurance.

Governing AI

Directors Governing AI: The Board’s Accountability Never Moved

Boards remain ultimately responsible for governance - AI does not change that; it amplifies it. As AI expands the volume, speed, and granularity of information available, courts and regulators are likely to expect more, not less, of directors’ diligence and judgement. AI outputs are additional sources of information, not conclusions; they must be interrogated, tested, and set within the organisation’s strategy, values, and risk appetite.

In Australia, directors owe a duty of care and diligence under section 180(1) of the Corporations Act 2001 (Cth), which requires them to be appropriately informed. The statutory business judgement rule in section 180(2)-(3) provides a safe harbour for qualifying decisions where directors act in good faith and for a proper purpose, have no material personal interest, inform themselves to the extent they reasonably believe appropriate, and rationally believe the decision is in the company’s best interests. Against that backdrop, AI’s enhanced analytics are likely to raise the bar on what ‘appropriately informed’ entails in practice. In parallel, the defence in section 189 available to directors who rely on expert advice is couched in language designed for human advisers, not machines, which likely makes the defence unavailable unless the legislation is amended to account for AI use, further raising the stakes for directors who treat AI as an adviser.

The typical risks are privacy and data protection, breaches in regulated environments, competition and consumer law, discrimination, IP, confidentiality, and cyber. AI can scale non-compliance across thousands of decisions if models or workflows are misconfigured. Specific regard should be had to regulated environments where standards of ‘transparency’, ‘fairness’, ‘reasonableness’ or similar principle-based tests apply, as it can be difficult to demonstrate that such standards have been met where AI has been used, unless rigorous prior testing and assessments were carried out.

As international regimes tighten (e.g. the EU AI Act’s full, formal enforcement powers for national authorities take effect in August 2026), Australian scrutiny will follow suit. For now, AI use in Australia is governed by existing legislation, and boards should be alert to the risk of systemic regulatory breaches at scale if AI use falls foul of an existing regulatory standard. The consequences can be significant, as seen in recent enforcement actions, such as Westpac’s 2020 penalty of approximately $1.3 billion for systemic breaches of the Anti-Money Laundering and Counter-Terrorism Financing Act 2006 (Cth).

In this context, the board’s role is to insist on quality data, explainability (to a practicable degree), robust ongoing testing to control ‘model drift’, ‘hallucinations’ and ‘biases’, and secure, enterprise-grade deployments.

Using AI for Governance

As of today, the adoption picture is uneven. Collective, board-sanctioned use often lags behind the “shadow” use by individual directors who privately trial AI tools. The barriers include privacy and other regulatory risks, policy constraints, explainability concerns, and fear of eroding management’s role. This two-speed dynamic is likely to change quickly as board portals and secure copilots mature.

The early-value and most popular use cases sit at the lower end of the autonomy spectrum (e.g. transcription of board minutes). In this context, directors should avoid unvetted AI for board minutes or meeting recordings given surveillance, privilege, and discoverability risks. Protocols, consent, and retention policies must be in place before any such use is contemplated.

As more boards and directors experiment with AI, a discernible trajectory is emerging: directors are turning to AI as a “co-pilot”, evidenced by the uptick in AI being used to prepare for meetings, monitor risks, synthesise long papers, query historical decisions, surface information in real time at meetings, and pressure-test scenarios. In strategy, for example, boards can run scenario simulations and “black hat” exercises to stress-test assumptions and identify no-regrets moves, accelerating insight that once took months.

The legal position is consistent. Voting and accountability remain with natural persons, with AI enhancing, not replacing, deliberation. Having said that, there is an emerging view that, over time, failing to consult reliable AI on certain matters could itself breach the duty of care where AI is demonstrably relevant and proportionate. The line between prudent non-use and negligent omission will shift as tools mature. When that happens, it would be prudent for legislators to extend the defences available to directors who rely on expert advice from natural persons under section 189 of the Corporations Act to AI ‘experts’.

Among the many changes AI is introducing for boards, it is important to note the dilution of traditional information asymmetries. AI can allow directors to query internal data directly, access near real-time analytics, and interrogate trends. These capabilities can sharpen oversight but also create tension between the C-suite and the board. Used clumsily, AI can feel like a spotlight that blinds; used wisely, it becomes a guiding beacon. As such, boards should expect management sensitivities and address them openly with protocols that preserve roles while lifting standards.

In light of all of this, the board’s own AI use should be put on the agenda with use priorities and parameters clearly defined.

Practical Tips and Pathways

There is a growing consensus among governance pundits that AI adoption should be led from the top. When boards model safe, strategic use and set expectations for governance, they send a clear signal: AI is an enterprise capability with controls, not a side project or just another tool. The key governance pathways discussed below should be covered.

Organisational Policies for AI

Think in two practical lanes. They are complementary, and many organisations will pursue one, the other, or both. Whatever you choose, treat AI governance like safety: quietly rigorous. Policies should align with secure, enterprise-grade tooling and prohibit public, unvetted systems for confidential content.

  • Uplift existing frameworks. Embed AI into enterprise risk management, data governance, privacy and security controls, model lifecycle oversight, and incident response. Ensure the AI register/inventory captures board, committee, and management use. Add AI to the risk register with thresholds for escalation and failure rehearsals.

  • Create targeted AI policies. Adopt an overarching business AI policy that sets principles, roles, and controls. Draft a distinct policy governing the board’s own use, addressing acceptable use, records management, role-based access, verification, and limitations.

Education and AI Literacy

Directors do not need to be data scientists, but they do need practical fluency.

Priorities include understanding closed deployments; assessing data quality, explainability, bias, model drift and accuracy; designing “human-in-the-loop” checks; and practising governance-specific prompting for scenario planning, stakeholder-lens analysis, and risk ideation.

Workshops that demystify AI-specific risks for directors, or that have directors use vetted tools on real materials, accelerate competence and make oversight credible.

At the same time, it is important for boards to understand the regulatory implications of using AI and the broader (non-legal) risks inherent in its adoption. Depending on the context of a specific organisation, it would be prudent for boards to be educated on:

  • sector-specific regulatory expectations (e.g. how the ‘efficiently, honestly and fairly’ test might apply to AI or how to demonstrate that AI use is ‘transparent’);

  • how international regimes (e.g. the EU AI Act) could affect Australian operations;

  • environmental impact considerations, including energy use, carbon footprint of model training and operation, and sustainability expectations from regulators, investors and customers;

  • workforce and culture impacts, including change management and accountability for “human in the loop”; and

  • contingency planning for AI outages or supplier failure, including contractual risk allocation with AI vendors.

These topics can be incorporated into policy suites, Algorithmic Impact Assessments (AIAs, discussed below), and board education programmes.

Algorithmic Impact Assessments (AIAs)

AIAs are the organisation’s structured, pre-deployment safety check and audit trail for any proposed AI use case. Most boards will be familiar with Privacy Impact Assessments or Environmental Impact Assessments, and so should be comfortable embracing AIAs as an analogous approach to governing AI.

For board use, AIAs should be completed before any board-facing model goes live, and refreshed as things change. An AIA should assess purpose and scope, data sources and provenance, testing results, explainability, bias controls, human oversight points, rights of review and challenge, incident response, and escalation. The outcome of an AIA is the identification and bridging of any risk gaps.

An AIA will not remove all risk, but it will demonstrate independent assessment, enable proportionate reliance, and create evidence to support the business judgement rule, if later tested.

Courts will continue to look for reasonableness anchored in process and documentation. In jurisdictions where reliance defences are narrower or non-existent, detailed records of how directors tested AI advice and applied their own judgement will matter even more. An AIA turns “trust me” into “show me”.

Practical Checklist for Boards

  • Set the tone from the top. Put board AI use on the agenda; agree on priority use cases that augment judgement and do not require perfect accuracy. Model disciplined, documented use.

  • Build the guardrails. Update policies, registers, and retention regimes; require secure, enterprise-grade tools with role-based access; ban public tools for confidential content; and add AI to the risk register with escalation thresholds. Consider whether an AI committee is called for.

  • Invest in literacy. Run hands-on workshops and scenario labs; familiarise directors and/or other key stakeholders with AI-specific risks and ways to manage them.

  • Require AIAs before deployment. Use AIAs to verify purpose, data quality, testing, controls, and human oversight, and to create the evidence file that underpins reasonable reliance and independent assessment.

  • Preserve the board-management line. Use AI to ask better questions and focus on strategy and risk, not to usurp operational decision-making. Align with management on access and cadence to avoid “gotcha” dynamics.

Conclusion: From Caution to Confidence

AI is now part of the director’s toolkit. The challenge is not whether to use it, but how to govern it and how to use it to govern responsibly. Boards that lead from the top, uplift frameworks, set targeted policies, invest in literacy, and mandate AIAs will move beyond the director’s dilemma. They will make faster, better decisions with stronger assurance and clearer evidence of care, diligence, and independence.
