When it comes to AI, governance and regulation are still in their early stages. The first three considerations in the AI governance framework should be knowledge, strategic alignment and accountability.
An organisation must possess sufficient knowledge to oversee its use of AI effectively, develop AI objectives that are compatible with its wider strategic direction and ensure that senior, named individuals are ultimately accountable for the success, failure, benefits and harms of AI use.
A formal AI management framework will support organisational leaders in overseeing the use of AI.
Knowledge
Organisational leaders need to ensure they have sufficient domain knowledge to play their part in AI governance. This will include:
- Keeping informed about any use of AI within their organisation.
- Keeping up to date with new use cases and technological advances; this may involve a formal horizon-scanning programme or regular briefing by experts.
- As an AI programme is put in place, collecting feedback from people involved with the organisation's AI systems with regard to beneficial outputs, potential risks and actual harms, ethical dilemmas, and operational barriers to success.
- Acting on this feedback so that the way AI is responsibly managed can be improved (a structured record for capturing such feedback is sketched after this list).
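To make the feedback loop above concrete, here is a minimal sketch of what a structured feedback record could look like, assuming a simple Python data model. The class and field names (e.g. `AIFeedbackRecord`, `FeedbackCategory`) are hypothetical illustrations, not part of any standard:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class FeedbackCategory(Enum):
    """Categories mirroring the bullet above; the names are illustrative, not prescribed."""
    BENEFICIAL_OUTPUT = "beneficial output"
    POTENTIAL_RISK = "potential risk"
    ACTUAL_HARM = "actual harm"
    ETHICAL_DILEMMA = "ethical dilemma"
    OPERATIONAL_BARRIER = "operational barrier"

@dataclass
class AIFeedbackRecord:
    """One item of feedback about an AI system, kept so leaders can review and learn from it."""
    system_name: str                 # which AI system the feedback concerns
    raised_by_role: str              # role (not name) of the person raising it
    category: FeedbackCategory
    description: str
    date_raised: date = field(default_factory=date.today)
    reviewed_by_leadership: bool = False  # flipped once leadership has considered it
```

Keeping each item categorised in this way makes it easier to spot patterns across systems rather than treating feedback as one-off anecdotes.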
Strategic Alignment
Allowing enthusiasts to experiment early with different AI systems is perfectly sensible while the potential of AI is still being explored. However, organisational leaders should quickly move to a situation where investments in AI:
- Align with the organisation's strategic goals and ethical positioning.
- Take account of the needs and expectations of stakeholders.
- Protect against any disruption to those goals posed by competitors' use of AI.
In addition, leaders should look out for and respond to any indications that a particular AI system could be deliberately or accidentally used in circumstances beyond those originally conceived and in ways that could be beneficial or harmful.
Accountability and Responsibility
Accountability structures should be in place so that an appropriate senior person or body (e.g. the board) is formally held accountable for the outcomes of the use of AI. There should be no attempt to shift blame onto technology, junior employees (unless they have been negligent), or the organisation as a whole if things go wrong.
A separate team, generally less senior, should be made responsible for the operation of AI, meaning it is tasked with ensuring that AI is implemented effectively and ethically. Even where the use of AI is purely experimental, this should involve planning so that resources can be allocated, success metrics defined, any necessary training provided, and processes developed for identifying and managing ethical problems and any risks that emerge.
"Technology" cannot be held responsible for poor systems, processes or security.
While people who are responsible for AI can also be accountable, a clear distinction should be made between the two concepts. Responsible individuals should be given the opportunity to succeed (e.g. through training), and accountable individuals should know that they will be judged on how well they have supported the individuals responsible for AI and how they have promoted its use and management. A simple register illustrating this separation is sketched below.
There should be strong, documented lines of communication between senior management, accountable individuals and the teams responsible for AI. This communication should function in both directions, with responsible individuals able to raise concerns with leadership.
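As a hypothetical illustration of this separation, an organisation could keep a simple register that names an accountable individual and a responsible team for each AI system, alongside the success metrics, training and risks discussed above. Everything in this sketch (the class, its fields and the example values) is invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRegisterEntry:
    """Register entry that keeps accountability and responsibility explicitly separate."""
    system_name: str
    accountable_owner: str            # senior, named individual answerable for outcomes
    responsible_team: str             # team tasked with effective, ethical operation
    success_metrics: list[str] = field(default_factory=list)
    known_risks: list[str] = field(default_factory=list)
    training_provided: bool = False   # responsible people given the chance to succeed

# Hypothetical usage: the owner and team are distinct by design, and neither is "technology".
entry = AISystemRegisterEntry(
    system_name="invoice-triage-model",
    accountable_owner="Chief Operating Officer",
    responsible_team="Finance Automation Team",
    success_metrics=["processing time", "error rate vs. manual baseline"],
)
```

A register like this makes it hard to blame "the system" when something goes wrong: every entry names a person who is answerable and a team that was resourced to succeed.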