Responsible Artificial Intelligence (AI) Procedure

Section 1 - Context

(1) RMIT endorses the considered and ethical use of Artificial Intelligence (AI), acknowledging that it is increasingly part of the future of life and work and presents both opportunities to benefit, and the potential to harm, individuals, communities, and the environment.

(2) RMIT’s endorsed AI assets will not harm human beings or pose unreasonable safety risks. Adoption of RMIT’s endorsed AI assets should be appropriate and proportionate to the legitimate aims being pursued, balancing benefits and risks.

(3) This procedure establishes the ethical principles and risk-based framework that underpin the safe and responsible adoption of AI to support the functions and activities of the RMIT Group.

Section 2 - Authority

(4) Authority for this document is established by the Information Governance Policy.

Section 3 - Scope

(5) This procedure applies to all individuals who use Artificial Intelligence solutions (AI assets) to procure, create, manage, handle, use or process RMIT Group Information, including:

(6) AI assets managed in compliance with the Research Policy are outside the scope of this procedure.

Section 4 - Procedure

Principles

(7) The following principles underpin the responsible adoption of AI across the RMIT Group:

(8) Human, societal, and environmental wellbeing: AI assets should benefit students, staff, researchers, alumni, industry, the environment, and the broader communities in which RMIT operates, in accordance with our mission.

(9) Human-centred values: An AI asset must operate in accordance with human rights and respect and preserve the dignity of individuals and groups.

(10) Fairness: The benefits of AI should be accessible to all students, staff, researchers and the broader communities in which RMIT operates. AI assets should promote fairness, non-discrimination, and diversity.

(11) Transparency: RMIT, including all staff and students, is responsible for providing transparency and reasonable disclosure in the use of AI. It should be clear when and how AI is used in learning, teaching, research, business and operational activities.

(12) Accountability: RMIT staff responsible for the different phases of the AI asset lifecycle should be identifiable and accountable for the outcomes of AI assets, and human oversight and control of AI assets should be maintained.

(13) Contestability: RMIT will ensure timely and just processes exist to challenge the use, consequences and outcomes of an AI asset when it significantly impacts a person, community, group or the environment.

(14) Privacy and Security: RMIT must ensure that any AI asset used by the institution upholds the privacy rights and data protection of AI Consumers, and must ensure the security of data in keeping with policy, the legislation of the region, and the requirements of a public institution.

(15) Reliability: AI assets endorsed by RMIT, whether built in-house or under licence agreements, should be tested to operate reliably in accordance with their intended purpose.

(16) Education and Literacy: AI assets endorsed by RMIT should be accompanied by relevant education on their intended purpose, operations, outcomes and risks to facilitate discussion between educators, researchers, students, and professional staff.

Existing Governance Controls

(17) The AI Governance Framework will be operationalised through existing governance controls and, where needed, further assessment may be undertaken for the context of AI.

(18) The existing risk assessments indicated below must be completed before the AI Governance Framework is initiated. These assessments will also determine whether the initiative introduces an AI component(s):

AI Governance Framework

(19) Once the above governance assessments are completed and the initiative has been determined to introduce AI, it will be assessed for entry into the RMIT Information Domain Register.

Enterprise Catalogue

(20) The RMIT Information Domain Register is used to catalogue AI assets, providing visibility, oversight and accountability across the lifecycle of AI assets.

(21) An initiative which introduces AI will qualify for entry into the RMIT Information Domain Register if RMIT can modify its AI parameters.

(22) The modification of parameters includes activities that RMIT may perform to influence the behaviour and outputs of the AI/Machine Learning (ML) model, including model training, hyperparameter tuning, Large Language Model (LLM) fine-tuning, modifying LLM system prompts, and LLM selection.

(23) Initiatives which introduce AI whose parameters RMIT is unable to modify will be assessed and managed according to the existing Risk Management Policy and related procedures to determine whether the risk is within one or more Domain Risk Appetite Statements.

(24) Initiatives which introduce AI that are assessed as having parameters RMIT can modify will be determined to be AI assets and added to the RMIT Information Domain Register.

(25) The Risk Exposure Tool will be used to assess the level of risk that AI assets within the enterprise catalogue present to RMIT.

Transparency Monitoring

(26) If an AI asset poses a risk, it is categorised as a tier 2 risk within the risk profile of the relevant college, portfolio, or controlled entity as per the Risk Management Policy. Some AI assets may be subject to transparency reporting.

(27) The purpose of a transparency report is to:

Risk-based Oversight and Reporting

(28) Risks associated with AI assets will be reported to governance committees (e.g. Council, Audit and Risk Management Committee, Academic Board) in accordance with the Risk Management Policy.

Roles and Responsibilities

(29) An individual or group assumes the role of AI Sponsor when they initiate, fund, and/or oversee the implementation and adoption of AI assets. AI Sponsors are responsible for:

(30) An individual, group or third party assumes the role of AI Developer when they create, design, build, test, and/or deploy an AI asset. AI Developers are responsible for:

(31) An individual, group or third party assumes the role of AI Operator when they monitor and evaluate or verify an AI asset and its outcomes. AI Operators are responsible for:

(32) An individual assumes the role of an AI Consumer when they interact with, use or benefit from AI assets or AI-generated outputs. AI Consumers are responsible for:

(33) An individual member of the Data and AI Risk Management Stewards Working Group (AISG) is assigned to act in the role of AI Assurer and must be independent from any AI Sponsors, AI Developers and AI Operators. They are responsible for:

Section 5 - Definitions
Artificial Intelligence (AI)
The application of advanced computational methods such as machine learning, deep learning and neural networks to produce outputs, including large language models, which can perform complex, human-like tasks.
AI asset
A technology, application, software or model that performs automated, machine-learning, generative, or combined AI actions, and for which the RMIT Group can modify the parameters, applied to specific processes and functions to achieve a defined purpose.
Modifying the parameters of the AI
Includes activities that RMIT may perform to influence the behaviour and outputs of the AI/Machine Learning (ML) model, including model training, hyperparameter tuning, Large Language Model (LLM) fine-tuning, modifying LLM system prompts, and LLM selection.
(Note: Commonly defined terms are in the RMIT Policy Glossary. Any defined terms below are specific to this policy).