1. Purpose

These guidelines establish guardrails for responsible adoption of AI tools at Swinburne University. They align with ethical principles, legal requirements, and institutional policies to ensure that AI use supports educational, research, and administrative activities without compromising security, privacy, or fairness.  

These guidelines should be read in conjunction with the AI Usage Procedures and the Data Classification Framework [staff login required].

2. Scope

These guidelines apply to all staff, students, researchers, contractors and affiliates (‘users’) using AI tools in any capacity related to Swinburne operations and commercial/academic endeavours, including use by students as part of their learning. 

Users must ensure that AI usage within these scenarios complies with university policies, regulations and relevant legislation and does not introduce security, privacy, ethical, integrity or legal risks. To the extent of any inconsistency between these guidelines and university regulations, the university regulations prevail.

3. Conditions of use

It is a condition of access to AI tools that users agree to comply with these guidelines and all other university policies relating to the use of information technology systems and data governance.

4. Checklist for use of AI tools

Individuals using AI tools at Swinburne University must follow the guidelines below to ensure compliance:

4.1. Data governance and privacy

  • Check that the proposed use of data is permitted under the Privacy Collection Notice.
  • Classify data that will be used in the tool with the Data Classification Framework and follow the ‘Use of AI tools’ section of the AI Usage Procedures when entering data into an AI service.
  • Ensure the proposed use of the tool does not breach commercial-in-confidence obligations, copyright, Swinburne Library's Online Resources Terms of Use, or other commercial licence terms.
  • Ensure the proposed use complies with Swinburne’s Indigenous Data Sovereignty Guidelines.

4.2. Validation

  • Check the AI tool’s outputs for accuracy, bias, and ethical concerns.

4.3. Transparency and user communication

  • Inform end-users about AI-enabled decisions, ensure AI-generated content is transparently labelled and disclosed by acknowledging the name of the AI source and the date the AI was accessed, and provide clear channels for stakeholders to challenge AI system use or outcomes. For example, include email contact details that end-users can use to query decisions made about them.
  • For students, this means using and acknowledging generative AI in accordance with academic integrity requirements.

4.4. Compliance and record-keeping

  • Maintain records of all assessments, decisions, and actions related to the AI tool, ensuring documentation is comprehensive and accessible for audits and reviews.

5. Checklist for implementation of AI tools

Individuals seeking to implement or develop AI tools at Swinburne University must complete the following checklist to ensure compliance with these guidelines:

5.1. Initial assessment 

  • Define the purpose and scope of the AI tool. 
  • Identify the stakeholders and their needs. 
  • Determine whether the AI tool aligns with the university's strategic goals and values.
  • Check compliance with regulators’ published AI good-practice guidance.

5.2. Risk and impact assessment 

  • Ensure procurement of any new tools follows the University’s Procurement Procedure, undergoes appropriate risk assessments to identify potential harms and mitigations, and complies with data sovereignty requirements, including those governing data storage. Depending on the use case and the data involved, a Privacy Impact Assessment (PIA), Third Party Risk Assessment (TPRA) or Security Risk Assessment (SRA) may be required. The Privacy Impact Assessment form is available on the Privacy webpage and is reviewed by the Legal, Risk and Compliance Team; the risk assessment forms are available in the Cybersecurity section of the Staff Service Portal and are reviewed by the Cybersecurity Team.
  • Ensure that AI functions released within existing software applications undergo appropriate risk assessments.
  • Evaluate the AI tool's impact on stakeholders, including potential biases, ethical concerns, and overall fairness. 
  • Document the risk and impact assessment findings provided by the above privacy and security assessments.

5.3. Data governance and privacy 

  • Obtain approval from Data Stewards to use Swinburne data with the tool by submitting a data access request via the Data and AI Team wiki.
  • Communicate clearly to end users whether the tool has been endorsed for use with sensitive, restricted or personally identifiable information, based on the outcomes of the above privacy and security assessments and the data governance request.
  • Verify the AI tool’s data governance measures, including data quality, provenance, and cybersecurity practices. Use data validation, cleansing, and standardisation processes to maintain data quality; an illustrative sketch follows this list.
  • Document data governance and privacy measures, including which user groups or roles have been endorsed to access and use the tool, and any measures that protect personal information. Document the data sources and transformations.
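The following is a minimal sketch of what the validation, cleansing and standardisation step might look like in practice. Every column name, rule and value here is a hypothetical illustration, not a field or check prescribed by these guidelines (assumes Python with pandas 2.0 or later):

    import pandas as pd

    # Hypothetical enquiry records; the columns and rules below are
    # illustrative examples only, not checks prescribed by these guidelines.
    records = pd.DataFrame({
        "student_id": ["1001", "1002", None, "1003"],
        "email": ["a@swin.example", "B@SWIN.EXAMPLE", "c@swin.example", "not-an-email"],
        "enquiry_date": ["2024-03-01", "02/03/2024", "2024-03-02", "2024-03-03"],
    })

    # Validation: reject rows missing a student ID or carrying a malformed email.
    valid = records["student_id"].notna() & records["email"].str.contains(
        r"^[^@\s]+@[^@\s]+\.[^@\s]+$", na=False
    )
    quarantined = records[~valid]   # set aside for manual review
    clean = records[valid].copy()

    # Cleansing and standardisation: normalise case and coerce dates to a
    # single datetime type (format="mixed" requires pandas >= 2.0).
    clean["email"] = clean["email"].str.lower()
    clean["enquiry_date"] = pd.to_datetime(clean["enquiry_date"], format="mixed", dayfirst=True)

    print(f"{len(clean)} rows accepted, {len(quarantined)} quarantined for review")

Quarantining rejected rows, rather than silently dropping them, also preserves a record that supports the audits described in section 4.4.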

5.4. Testing and validation 

  • Develop clear and measurable acceptance criteria for the AI tool, including metrics for accuracy, response time, throughput, availability, fairness, and transparency. For example, “the AI tool should provide explanations for its decisions in 95% of cases”.
  • Test the AI tool against the acceptance criteria to ensure it meets performance, reliability, and ethical standards.
  • Ensure that testing includes scenarios to verify the tool’s adherence to ethical use and responsible output. This includes confirming that the tool applies consistent criteria and does not introduce bias into the assessment process based on gender, race, age, disability, or any other protected characteristic. Document testing and validation results. An illustrative sketch of such checks follows this list.
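A minimal, hypothetical sketch of how acceptance criteria like those above might be checked automatically is shown below. The test records, thresholds and group labels are illustrative assumptions, not values mandated by these guidelines (Python):

    # Illustrative acceptance test for a hypothetical scoring tool. Every
    # record and threshold below is an example, not a requirement of these
    # guidelines.
    from collections import defaultdict

    test_cases = [
        # (protected_group, model_score, expected_outcome, explanation_given)
        ("group_a", 0.91, True, True),
        ("group_a", 0.40, False, True),
        ("group_b", 0.88, True, True),
        ("group_b", 0.35, False, True),
    ]

    correct = sum((score >= 0.5) == expected for _, score, expected, _ in test_cases)
    accuracy = correct / len(test_cases)
    explanation_rate = sum(expl for *_, expl in test_cases) / len(test_cases)

    # Fairness check: positive-outcome rates should be consistent across groups.
    outcomes = defaultdict(list)
    for group, score, *_ in test_cases:
        outcomes[group].append(score >= 0.5)
    positive_rates = {g: sum(v) / len(v) for g, v in outcomes.items()}
    disparity = max(positive_rates.values()) - min(positive_rates.values())

    assert accuracy >= 0.95, f"accuracy {accuracy:.0%} is below the acceptance criterion"
    assert explanation_rate >= 0.95, f"explanations in only {explanation_rate:.0%} of cases"
    assert disparity <= 0.05, f"outcome disparity across groups is {disparity:.0%}"
    print("All acceptance criteria met")

Expressing each criterion as an assertion makes the acceptance test repeatable and produces a record for the testing and validation documentation required above.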

5.5. Release approval

  • All AI tools developed in-house by Swinburne staff or students and intended for implementation at the university must be endorsed by the AI Working Group prior to release.
  • Each AI-based initiative must be classified as High Risk or Low Risk; its scope, definition, criteria, safeguards, and treatment must be provided as part of the release approval process.

5.6. Implementation and monitoring 

  • Assign accountability for the AI tool's performance and oversight. 
  • Implement the AI tool in a Swinburne environment managed by IT, ensuring staff are trained in its use.
  • Establish monitoring mechanisms to continuously evaluate the AI tool’s performance and adherence to standards. This may include automated monitoring and user feedback; a sketch of one such mechanism follows this list.
  • Regularly review and update the AI tool based on monitoring feedback.
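One possible shape for an automated monitoring mechanism is sketched below: a rolling check of user-feedback ratings that raises an alert when quality drifts below a threshold. The window size, threshold and simulated feedback stream are hypothetical examples, not values set by these guidelines (Python):

    from collections import deque

    # Hypothetical monitor: track recent user-feedback ratings for an AI tool
    # and alert when the rolling average drops below a threshold. The window
    # size, threshold and simulated feedback are illustrative assumptions.
    WINDOW = 50
    THRESHOLD = 0.8
    recent = deque(maxlen=WINDOW)

    def record_feedback(helpful: bool) -> None:
        """Record one helpful/unhelpful rating and alert on quality drift."""
        recent.append(1.0 if helpful else 0.0)
        if len(recent) == WINDOW:
            rolling = sum(recent) / WINDOW
            if rolling < THRESHOLD:
                # In practice this would notify the accountable owner.
                print(f"ALERT: rolling helpfulness {rolling:.0%} is below {THRESHOLD:.0%}")

    # Simulate steady positive feedback followed by a period of degradation.
    for i in range(200):
        record_feedback(helpful=(i < 100 or i % 2 == 0))

An objective trigger of this kind gives the accountable owner a concrete prompt for the reviews and updates described above.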

5.7. Transparency and user communication 

  • Document transparency and communication measures.

6. Definitions

Artificial Intelligence (AI): Computer systems or machines designed by humans to perform tasks that typically require human intelligence, such as perceiving, reasoning, learning, decision-making, and problem-solving. AI systems analyse and interpret data, adapt to new information, and act autonomously to achieve specific goals. This includes specialised domains like machine learning, which enables systems to learn from data without explicit programming; natural language processing, which allows understanding and generation of human language; computer vision, for interpreting visual information; and generative AI, which creates new content based on learned patterns. AI technologies simulate cognitive functions to enhance or automate complex tasks across various applications, from virtual assistants to autonomous vehicles.

Sources: Digital NSW. (2024). A common understanding: Simplified AI definitions from leading standards. NSW State Government; Russell, S., & Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). Pearson; McKinsey. (2024, April 3). What is AI (artificial intelligence)? McKinsey Blog.

Artificial Intelligence (AI) tool: A software application or system that utilises artificial intelligence techniques to perform tasks that typically require human intelligence. This includes tasks such as learning, reasoning, problem-solving, perception, and language understanding.

Commercial-in-confidence: Commercially sensitive, highly confidential data and information that, if breached owing to accidental or malicious activity, could reasonably be expected to cause serious harm to SUT, another organisation or an individual if released publicly, including:

• Confidential out-of-court settlements, records affecting national security, protected disclosures, security vulnerabilities and commercially significant research results.

• Information restricted as a condition of ethics approval.

• Information that may be commercially valuable (patents, IP, commercialisable information).

Data classification: A process for assessing data sensitivity, measured by the adverse business impact a breach of the data would have upon the University.

Enterprise data protection: A set of controls and commitments designed to protect customer data within a service.

Internal: A data classification whose potential impact on Swinburne if lost or breached would be disruptive. It includes internal reports, documents and files that are not commercially sensitive and do not contain personal information.

Microsoft 365 Copilot: An AI-powered tool that assists with tasks in Microsoft 365 apps like Word, Excel, and Outlook. It provides enterprise data protection to users who log in with their Swinburne account. Users require a paid licence from Swinburne University to access this tool. As Microsoft naming conventions change frequently, this is referred to as ‘Copilot’.

Microsoft 365 Copilot Chat: A chat-based AI tool within Microsoft 365 that uses the web to provide responses. It provides enterprise data protection to users who log in with their Swinburne account. As Microsoft naming conventions frequently change, this is referred to as ‘Copilot Chat’.

Public: A data classification whose potential impact on Swinburne if lost or breached would be minor or positive. It includes education material created for public use, course schedules and catalogues, campus brochures and maps, annual reports, and published journal or research articles.

Restricted: A data classification whose potential impact on Swinburne if lost or breached would be significant. It includes commercial and operational data; non-sensitive personal information; marketing and operational data which supports competitive advantage or service delivery; and management data.

Sensitive: A data classification whose potential impact on Swinburne if lost or breached would be critical or higher. It includes commercially sensitive and highly confidential data; sensitive personal information; sensitive financial data; data relating to University staffing and staff personal and employment records; and data needed to process financial payments.
