1. Purpose

The Artificial Intelligence (AI) usage procedures establish the criteria for the responsible use of AI tools at Swinburne University of Technology. They align with ethical principles, legal requirements, and institutional policies to ensure that AI use supports educational, research, and administrative activities without compromising security, privacy, or fairness.

2. Scope

These procedures apply to all staff, students, researchers, contractors and affiliates (‘users’) using AI tools in any capacity related to Swinburne operations and commercial or academic endeavours, including use by students as part of their learning. AI tools may be used in various scenarios, including but not limited to:

  • Teaching and learning: Enhancing student engagement and success through AI-assisted tutoring, feedback, and content creation.
  • Research: Searching and summarising literature, testing hypotheses, and editing, and supporting the innovation process, e.g. idea generation and the identification of technical solutions.
  • Administrative tasks: Automating routine processes, drafting communications, and improving decision-making.
  • Digital solution development: Assisting with tasks such as software engineering, data pipeline creation, AI model development, testing, and automation.
  • Marketing and communications: Assisting in content creation, campaign optimisation, and maintaining brand consistency and factual accuracy.
  • Operational tasks: Supporting data-driven decision-making and predictive analytics in various university functions.

Users must ensure that AI usage within these scenarios complies with university policies, regulations and relevant legislation and does not introduce security, privacy, ethical, integrity or legal risks. To the extent of any inconsistency between this procedure and university regulations, the university regulations shall prevail.

3. Ethical and responsible AI use

All users should use AI tools to support productivity, learning, research, and innovation within ethical and legal boundaries. 

Integrity and purposeful use

  • Users must ensure AI-generated content complies with Swinburne’s intellectual property and academic integrity policies and regulations.
  • Users must respect intellectual property laws, including Indigenous cultural and intellectual property considerations, and creators’ moral rights of attribution and integrity.
  • Users must ensure AI use is appropriately referenced in accordance with university referencing conventions.

Transparency and accountability 

  • Users may use AI to assist decision-making but not as the sole authority in critical decisions, such as student grading, admissions, and hiring.

Fairness and accuracy

  • Users must assess AI-generated outputs for bias, particularly in hiring, admissions, research, and policymaking.
  • Users must apply human oversight when using AI tools for decision-making, ensuring AI recommendations are reviewed before implementation.

Privacy and data protection

  • Users must use AI in accordance with Swinburne’s Data Classification and Privacy Frameworks.
  • Users must ensure compliance with Swinburne’s data security, ethics, integrity and sovereignty requirements when using AI as part of an external collaboration. AI partnerships should be reviewed for risks related to intellectual property, data sharing and regulatory compliance; accordingly, a Privacy Impact Assessment (PIA), Third Party Risk Assessment (TPRA) and Security Risk Assessment (SRA) must be conducted.
  • Users must ensure compliance with Australian Research Council (ARC), National Health and Medical Research Council (NHMRC), funder and publisher policies on the use of AI in grant applications, peer review and authorship.

4. Conditions of use

It is a condition of access to AI tools that users agree to comply with all university policies relating to the use of information technology systems and data governance, including these guidelines as well as:

4.1 Data Classification

All AI usage must align with Swinburne’s Data Classification and Privacy Frameworks:

  • Public Data: May be shared with any AI system, provided such use complies with other applicable laws, frameworks and policies (e.g. copyright).
  • Internal Data: May only be used with enterprise AI tools.
  • Sensitive or Restricted Data: Must not be entered into any AI system that does not provide enterprise data protection. This includes, but is not limited to:
    • personally identifiable information (PII), i.e. ‘personal information’ as defined in section 3 of the Privacy and Data Protection Act 2014 (Vic)
    • health information, as defined in section 3 of the Health Records Act 2001 (Vic)
    • student records
    • staff details
    • research data with confidentiality requirements
    • commercial or proprietary university data.

A list of approved tools and allowed data classifications will be maintained on the intranet.
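The classification rules above amount to a simple lookup from data classification to permitted tool categories. As a minimal, purely illustrative sketch (the tool-category names and the function below are hypothetical and not part of any Swinburne system; the intranet list of approved tools remains authoritative):

```python
# Illustrative encoding of the Section 4.1 rules. Tool categories are
# hypothetical labels: "enterprise" = enterprise AI services with
# enterprise data protection; "non_enterprise" = free/public AI tools.
ALLOWED_TOOLS = {
    "Public": {"enterprise", "non_enterprise"},   # any AI system, subject to law
    "Internal": {"enterprise"},                   # enterprise AI tools only
    "Sensitive": {"enterprise"},                  # enterprise data protection required
    "Restricted": {"enterprise"},                 # enterprise data protection required
}

def is_use_permitted(classification: str, tool_category: str) -> bool:
    """Return True if data of the given classification may be used with the tool category."""
    return tool_category in ALLOWED_TOOLS.get(classification, set())

# Example: Internal data must not go into a non-enterprise tool.
print(is_use_permitted("Internal", "non_enterprise"))  # False
print(is_use_permitted("Public", "non_enterprise"))    # True
```

Any real check would also need to confirm the specific tool against the approved-tools list and, for Public data, the other applicable laws and policies noted above.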

4.2 Use of AI tools 

Enterprise AI Services (e.g. Microsoft 365 Copilot, Microsoft 365 Copilot Chat)

These services offer enterprise-level security and compliance, do not retain data for training external models, and meet the university’s data sovereignty requirements. They can be used with all classifications of data sensitivity because they provide enterprise data protection. Users must log in with their Swinburne account.

AI functions in Swinburne applications

These AI features assist users of software applications. Users should exercise caution when using data of any classification with these tools, particularly Restricted or Sensitive data.

Non-Enterprise AI Services

Free AI tools are accessible through the internet. However, these tools risk non-compliance with data sovereignty policies because they may retain and use input data for model training, lack enterprise security protections, and store data outside Australia. Users must not use Swinburne data with these tools. Users who log in to Microsoft 365 Copilot Chat without using their Swinburne account must not use Swinburne data, as this use is not covered by enterprise data protection.

5. Compliance and enforcement

Failure to comply with these procedures may result in disciplinary action under Swinburne’s IT acceptable use guidelines. Students who intentionally misuse AI for personal gain may be subject to university sanctions in accordance with university regulations. Users are responsible for understanding the risks associated with AI and adhering to ethical AI usage practices.

Staff and students are encouraged to report suspected misuse of AI. Suspected AI misuse can be reported to the Privacy Officer in the Legal Risk and Compliance office at infoprivacy@swinburne.edu.au.

6. Definitions

Artificial Intelligence (AI)

Artificial Intelligence (AI) refers to computer systems or machines designed by humans to perform tasks that typically require human intelligence, such as perceiving, reasoning, learning, decision-making, and problem-solving. AI systems analyse and interpret data, adapt to new information, and act autonomously to achieve specific goals. This includes specialised domains like machine learning, which enables systems to learn from data without explicit programming; natural language processing, which allows understanding and generation of human language; computer vision, for interpreting visual information; and generative AI, which creates new content based on learned patterns. AI technologies simulate cognitive functions to enhance or automate complex tasks across various applications, from virtual assistants to autonomous vehicles.

Digital NSW. (2024). A common understanding: simplified AI definitions from leading standards. NSW State Government. 

Russell, S., & Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). Pearson.

What is AI (artificial intelligence)? (2024, April 3). McKinsey Blog.

Artificial Intelligence (AI) tool

A software application or system that utilises artificial intelligence techniques to perform tasks that typically require human intelligence. This includes tasks such as learning, reasoning, problem-solving, perception, and language understanding.
Commercial-in-confidence

Commercially sensitive, highly confidential data and information that, if breached owing to accidental or malicious activity, could reasonably be expected to cause serious harm to SUT, another organisation or an individual if released publicly, including: 

• Confidential out-of-court settlements, records affecting national security, protected disclosures, security vulnerabilities and commercially significant research results. 

• Information restricted as a condition of ethics approval. 

• Information that may be commercially valuable (patents, IP, commercialisable information).

Data classification

A process for assessing data sensitivity, measured by the adverse business impact a breach of the data would have upon the University.

Enterprise data protection

A set of controls and commitments designed to protect customer data within a service.

Internal

A data classification whose potential impact on Swinburne if lost or breached would be disruptive. It includes internal reports, documents and files that are not commercially sensitive and do not contain personal information.

Microsoft 365 Copilot

An AI-powered tool that assists with tasks in Microsoft 365 apps like Word, Excel, and Outlook. It provides enterprise data protection to users who log in with their Swinburne account. Users require a paid licence from Swinburne University to access this tool. As Microsoft naming conventions change frequently, this is referred to as ‘Copilot’.

Microsoft 365 Copilot Chat

A chat-based AI tool within Microsoft 365 that uses the web to provide responses. It provides enterprise data protection to users who log in with their Swinburne account. As Microsoft naming conventions frequently change, this is referred to as ‘Copilot Chat’.

Public

A data classification whose potential impact on Swinburne if lost or breached would be minor or positive. It includes education material created for public use, course schedules and catalogues, campus brochures and maps, annual reports, and published journal or research articles.

Restricted

A data classification whose potential impact on Swinburne if lost or breached would be significant. It includes commercial and operational data; non-sensitive personal information; marketing and operational data which supports competitive advantage or service delivery; and management data.

Sensitive

A data classification whose potential impact on Swinburne if lost or breached would be critical or higher. It includes commercially sensitive and highly confidential data; sensitive personal information; sensitive financial data; data relating to University staffing and staff personal and employment records; and data needed to process financial payments.
