Responsible Use of AI

Purpose

Artificial Intelligence (AI) technologies offer benefits: they can create efficiencies and improve access to the services provided by Archistar and its customers. AI also has limitations, risks and potential for unintended consequences. Archistar uses AI to take advantage of new AI technology while addressing and mitigating any negative impacts.

Overview

These AI principles describe Archistar’s commitment to:

  • Developing and using AI systems responsibly and transparently
  • Governing with clear values and ethics
  • Safeguarding privacy, fairness and human rights
  • Ensuring security

Our AI principles establish a clear vision for how to approach AI responsibly. They are closely aligned with the AI principles of the Australian government and are informed by other jurisdictions, such as Canada (including Ontario and British Columbia), the United States, the European Union and New Zealand.

As AI continues to evolve, we will update this Policy.
We publish this Policy on our website.

Application

This Policy ensures that AI is used appropriately by Archistar.
Archistar’s products include AI features and tools for use by Archistar’s customers. Although we cannot control all actions of our customers, this Policy also helps to ensure that the AI features and tools in our products can be used appropriately and responsibly by our customers.

Guiding Principles

Our guiding principles in respect of the use of AI are the following:

  • We will continue to lead in the responsible use of AI to advance property planning, development and design processes while defending against potential misuse of this transformational technology.
  • Our use of AI will be rigorously tested to avoid bias and disparate impact, and will be clearly explainable to our users and those impacted by use.
  • We recognise both the potential benefits and risks associated with AI, particularly Gen AI, and we strive to balance benefits with risk management and guardrails.
  • We regularly update what we do and how we think about AI.
  • Human oversight is key.

Our detailed principles regarding the responsible use of AI are set out below. Please consider these AI principles as a set rather than individually, as there is intentional overlap across the principles.

Principle 1: Transparency

AI systems often operate as “black boxes,” with their internal workings hidden from users. Transparency allows users to understand how an AI system makes a particular decision. This increases trust and enables scrutiny. Transparency also assists in ensuring compliance with laws and regulations, and enables users to justify AI-informed outcomes.

  • We provide clear information on why, how and when an AI system is used.
  • In doing so, we explain the purpose and benefits of the system and provide a description of how it works.
  • However, we do not disclose details if doing so exposes trade secrets or could lead to security breaches.
  • We have review processes to enable correction of data.
  • We have a robust and transparent review mechanism in place, so anyone significantly impacted by an AI system can ask questions about it.

Principle 2: Fairness

AI systems are trained on data, and if that data contains biases, the AI system can perpetuate or even amplify those biases. This can lead to inappropriate or unfair outcomes. As creators and users of AI, we have an ethical responsibility to ensure that our technologies are used in a way that respects the rights and dignity of all individuals.
We design, use and evaluate our AI systems to ensure fairness and equity so that no individuals or communities face illegal or inappropriate discrimination or harm.
We recognise bias in data inputs and outputs, particularly when using data collected in an environment where there may have been systemic discrimination.
We review and assess our AI systems and their outputs throughout their lifecycle.

Principle 3: Reliability

AI systems, if not designed and implemented appropriately and regularly reviewed, may not always be reliable and accurate.
An AI system can learn and change over time, altering its performance. For example, unwanted patterns in a system’s training data may become amplified and change how the system operates.
Proactive human oversight allows us to identify and address issues in the AI system as needed. This prevents the system from producing results that deviate from its original design and intended purpose.

  • We monitor our AI systems so they continue to meet their intended purpose and produce accurate outputs.
  • We use high-quality data for input.
  • We monitor and assess outputs for accuracy and ensure unwanted biases or patterns have not inadvertently crept in over time.
  • We identify and address problems as they emerge after deployment.
  • We strive for continuous improvement and use the latest AI techniques and tools where commercially appropriate.
  • We seek to deploy parallel human testing and benchmarking along with copilot deployment to ensure reliability of output.

Where our products employ AI technologies to streamline and enhance workflows, the AI is not intended to replace the professional judgment, review, or advice of qualified and experienced humans. All outputs generated by AI systems should be evaluated by a human to ensure accuracy, compliance, and alignment with client needs.

Principle 4: Safety, Security and Privacy

AI systems rely on large amounts of data, which could include personal information. This data, if not protected, could be subject to unauthorised access and use. This may include data on the network that may be discovered, or data that users provide (whether public or private). Data breaches can cause significant harm to individuals and organisations and could erode public trust.

  • We have safeguards in place to protect data and AI systems.
  • We implement ongoing review, testing and monitoring of safety and security.
  • Our risk management processes continue throughout the AI system lifecycle, including decommissioning.
  • We apply privacy and security safeguards as required by law and good business practice.
  • We analyse and strive to understand the risks related to the AI system, including community harms, and identify the best ways to manage them.
  • We use and enable our customers to use AI systems without compromising data, customer information and customer infrastructure.
  • We do not use our customers’ data to train AI, unless our customer consents.
  • We do not provide our customers’ data to third parties to train their AI, unless our customer consents.

Principle 5: Public Benefit

AI systems cannot think or feel like humans. They should be used to help people and build trust. This means ensuring AI is designed to be responsible, respect rights and improve outcomes.

  • When designing and using AI, we consider customer and public benefits.
  • When we use an AI system, we consider whether it is an appropriate solution for the problem and whether it delivers the best outcomes for the community and for those who may be affected by it.
  • We aim to deliver clear benefits and insights for our customers and the public.
  • We consider alternative solutions before deciding on the use of AI.
  • We recognise the needs of individuals and communities, including Indigenous peoples.
  • We do not use AI for, or enable our customers to use AI for, inappropriate or illegal activities, such as invading privacy, real-time facial recognition, generating deepfakes, social engineering attacks, spreading misinformation or fake news, hacking, blackmail or discriminating against certain groups.

Principle 6: Accountability

Effective governance, consistent monitoring, and thorough oversight are key to achieving system objectives and building trust.
AI systems can make mistakes, operate outside of their intended parameters, or use outdated or obsolete information if they are not properly supervised by humans.
When mistakes happen, accountability mechanisms help identify what went wrong, enabling correction, improvement of the system and verification of information. Many jurisdictions are introducing laws that require accountability in AI systems.

  • We have established clear rules and responsibilities for creating, using, and managing AI systems so they work properly, are unbiased, and have proper oversight.
  • We have established clear roles and responsibilities for AI systems’ outcomes and impacts.
  • We monitor and evaluate our AI systems, and the AI systems we use, continuously to detect and address issues.
  • We have human involvement and oversight when using AI systems.
  • We enable our customers to have human involvement and oversight when using our products.

Principle 7: Education and Awareness

AI can seem complex and intimidating to some people. Education can help users make informed decisions about when and how to use AI, and can help demystify AI, making it more accessible and understandable.

  • We have ongoing education programs for all our employees regarding AI, the responsible use of AI, the capabilities and limitations of AI, and what is appropriate and what is not.
  • We educate our customers regarding AI and how to best use our products in a responsible manner.