Supporting Responsible and Effective Use of Artificial Intelligence (AI)
Artificial Intelligence (AI) is shaping many aspects of society, including education. As AI tools continue to evolve, they offer opportunities to support teaching, learning and professional practice, while also raising important considerations related to academic integrity, equity, privacy and ethical use.
These guidelines support educators in making informed, professional decisions about the use of AI tools in YRDSB schools and classrooms.
York Region District School Board (YRDSB) is committed to fostering innovative and inclusive learning environments that prepare our students for a rapidly evolving world. As Artificial Intelligence (AI) continues to reshape various aspects of society, including education, we are providing these guidelines to support educators in harnessing the potential of AI tools.
These AI guidelines directly align with the District Action Plan’s strategic objectives:
- Student Achievement: By embracing AI, we aim to deliver responsive instruction and assessment, provide high-quality instruction and support sustainable professional development for staff. AI tools can assist educators in differentiating curricula, creating engaging learning content and personalizing support.
- Health and Well-Being: These guidelines promote AI integration that contributes to a safe and supportive learning environment for everyone.
- Human Rights and Inclusive Education: These guidelines emphasize advancing equity for students and staff facing systemic barriers, affirming diverse identities and addressing bias.
The purpose of these guidelines is to:
- Provide comprehensive support for educators as they integrate AI tools into their professional practice.
- Clarify the appropriate use of AI to enhance pedagogical strategies and administrative tasks.
- Outline considerations related to academic integrity, data privacy and ethical implications.
Who these guidelines apply to: These guidelines apply to all students, staff and third parties interacting with AI technologies within YRDSB. They cover all board-approved AI systems used for education, administration and operations.
The YRDSB AI Framework supports educators by:
- providing a shared language and structure for AI use
- reinforcing professional judgment and ethical responsibility
- supporting consistency across schools and classrooms
- aligning AI use with board policies and instructional priorities
YRDSB Artificial Intelligence Framework
YRDSB’s AI Framework is grounded in three interconnected areas:
- Knowledge
- Critical Thinking
- Accountability
Together, these guide educators in making informed, ethical, and professional decisions about the use of AI in teaching, learning, assessment and professional practice.
AI sits within a broader ecosystem that includes students, educators and families. Educators play a key role in modelling responsible AI use and supporting students in developing AI literacy, critical thinking and ethical decision-making skills.
The framework is divided into three sections:
1. Knowledge - What is AI, and how can we use it to support our learning?
2. Critical Thinking - AI literacy and critically evaluating AI
3. Accountability - Academic integrity, privacy and security
Here is an overview of AI and generative AI, what they can and cannot do, and how students can use these tools thoughtfully and responsibly in their learning:
Understanding the fundamentals of AI is crucial for its effective and responsible integration into the educational environment.
What is AI?
Artificial Intelligence (AI) is technology that can mimic intelligent human behaviour, such as reasoning, learning and problem-solving. It encompasses a broad field focused on developing intelligent machines capable of performing tasks typically requiring human intelligence.
AI systems can be understood in several broad categories:
Reactive AI
Responds to specific inputs without memory or learning.
Examples include voice assistants and basic help tools.
Predictive AI
Uses data patterns to anticipate likely outcomes.
Examples include predictive text, navigation tools and grammar suggestions.
Generative AI
Creates new content based on user prompts.
Examples include Google Gemini, Microsoft Copilot, Canva AI and MagicSchool AI.
Agentic AI
Performs tasks autonomously or semi-autonomously across systems.
Examples include AI tools that plan, search or execute multi-step actions.
More About Generative AI (GenAI)
GenAI generates original content from user prompts. Content could include text, images, videos, audio, and software code. Tools like Copilot and Gemini are examples of GenAI. GenAI uses large language models (LLMs) trained on vast amounts of data to predict and generate new content.
Generative AI tools operate through a process that includes:
- Training on large datasets to identify patterns.
- Pattern recognition to learn relationships in data.
- Prompting, where users provide instructions or questions.
- Generation of new content based on probability.
- Iteration, where outputs improve through refinement and feedback.
Because these systems rely on patterns rather than understanding, educators and students must critically evaluate outputs for accuracy, bias and relevance.
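The generation step described above, where new content is chosen by probability from patterns learned in training, can be illustrated with a toy sketch. This is a tiny two-word ("bigram") model, far simpler than the large language models these guidelines discuss; the corpus and function names are invented for illustration only.

```python
import random
from collections import defaultdict, Counter

# Toy corpus standing in for the vast training datasets real models use.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": count which word follows which (pattern recognition).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def generate(prompt_word, length=5, seed=0):
    """Generate words one at a time, each chosen by probability from
    the patterns seen in training, not by understanding."""
    rng = random.Random(seed)
    out = [prompt_word]
    for _ in range(length):
        counts = follows.get(out[-1])
        if not counts:  # no known continuation for this word
            break
        words = list(counts.keys())
        weights = list(counts.values())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
```

The sketch shows why outputs must be checked critically: the model produces fluent-looking sequences purely from statistical patterns, with no notion of whether the result is true or appropriate.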
Human and Social Well-Being
A careful approach to AI use supports learning and working while also protecting privacy and avoiding biases and harmful consequences.
Educators are responsible for:
- Engaging with AI in a way that respects the dignity and rights of all individuals, preventing discrimination or inequities based on protected grounds.
- Being aware that GenAI tools embedded in social media or GenAI content can lead to harm, including privacy risks and social-emotional consequences.
- Referring to the Caring and Safe Schools Policy and Procedure #668.0 and other Board policies.
- Clarifying if, when and how AI tools will be used in their classrooms. Appropriate AI use should align with specific activity parameters and objectives.
For Teacher Support, AI can assist with:
- Assessment Considerations: Generating concepts for learning, creating questions and adapting texts for accessibility. Teachers remain responsible for assessment and evaluation for, as and of learning.
- Content Development and Differentiation: Differentiating learning, lesson planning, generating diagrams and charts, and customizing independent practice. Please see the elementary and secondary planning requirements for further information.
- Continuous Professional Development: Recommending teaching strategies, personalizing professional development, suggesting collaborative projects and offering simulation-based training scenarios.
- Research and Resource Compilation: Recommending books or articles and providing updates on teaching techniques and research, to be reviewed critically.
- Text Selection: When considering the use of an AI tool in the classroom, refer to the YRDSB Text Selection Tool, as outlined in Board procedure #NP370.0.
For Student Learning, AI can aid in:
- Creativity: Sparking creativity across subjects like writing, visual arts and music.
- Collaboration: Partnering in group projects by contributing concepts, research support and identifying relationships between information.
- Communication: Offering real-time translation, personalized language exercises and interactive dialogue simulations.
- Content Creation and Enhancement: Generating personalized study materials, summaries, quizzes and visual aids, and helping organize or review content.
- Tutoring: Providing personalized one-to-one learning support.
General Responsible Use - "EVERY" Method:
AI is a helpful tool, but it's not perfect. It can hallucinate, create misinformation or give you an incomplete answer. You are always responsible for the work you submit.
Use the EVERY method to review AI-generated content:
E – Evaluate
Check that your prompt is well-worded. Does the first answer meet your needs?
V – Verify
Check facts, figures, quotes and data using reliable sources. Watch out for mistakes, bias and misinformation.
E – Edit
Refine your prompt and ask follow-up questions to improve the AI's output.
R – Revise
Adjust the results to match your needs, style, and tone. AI gives you a starting point, not the final answer.
Y – You
Are responsible for everything you create with AI. Always be honest about how you are using it.
Promoting AI literacy is central to addressing the risks and developing critical thinking skills for both staff and students.
What Is AI Literacy?
AI literacy refers to the ability to understand, evaluate and responsibly use AI technologies.
AI literacy includes:
- understanding AI concepts and applications
- recognizing capabilities and limitations
- evaluating outputs critically
- using AI ethically and responsibly
- adapting to evolving technologies
- communicating and collaborating effectively
Teaching AI literacy supports student learning, creativity and preparation for future pathways.
Awareness of Bias
GenAI tools are trained on large datasets that include human-created content. As a result, these tools can reflect, amplify or reproduce existing biases present in the data, including cultural, social, historical or systemic biases.
Bias may appear in AI-generated outputs through language, examples, perspectives represented or omitted, and assumptions embedded in responses. These biases may not be intentional, but they can still affect accuracy, fairness and inclusivity.
Educators must continually self-reflect, check and mitigate their own biases, as well as those inherent in any tool or resource. This includes critically reviewing AI-generated content, considering whose voices and perspectives are represented or missing, and supporting students in developing the skills to question and evaluate AI outputs through an equity and inclusion lens.
AI Hallucinations/Deepfakes
An AI hallucination is the phenomenon where a large language model (LLM) generates false, fabricated or misleading information that is presented confidently as if it were factual. Hallucinations may include incorrect facts, invented sources, inaccurate quotations or misleading explanations.
A deepfake is AI-generated synthetic media (video, audio or image) that deceptively shows a person doing or saying something they did not do or say. Deepfakes can be difficult to identify and may be used to misinform, manipulate, or misrepresent individuals or events.
Educators and students should approach AI-generated content with caution and apply critical verification practices, including cross-checking information with reliable sources. Teaching students how to recognize the possibility of hallucinations and manipulated media helps them:
- build media literacy
- use technology responsibly
- make informed decisions in digital spaces
Evaluating AI Outputs Using a Human-Centred Approach
Staff and students must always review and critically assess outputs from AI tools before submission or dissemination. AI-generated content should not be relied upon as authoritative or accurate without human review.
YRDSB promotes a human-initiated, human-led approach to AI use. Educators are encouraged to begin with professional judgment and instructional intent, using AI to support—not replace—thinking and decision-making.
AI-generated outputs should be critically reviewed and refined. Educators should model reflective and ethical AI use for students. AI complements human capability; it does not replace educator expertise.
Critical evaluation includes verifying facts, figures, quotations and data using reliable sources, as well as considering:
- the accuracy and completeness of information
- potential bias or missing perspectives
- appropriateness for the instructional context
- alignment with learning goals and curriculum expectations
This process helps mitigate risks such as misinformation, over-reliance on AI-generated content and inequitable outcomes.
Educator Decision-Making for AI Tool Use
When considering the use of AI tools, educators are encouraged to exercise professional judgment and reflect on the purpose, impact and risks.
Key considerations include:
- Does the task involve sensitive or confidential information?
- Is the AI tool board-approved and assessed through YRDSB’s internal privacy and security processes?
- Does the tool enhance efficiency without undermining professional judgment or pedagogy?
- What level of oversight, verification and refinement is required?
If an AI tool does not meaningfully support professional practice or introduces unnecessary risk, it may not be appropriate for use.
Responsible use of AI tools involves upholding established policies, ethical standards and legal requirements.
Academic Integrity and Professional Responsibility
Please refer to Equitable Assessment, Evaluation and Communication of Student Learning and Achievement (Policy #305.0).
Educators play a key role in:
- modelling ethical AI use
- establishing clear expectations for students
- maintaining transparency in instructional and professional contexts
- ensuring assessment practices accurately reflect student learning
Users must be truthful in crediting sources and tools, and presenting their own work for evaluation. Claiming AI-generated content as one's own intellectual property violates academic integrity.
There must be clear communication and understanding of expectations between students and educators when AI is being used.
Educators remain accountable for all instructional decisions and outputs, including those supported by AI tools.
Responding to Academic Dishonesty
When academic dishonesty is confirmed, educators follow established board procedures that focus on learning, support and accountability. In accordance with Policy and Procedure #305.2, educators shall:
- communicate findings with the student;
- provide the student an opportunity to resubmit the task (in whole, or in part) or complete an alternative task;
- provide support in response to student need, taking into consideration the grade level of the student, the maturity of the student, the number and frequency of incidents, and the individual circumstances of the student;
- ensure the mark assigned for the resubmitted task reflects the quality and proficiency demonstrated in relation to the overall curriculum expectations; marks cannot be deducted as a punishment for academic dishonesty, but may be considered in the evaluation of learning skills and work habits; and
- communicate with the family/caregiver and school administration.
This approach reinforces academic integrity while supporting student growth and learning.
Assessment and Evaluation Considerations
In conjunction with Equitable Assessment, Evaluation and Communication of Student Learning and Achievement (Policy #305.0), artificial intelligence can help educators with planning, instruction and assessment. The following points need to be considered:
- Professional judgment must remain central and irreplaceable in assessing and evaluating student learning, whether it is Assessment for, as or of Learning, of curricular expectations and/or learning skills and work habits.
- Student work should not be entered into AI for assessment and evaluation.
- Equitable assessment is flexible. Not all students need to demonstrate learning in the same way. Educators should focus on observations, conversations, and products to assess student learning.
- Designing authentic and relevant assessment and evaluation opportunities mitigates the effects of students using AI to generate their work.
Privacy, Security and Data Protection
Educators are responsible for ensuring that AI use aligns with board expectations for privacy and security.
This includes:
- using only board-approved AI tools
- removing personal or identifiable information
- understanding how data is handled and protected
- modelling safe digital practices for students
When using generative AI systems, educators must:
- never use personal identifiable information (PII), including name, grade, address, birthdate, demographics, student number, voice or face
- limit the amount of data shared and exclude identifiable information (e.g., names, addresses, birthdays, class lists, marks) from prompts
- not use any personal, private, confidential or sensitive board data, information, or content as part of prompts
- use only Board-approved tools, including Microsoft Copilot and Google Gemini
- use approved enterprise tools that meet YRDSB privacy and security standards and do not use entered data to train AI models.
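The habit of stripping identifiers before prompting can be sketched in code. This is an illustrative example only, not a board-provided tool: the patterns and names are invented, and no automated check replaces the educator's own review of a prompt before it is sent.

```python
import re

def redact(prompt, known_names):
    """Minimal sketch: remove obvious identifiers from a prompt before it
    is sent to a generative AI tool. Patterns here are illustrative only;
    a human must still review the prompt for anything the rules miss."""
    cleaned = prompt
    # Replace any known names (e.g., from a class list) with a placeholder.
    for name in known_names:
        cleaned = re.sub(re.escape(name), "[student]", cleaned,
                         flags=re.IGNORECASE)
    # Redact strings that look like student numbers (assumed 9 digits here).
    cleaned = re.sub(r"\b\d{9}\b", "[id]", cleaned)
    # Redact dates that could be birthdates (YYYY-MM-DD).
    cleaned = re.sub(r"\b\d{4}-\d{2}-\d{2}\b", "[date]", cleaned)
    return cleaned

print(redact("Write feedback for Alex Tan, student 123456789, born 2012-05-01.",
             ["Alex Tan"]))
```

The placeholder text keeps the prompt useful (the AI can still draft generic feedback) while the identifying details never leave the educator's device.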
Security and Cybersecurity
- AI tools must be used in ways that respect privacy and security.
- Only digital tools, including AI tools, that have undergone YRDSB’s internal privacy and security assessment and received approval may be used.
- AI systems must be evaluated for compliance with relevant laws, regulations and standards related to data protection, privacy and students' online safety.
These guidelines will be reviewed regularly to ensure they remain relevant as technology changes and continue to comply with laws, policies and regulations.
Educator AI Checklist
Use this checklist to guide responsible, ethical, and effective use of AI tools at school.
1. Knowledge
☐ Understand what AI is and how it functions, and use only Board-approved tools.
☐ Be transparent with students and families about how and when you use AI in your classroom.
☐ Use AI to support, not replace, teaching, learning and administrative tasks.
2. Critical Thinking
☐ Model and teach critical thinking when using AI outputs.
☐ Always review, verify and revise AI-generated content.
☐ Address bias and ensure AI use promotes equity and inclusion.
3. Accountability
☐ Uphold academic integrity by ensuring students credit sources and do not submit AI-generated work as their own.
☐ Be transparent about the use of AI in learning and assessment.
☐ Follow YRDSB policies for assessment, evaluation and communication.
☐ Guide students in responsible and ethical use of AI.
☐ Never enter personal identifiable information (PII) into AI tools.
AI4K12 Initiative. "Five Big Ideas in Artificial Intelligence Poster." AI4K12 Initiative, AI4K12, ai4k12.org/wp-content/uploads/AI4K12_Five_Big_Ideas_Poster_v2.pdf.
AI Guidelines - January 2025. York Region District School Board, Jan. 2025. https://yrdsb.sharepoint.com/sites/CIS-DigitalLiteracy/SitePages/Artificial-Intelligence.aspx
Empowering Learners for the Age of AI: An AI Literacy Framework for Primary and Secondary Education. OECD, May 2025. Review Draft. https://ailiteracyframework.org/wp-content/uploads/2025/05/AILitFramework_ReviewDraft.pdf
Guidelines For Responsible Use of Generative Artificial Intelligence. OASBO Joint Collaborative Committee. https://www.ecno.org/services-programs/oasbo-generative-ai/
Commonwealth of Australia. Australian Framework for Generative Artificial Intelligence (AI) in Schools. 17 November 2023. https://www.education.gov.au/schooling/resources/australian-framework-generative-artificial-intelligence-ai-schools
Ontario's Trustworthy AI Framework. Government of Ontario, 2024. https://www.ontario.ca/page/responsible-use-artificial-intelligence-directive
Ottawa Catholic School Board. Ottawa Student Guidelines (K-12). https://www.ocsb.ca/why-ocsb/humane-use-of-technology/artificial-intelligence-at-the-ocsb/
Simcoe County District School Board. Simcoe Educator Guidelines. https://www.scdsb.on.ca/elementary/use_of_technology_for_learning/artificial_intelligence
AI For Education. AI in Education: What Parent and Caregivers Should Know. 2025. https://static1.squarespace.com/static/64398599b0c21f1705fb8fb3/t/682f3b81d1f455327f5ab6e1/1747925889680/AI+in+Education+What+Parents+%26+Caregivers+Should+Know+%284%29.pdf
Waterloo Catholic District School Board. WCDSB Guidelines for Generative AI Student Use. 20 Mar. 2024. https://drive.google.com/file/d/14O6oVVIApHBk3y-9yYFaoLyOzrGkVYG3/view
York Region District School Board. Policy and Procedure #194.0, Appropriate Use of Technology. 2025, www2.yrdsb.ca/PP194-AppropriateUseOfTechnology
This guide was created with the assistance of an AI language model. The content has been thoroughly reviewed, edited and refined by staff of the York Region District School Board.
Images used are rightfully licensed and are not AI generated.
