Schools have always had libraries. They hold books, databases, catalogues, and quiet spaces for study. But in 2026, the question facing every school is no longer just what a library contains. It is what a library does. At our school, we set out to answer that question by rebuilding ours from the ground up as a digital learning ecosystem, and then paying close attention to what happened when students started using it alongside the most powerful cognitive tools they have ever had access to.

This is a story about two connected pieces of work. The first is the creation of our Digital Library, a platform that goes well beyond resource access to teach students how to think, research, and use AI with genuine intellectual integrity. The second is an emerging system for understanding how students actually use AI at school, not to police them, but to learn alongside them about what good AI-supported learning looks like in practice.

Both projects share a conviction: that leading learning in this moment means doing more than writing policies about what students should not do with AI. It means building the environments and developing the insights that help young people use these tools in ways that are genuinely lifeworthy.

What Every Library Has

Our library's online presence has served the school community for a number of years. Students can search the catalogue, access eBooks and audiobooks through BorrowBox, browse academic databases including JSTOR, World Book, and Google Scholar, and find curated research guides through LibGuides. The library connects students to public library memberships at the State Library of Queensland and Sunshine Coast Libraries, extending access beyond school hours. Students can request new books for the collection. Staff can direct students to ClickView for curriculum-aligned video content.

These are essential foundations. They represent decades of library practice refined for the digital age. But they are also, increasingly, table stakes. Every well-resourced school library offers some version of this. In 2025, we made the decision to move away from the costly packaged platforms that host static library websites and build something custom, something shaped by the philosophy we wanted to embed in a digital resource for our students. The question that drove that redesign was: what else should a library be doing in a world where students carry AI assistants in their pockets?

Our College Research Assistant and the Idea That Libraries Should Teach Thinking

For many years, our College had a research assistant who worked alongside teacher librarians to support student and staff research. When our long-serving and deeply valued research assistant retired in 2024, we knew we had an opportunity. The role had always been important. Now it had the chance to become something transformative. We needed someone who would take the great work that had been built and carry it into a new realm, someone who was asking big questions about AI and its impact on research and learning, not from a place of anxiety but from genuine curiosity about what might be possible.

That is where Sandy Robinson came in. Sandy brings a deep love of traditional library practice and emerging technology, combined with an instinct for building things rather than just curating them. She understood early that the challenge of AI in schools is not primarily a compliance problem. It is a learning design problem. Students do not need to be told to stop using AI. They need to be taught how to use it in ways that make them smarter rather than lazier, more critical rather than more dependent. And the library, she argued, is the natural home for that work, because libraries have always been about the relationship between people and information.

Working with me and other faculty, Sandy designed and built our Digital Library as a platform organised around six interconnected learning hubs. It retains everything a traditional library offers, but layers on top of it a comprehensive framework for AI literacy, research methodology, critical thinking, and subject-specific learning tools.

What a Modern Library Looks Like

The Digital Library is built around a deliberate pedagogical philosophy. AI is embedded in nearly every section, not as a tool to be used uncritically but as something students must understand, evaluate, and use with judgement. The platform teaches frameworks over facts, on the basis that knowing how to think transfers across every subject and every tool, including AI tools that do not yet exist.

The Research Hub is a structured six-module research course, differentiated for Years 7 to 9 and Years 10 to 12, that teaches a complete academic research workflow from first principles through to AI-assisted research with integrity. It includes purpose-built interactive tools: a Truth Terminal for fact-checking AI outputs using the SIFT framework (Stop, Investigate the source, Find better coverage, Trace claims to their origin), a Search Commander for advanced search skills, a Thesis Builder for constructing evidence-based arguments, and multi-database search tools that simultaneously query Google Scholar, Semantic Scholar, and Google Books.
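To make the idea of a multi-database search tool concrete, here is a minimal sketch of the fan-out-and-merge pattern such a tool relies on. This is not the platform's actual implementation: `search_all`, the fetcher signatures, and the result shape are all invented for illustration, and a real build would replace the fetchers with calls to each provider's API.

```python
from concurrent.futures import ThreadPoolExecutor

def search_all(query, sources):
    """Fan one query out to several sources concurrently and merge the
    results, de-duplicating by title. `sources` maps a source name to a
    callable that returns a list of {'title': ..., 'url': ...} dicts."""
    with ThreadPoolExecutor(max_workers=max(1, len(sources))) as pool:
        futures = {name: pool.submit(fn, query) for name, fn in sources.items()}
        merged, seen = [], set()
        for name, future in futures.items():
            for hit in future.result():
                key = hit["title"].strip().lower()
                if key not in seen:  # keep the first copy of each title
                    seen.add(key)
                    merged.append({**hit, "source": name})
    return merged
```

The design point is pedagogical as much as technical: by merging and de-duplicating across sources, the tool shows students that the same work can surface in several databases, which is exactly the cross-checking habit the SIFT framework asks of them.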

The AI Literacy Foundations section teaches students how AI actually works, from neural networks and training data through to the practical reality that AI systems are, as the platform puts it, "built to be people-pleasers" that would rather fabricate a fact than admit ignorance. Students learn about hallucinations, bias, the phantom bibliography problem where AI invents plausible-sounding citations, and the critical importance of tracing every AI-generated claim back to a real, human-authored source.

The Skills Hub presents five cognitive skills as a transferable toolkit: inquiry and investigation, evidence literacy, communication and representation, critical and ethical reasoning, and systems thinking. Each is taught as a set of mental moves that apply across subjects and across tools, recognising that the thinking skills required to evaluate an AI output are the same skills required to evaluate a newspaper article or a peer's argument.

The VERIFY Framework gives students a structured cycle for evaluating AI outputs: Validate the task, Examine the system, Research the source, Investigate author and bias, Filter for fairness, and apply Your final judgement. The EPFL Protocol (Explore, Plan, Find, Learn) wraps every AI interaction in a metacognitive loop: before opening any tool, students declare their intention and current knowledge state; they then Explore the topic with AI as a thinking partner, Plan their research structure, Find credible human-authored sources, and Learn by synthesising understanding that is genuinely their own.

AI is a thinking tool and partner, not a replacement for your brain.

What holds all of this together is a clear institutional philosophy: the student remains the author of their own learning. AI use is not evaluated by which tool was opened. It is evaluated by whether the student's thinking, judgement, and agency were genuinely engaged.

Reading the Data: How Students Actually Use AI

Building the library gave our students the frameworks and tools to use AI well. But it also raised a question we could not answer from philosophy alone: what is actually happening? How are students using AI across the school day? Does their usage align with the kind of learning we are trying to cultivate? And what can the patterns tell us about our own teaching and assessment design?

This is the second piece of work, and it is still emerging. We are developing an AI Learning Insights Dashboard that draws on firewall and gateway data, cross-referenced with curriculum information from our learning management system, to build a picture of how AI tools are being used in the context of what students are actually studying.
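The core data operation the dashboard rests on is a simple join: enriching each gateway log entry with the curriculum context drawn from the learning management system. The sketch below is illustrative only; the field names and schemas are invented, not our actual data model.

```python
def add_context(log_rows, timetable):
    """Attach curriculum context to gateway log rows.

    `log_rows` is a list of dicts with (at least) 'student_id' and 'period'.
    `timetable` maps (student_id, period) -> subject name, as exported from
    the learning management system. Rows with no match are kept but
    labelled 'unknown' rather than dropped, so gaps stay visible."""
    return [
        {**row,
         "subject": timetable.get((row["student_id"], row["period"]), "unknown")}
        for row in log_rows
    ]
```

Keeping unmatched rows visible rather than discarding them reflects the "signals, not certainty" principle: a gap in context is itself information worth seeing.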

The design principles are important. This is not a surveillance system. Five commitments guide it: learning before compliance; patterns over individuals; context before judgement; signals, not certainty; and teacher agency and professional judgement remaining central at every stage.

What the data can show us about good learning. When we see short, frequent AI interactions spread across a unit of work, use well ahead of assessment deadlines, repeated clarification-and-checking behaviour, and cross-tool usage patterns where students move between AI tools and academic databases, we are seeing evidence of self-directed learning. We are seeing AI used as an apprentice rather than a replacement. These are positive signals, and they validate the philosophy underpinning the Digital Library.

What the data can show us about lazy use. When we see heavy generative AI use concentrated in the final hours before a deadline, with no subsequent editing or revision, no engagement with source-verification tools, and single-prompt interactions that end immediately after AI output, we are seeing something different. We are seeing AI used as an answer-generating machine rather than a thinking partner. These patterns do not necessarily indicate academic dishonesty, but they do indicate that the learning we designed for is not happening.
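The contrast between these two usage patterns can be sketched as a simple heuristic. This is purely illustrative: the class, field names, and thresholds below are invented placeholders, not values from our actual dashboard, and any real version would be tuned with teachers and treated as a signal, never a verdict.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Session:
    """One AI-tool session reconstructed from gateway logs (illustrative)."""
    start: datetime
    prompt_count: int           # prompts issued in the session
    followed_by_revision: bool  # later edits to the related document
    used_verification: bool     # visited a source-checking tool afterwards

def classify_usage(sessions, deadline):
    """Toy heuristic labelling a student's sessions for one task.
    Thresholds are arbitrary placeholders, not real dashboard values."""
    if not sessions:
        return "no-ai-use"
    last_minute = [s for s in sessions
                   if deadline - s.start < timedelta(hours=12)]
    spread_out = len(sessions) - len(last_minute)
    single_shot = sum(1 for s in sessions
                      if s.prompt_count == 1
                      and not s.followed_by_revision
                      and not s.used_verification)
    # Mostly early, iterative, verified use: a positive signal.
    if spread_out >= len(sessions) / 2 and single_shot <= len(sessions) / 4:
        return "self-directed"
    # Mostly deadline-concentrated, one-prompt, unverified use: worth a look.
    if len(last_minute) > spread_out and single_shot > len(sessions) / 2:
        return "last-minute"
    return "mixed"
```

Note that the heuristic deliberately returns "mixed" for everything in between: ambiguity is handed back to teacher judgement rather than forced into a label.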

What the data can show us about assessment design. When particular assessment tasks consistently produce spikes in generative AI usage with minimal evidence of genuine student engagement, the problem may not be the students. The problem may be the task. If an assignment can be completed by copy-pasting AI output, then the assignment is testing the wrong thing. The dashboard gives us evidence to have that conversation with faculty, not as blame but as an opportunity to redesign learning experiences that are genuinely resistant to passive AI use because they require thinking that AI cannot do for you.

The distinction is not about which tool is opened. It is about whether the student's thinking, judgement, and agency are genuinely engaged.

The Leadership Lens: Diverse Skills, Open Dialogue, Shared Curiosity

Neither of these projects emerged from a single role or a single team. They emerged from a deliberate leadership approach that values diverse skills and perspectives, and that invites input from staff well beyond the traditional roles we might consult for a given challenge.

The AI Insights Dashboard, for example, did not come from an academic committee. It came from my technical team, people whose daily work involves networks, security, and infrastructure, but who could see that the data passing through their systems contained something worth understanding. Their initial concept became the basis for broader conversations with colleagues across the College: what could this data tell us? What actually matters? How do we ensure the tool serves our philosophy of learning rather than drifting into surveillance?

Those conversations, open and genuine, shaped the dashboard's design principles. And as we moved into the second iteration of the tool, the two products began evolving side by side. Insights from the dashboard informed how Sandy and the team refined the Digital Library. The library's philosophy sharpened how we interpreted the data. Each informed the other, and both kept improving because we were willing to keep asking questions rather than settling for the first answer.

This is what leading learning looks like in practice. It is not a top-down directive about AI policy. It is creating the conditions for a diverse group of professionals to contribute their expertise, to challenge each other's assumptions, and to build something together that none of them could have built alone. A research assistant who sees the future of libraries. A technical team who sees the learning story hidden in network data. Faculty who know their students and their subjects. When you bring these perspectives into genuine dialogue about what matters, you get work that is both philosophically grounded and practically useful.

Two Products, One Conversation

The Digital Library and the Insights Dashboard are two sides of the same coin. The library articulates what we believe good AI-supported learning looks like: intentional, critical, creative, and grounded in human agency. The dashboard shows us whether that vision is translating into practice.

Where the two align, we gain confidence that the frameworks are working. Where they diverge, we gain something equally valuable: evidence that something needs to change, whether that is how we teach a particular skill, how we design a particular assessment, or how we support a particular group of students.

The goal is not to catch students doing the wrong thing. It is to understand what is actually happening so that we can lead learning more thoughtfully. And the curiosity that drives both projects is the same curiosity we want to see in our students: the willingness to ask what is actually going on, to look honestly at what we find, and to keep iterating toward something better.

Lifeworthy Learning

Underneath all of this is a conviction that education in 2026 must be lifeworthy. The skills students develop, the habits of mind they form, and the relationship they build with information and technology need to serve them long after they leave school.

Traditional approaches to assessment, where students reproduce information under controlled conditions, are increasingly misaligned with a world in which AI can generate competent text on demand. The challenge for schools is not to find better ways of detecting AI use. It is to design learning experiences that are worth doing whether AI exists or not, because they develop the thinking, creativity, and judgement that no tool can replicate.

The Digital Library is our attempt to build the infrastructure for that kind of learning. The Insights Dashboard is our attempt to understand whether it is working. Neither is finished. Both are evolving. Both keep the flame of curiosity burning. And together they represent a way of leading learning that takes AI seriously without being paralysed by it, and that keeps the focus where it belongs: on the students and on learning that matters.