Many organisations have solid internal documentation, but as it grows, it becomes harder for people to actually find what they need. Recently, we worked with an Australian energy distributor to solve this by building an AI-powered chatbot that lets employees query their internal wiki in plain language.
The data team had been spending a significant amount of time trawling through documentation to respond to stakeholder questions, verify data flows, and troubleshoot issues. The chatbot was designed to help them reclaim this lost time, streamline knowledge access, and improve the overall stakeholder experience on the platform.
The solution was built entirely in Databricks, which makes working with and deploying AI agents much simpler than many other approaches. The client's data team was already using Databricks for analytics and data engineering, so there was no need to stand up new infrastructure or learn a new platform. It showed just how low the barrier to entry can be for building practical AI agents.

What We Built
The chatbot uses an agentic Retrieval-Augmented Generation (RAG) approach. It draws on the organisation's Azure DevOps Wiki, retrieves the most relevant content (including images, tables and charts), can search for further related content when required, and uses a Large Language Model (LLM) to generate clear, grounded answers.

Everything runs within the Databricks environment:
- Unity Catalog holds cleaned and structured wiki content.
- Vector Search enables fast, semantic retrieval.
- Custom tools enhance retrieval and provide additional context to the agent.
- LangChain and LangGraph manage the agentic flow and tool calls.
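In outline, the retrieve-then-generate loop looks something like the sketch below. This is a self-contained toy illustration, not the production code: the bag-of-words "embedding", the `ToyVectorIndex` class, and the document IDs are all hypothetical stand-ins for Databricks Vector Search and a real embedding model, and in the real system a LangGraph agent decides when to call the search tool again.

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' standing in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class ToyVectorIndex:
    """Stands in for a Databricks Vector Search index over wiki chunks."""
    def __init__(self, docs: dict[str, str]):
        self.docs = {doc_id: (text, embed(text)) for doc_id, text in docs.items()}

    def similarity_search(self, query: str, k: int = 2) -> list[tuple[str, str]]:
        q = embed(query)
        ranked = sorted(self.docs.items(),
                        key=lambda kv: cosine(q, kv[1][1]), reverse=True)
        return [(doc_id, text) for doc_id, (text, _) in ranked[:k]]

def answer(index: ToyVectorIndex, question: str) -> str:
    """Retrieve relevant chunks, then 'generate' an answer grounded in them.
    In the real system an LLM writes the answer and can re-query the tool."""
    hits = index.similarity_search(question)
    context = " ".join(text for _, text in hits)
    return f"Based on the wiki: {context}"

index = ToyVectorIndex({
    "wiki/ingest": "The nightly ingest job loads meter readings into the bronze layer.",
    "wiki/holidays": "Public holidays are maintained in the reference calendar table.",
})
print(answer(index, "Where do meter readings land after the nightly ingest?"))
```

The point of the sketch is the shape of the flow: embed the question, rank stored chunks by similarity, and hand only the top matches to the generation step so the answer stays grounded.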

Databricks also makes it straightforward to use MLflow to monitor the chatbot's performance over time. We could track how well responses aligned with user intent, measure groundedness (was the answer actually drawn from the documentation?), and experiment with improvements without disrupting the deployed chatbot. Having these evaluation capabilities built directly into Databricks and linked to Unity Catalog keeps things transparent and easy to manage.
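To make the idea of a groundedness check concrete, here is a deliberately simple lexical proxy: the fraction of answer sentences that share most of their words with at least one retrieved passage. This is our own illustrative sketch, not MLflow's implementation; MLflow's LLM-judged evaluation metrics are considerably more robust than word overlap.

```python
import re

def groundedness(answer: str, sources: list[str]) -> float:
    """Fraction of answer sentences with substantial word overlap with at
    least one retrieved source passage (a toy lexical proxy for the
    LLM-judged groundedness metrics used in practice)."""
    sentences = [s for s in re.split(r"[.!?]+", answer) if s.strip()]
    if not sentences:
        return 0.0
    source_words = [set(s.lower().split()) for s in sources]

    def supported(sentence: str) -> bool:
        words = set(sentence.lower().split())
        # A sentence counts as grounded if at least half its words
        # appear in some retrieved passage.
        return any(len(words & sw) / max(len(words), 1) >= 0.5
                   for sw in source_words)

    return sum(supported(s) for s in sentences) / len(sentences)

sources = ["The ingest job runs nightly at 2am."]
print(groundedness("The ingest job runs nightly at 2am.", sources))
```

Running a metric like this over logged question-answer pairs is what lets you spot drift or regressions before users do.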

Why It Worked
A large part of the project's success came down to the data preparation phase. A lot of effort went into cleaning, structuring, and enriching the wiki content to make it easy for the chatbot to find the most relevant information. This preprocessing ensured the system understood the context of what it was retrieving, not just the surface text.
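One flavour of that enrichment can be sketched as heading-aware chunking, where each stored chunk carries its page and section titles so retrieval keeps the surrounding context. This is a simplified, hypothetical illustration; the actual pipeline also handles images, tables, and Azure DevOps wiki markup.

```python
def chunk_wiki_page(page_title: str, markdown: str) -> list[dict]:
    """Split a wiki page on H2 headings, prefixing each chunk with its
    page and section titles so embedded chunks retain their context."""
    chunks: list[dict] = []
    section = "Introduction"
    lines: list[str] = []

    def flush():
        body = "\n".join(lines).strip()
        if body:
            chunks.append({
                "title": f"{page_title} > {section}",
                "text": f"{page_title} / {section}: {body}",
            })
        lines.clear()

    for line in markdown.splitlines():
        if line.startswith("## "):
            flush()                      # close the previous section
            section = line[3:].strip()   # start a new one
        else:
            lines.append(line)
    flush()                              # don't forget the final section
    return chunks

chunks = chunk_wiki_page(
    "Data Flows",
    "Intro text.\n## Ingest\nNightly job.\n## Serving\nGold tables.",
)
print([c["title"] for c in chunks])
```

Because each chunk announces where it came from, a retrieved paragraph about "Nightly job" arrives labelled as belonging to the Ingest section of the Data Flows page, which is exactly the context the generation step needs.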
It also helped enormously that the underlying wiki content was already high quality: well-written, up to date, and consistently maintained. There's no point building an advanced retrieval system if the information it draws on isn't correct to begin with. This reinforces that good data is still the most important factor when building AI systems.

Next Steps
The current preprocessing and vector search framework provides a strong foundation for expansion. Because the system already structures and embeds content within Unity Catalog, it can easily incorporate new data sources such as SharePoint documents and other internal repositories without major rework. We could also extend it with secure web search to blend trusted external information with internal knowledge. Integrating the chatbot into Microsoft Teams would make it accessible directly within everyday workflows.

Takeaway
This project showed how accessible AI agent development can be when built on top of existing data platforms. For organisations already using Databricks, there's no reason not to start exploring how these tools can bring their internal knowledge to life.

Start transforming your internal knowledge today
One51 helps organisations turn complex documentation into simple, AI-powered answers—securely, reliably, and using the tools you already trust.