Many organizations struggle with complex data challenges. Examples include tracking data usage (both transactional and analytical), properly managing and maintaining historical data, synchronizing source systems, reconstructing events (operational lineage), making data and reports accessible via metadata, streamlining data exchange, and preparing data for AI applications.
Often, the solution is sought in reference architectures based on, for example, a data warehouse, data lake, data lakehouse, or data fabric. While valuable, these architectures do not fully address the challenges mentioned above. They focus only on part of the data journey and fail to solve the core problems.
To truly tackle these challenges, a data architecture must cover the entire data journey: from source to insight. Only a holistic approach can achieve this. During this session, we will discuss a data architecture that spans the full data journey. The previously mentioned architectures may play a role within that architecture, but only as components of a larger whole.
This session will cover, among other topics:
Hallucinations from AI can destroy trust in BI outputs. This technical session walks through building an LLM-powered analytics assistant that only answers from governed, verified data. Using Snowflake Cortex Semantic Models, Cortex Analyst, and Cortex Search, we’ll map business terms to their actual definitions, auto-generate safe SQL, and trace every step for auditability. You’ll see the full stack in action, with architecture diagrams and code patterns you can implement.
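As a flavour of the pattern the session demonstrates, here is a minimal sketch of an assistant asking Cortex Analyst a question against a governed semantic model over Snowflake’s REST API. The account URL, token, stage path, and model file name are placeholders, and the endpoint path, payload shape, and response fields should be checked against the current Snowflake documentation; this is an illustration, not the session’s code.

```python
# Minimal sketch of calling the Cortex Analyst REST API; account URL, token,
# stage path, and response field names below are assumptions to verify against
# the current Snowflake documentation.
import requests

ACCOUNT_URL = "https://<your-account>.snowflakecomputing.com"  # placeholder
TOKEN = "<oauth-or-keypair-jwt>"                               # placeholder

payload = {
    # The question can only be answered in terms of the governed semantic model,
    # which maps business terms to vetted tables, columns, and measures.
    "messages": [
        {
            "role": "user",
            "content": [{"type": "text", "text": "What was net revenue per region last quarter?"}],
        }
    ],
    # Stage path to the semantic model YAML (illustrative name).
    "semantic_model_file": "@analytics.governed.semantic_models/sales_model.yaml",
}

resp = requests.post(
    f"{ACCOUNT_URL}/api/v2/cortex/analyst/message",
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"},
    timeout=60,
)
resp.raise_for_status()

# The response interleaves explanation and generated SQL; persisting both is the
# basis of the audit trail discussed in the session.
for item in resp.json()["message"]["content"]:
    if item["type"] == "sql":
        print("Generated SQL:", item["statement"])
    elif item["type"] == "text":
        print("Explanation:", item["text"])
```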
Key takeaways:
Are your data governance efforts stuck in endless debate cycles, or do they only look good on paper, with little to show for it? The Data Governance Sprint™ is a proven, accelerated method to establish practical data governance foundations in just five weeks. This session introduces a structured, workshop-based approach that moves beyond theory and delivers tangible outcomes: clear roles, a business glossary, an operating model, and early wins that build momentum. Designed for data leaders and practitioners, this methodology helps you overcome alignment struggles, engage stakeholders, and demonstrate measurable progress—fast.
The data lake landscape is undergoing a fundamental transformation. Traditional Hive tables are giving way to a new generation of open table formats—Apache Iceberg, Apache Hudi, Delta Lake, and emerging contenders like DuckDB—each promising to solve the inherent challenges of managing massive datasets at scale.
But which format fits your architecture? This session cuts through the marketing noise to deliver practical insights for data architects and engineers navigating this critical decision. We’ll explore how these formats tackle schema evolution, time travel, ACID transactions, and metadata management differently, and what these differences mean for your data platform’s performance, reliability, and total cost of ownership.
Drawing from real-world implementations, you’ll discover the hidden complexities, unexpected benefits, and common pitfalls of each approach. Whether you’re modernizing legacy Hive infrastructure, building greenfield data lakes, or evaluating lakehouse architectures, you’ll leave with a clear framework for choosing and implementing the right open table format for your specific use case—and the confidence to justify that decision to stakeholders.
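To make the comparison concrete, here is a minimal sketch (not taken from the session) of how the same time-travel read and a schema change look against Delta Lake and Apache Iceberg tables in Spark; table names, paths, and timestamps are illustrative, and the exact SQL syntax depends on your Spark and format versions.

```python
# Minimal sketch: the same time-travel read expressed against Delta Lake and
# Apache Iceberg tables in Spark. Paths, table names, and timestamps are
# illustrative; the SparkSession is assumed to have the Delta and Iceberg
# extensions and catalogs configured already.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Delta Lake: read an older snapshot of the table by commit version.
delta_v3 = (
    spark.read.format("delta")
    .option("versionAsOf", 3)          # snapshot as of commit version 3
    .load("/lake/sales_orders")        # illustrative table path
)

# Apache Iceberg: read the table as of a point in time (Spark 3.3+ SQL syntax).
iceberg_old = spark.sql(
    "SELECT * FROM catalog.sales.orders "
    "TIMESTAMP AS OF '2024-01-01 00:00:00'"
)

# Schema evolution is a metadata operation in both formats, but how the change is
# recorded differs (Iceberg manifest/metadata files vs. Delta's transaction log),
# which drives many of the performance and cost trade-offs discussed here.
spark.sql("ALTER TABLE catalog.sales.orders ADD COLUMN discount_pct DOUBLE")
```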
Highlights:
Data Mesh, coined by Zhamak Dehghani, is a framework for federated data management and governance that gets a lot of attention from large organizations around the world facing problems with bottlenecked data teams and sprawling solution spaces. While the core principles of Data Mesh are well established in the literature, and practical implementation stories have started to emerge, putting meat on the theoretical bones, some questions remain.
One of the biggest challenges is managing business context across multiple domains and data products. In this session, we will discuss how data modeling can be used to enable both within-domain design of understandable and discoverable data products as well as cross-domain understanding of domain boundaries, overlaps, and possibly conflicting business concepts. The well-known best practices of conceptual and logical data models prove their worth in this modern de-centralized framework by enabling semantic interoperability across different data products and domains, as well as allowing the organization to maintain a big picture of their data contents.
Topics and discussion points:
Organisations increasingly rely on data but often lack a clear understanding of what they actually manage: insight is missing, datasets are poorly mapped, and metadata is scattered. Solid data administration provides structure and clarity. In this session, you will discover why this foundation is indispensable — and how to build it in practice.
Topics covered in this session:
We are used to managing data before deploying AI: carefully collecting, cleaning and structuring it. But that is changing. AI now helps to improve data itself: automatically enriching, validating, integrating and documenting it. We are moving from static management to dynamic improvement: AI brings data to life and changes how we deal with it.
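As a small illustration of what this shift can look like in practice (not part of the session material), the sketch below asks an LLM to draft a data-catalog description for a column from its name and a few sample values; the model name and prompt are illustrative, and any chat-completion API could take its place.

```python
# Minimal illustration of "AI improving data": asking an LLM to draft column
# documentation from a column name and a few sample values. The model name and
# prompt are illustrative choices, not a recommendation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_column_description(column_name: str, samples: list[str]) -> str:
    prompt = (
        f"Column name: {column_name}\n"
        f"Sample values: {', '.join(samples)}\n"
        "Write a one-sentence description of this column for a data catalog."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

# Drafts like this still need review by a data steward before they land in the catalog.
print(draft_column_description("cust_dob", ["1987-03-14", "1992-11-02", "1979-06-21"]))
```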
Topics and discussion points:
Our speaker built his first concept model in 1979. It wasn’t very good. In fact, it looked like a hierarchical IMS physical database design. Eventually, over many modelling assignments around the globe, in every kind of organisation and culture, a small number of core principles emerged for effective modelling. All revolve around the idea that we’re modelling for people, not machines. It turns out that even in the age of AI, virtual work, misinformation, and constantly changing technology, these lessons are proving to be just as important as – or even more important than – ever. After all, we’re only human.
1. Data Modelling doesn’t matter (at first) – just start with a nice conversation.
2. Getting to the essence – “What” versus “Who, How, and other distractions.”
3. Good things come to those who wait – why patience is a virtue.
4. Be fearless, and play to your strengths – vulnerability and ignorance.
5. Every picture tells a story, except those that don’t – hire a graphic designer.
6. Bonus – your concept model is good for so much more than “data.”
Data Mesh is a federated approach to data management and governance developed by Zhamak Dehghani. Its structure is based on domains and data products, elements that have also seen wide attention from organizations that are not otherwise working towards a full Mesh implementation. Working with autonomous domains that share data with the rest of the organization via data products is an excellent way to bring data work closer to the business and to allow domain-specific prioritization instead of a massive centralized bottleneck team. However, with each domain having its own understanding of the business and its core concepts, semantic interoperability becomes a challenge.
This workshop focuses on the problems of Information Architecture in a de-centralized landscape. How can we document what data we have available, how do we understand what other teams’ data means, and how do we maintain a big picture of what is where? We will explore conceptual modeling as a key method of documenting the business context and semantics of domains and data products, more detailed logical modeling as a means to document data product structures, and consider both within-domain and cross-domain linking of various models and the objects in them. As a hands-on exercise, we will model a domain and design some example data products that maintain strong links with their domain-level semantics.
The workshop will give you the basic skills to do data modeling at these higher levels of abstraction, and an understanding of the key characteristics and challenges of the Data Mesh that affect the way we need to do data modeling.
Learning objectives
Who is it for
Detailed Course Outline
1. Introduction
2. Data Mesh basics
3. How conceptual models help with cross-domain understanding
4. Hands-on exercise: modeling a domain
5. Data modeling as part of data product design
6. Ensuring semantic interoperability at the domain boundary
7. Data Mesh information architecture operating model
8. Conclusion
Input will follow shortly.
Practical hands-on workshop with exercises that you will run on your own laptop.
Whether you call it a conceptual data model, a domain model, a business object model, or even a “thing model,” the concept model is seeing a worldwide resurgence of interest. Why? Because a concept model is a fundamental technique for improving communication among stakeholders in any sort of initiative. Sadly, that communication often gets lost – in the clouds, in the weeds, or in chasing the latest bright and shiny object. Having experienced this, Business Analysts everywhere are realizing Concept Modelling is a powerful addition to their BA toolkit. This session will even show how a concept model can be used to easily identify use cases, user stories, services, and other functional requirements.
Realizing the value of concept modelling is also, surprisingly, taking hold in the data community. “Surprisingly” because many data practitioners had seen concept modelling as an “old school” technique. Not anymore! In the past few years, data professionals who have seen their big data, data science/AI, data lake, data mesh, data fabric, data lakehouse, etc. efforts fail to deliver expected benefits have realised that this is because those efforts are not based on a shared view of the enterprise and the things it cares about. That’s where concept modelling helps. Data management/governance teams are (or should be!) taking advantage of the current support for Concept Modelling. After all, we can’t manage what hasn’t been modelled!
The Agile community is especially seeing the need for concept modelling. Because Agile is now the default approach, even on enterprise-scale initiatives, Agile teams need more than some user stories on Post-its in their backlog. Concept modelling is being embraced as an essential foundation on which to envision and develop solutions. In all these cases, the key is to see a concept model as a description of a business, not a technical description of a database schema.
This workshop introduces concept modelling from a non-technical perspective, provides tips and guidelines for the analyst, and explores entity-relationship modelling at conceptual and logical levels using techniques that maximise client engagement and understanding. We’ll also look at techniques for facilitating concept modelling sessions (virtually and in person), applying concept modelling within other disciplines (e.g., process change or business analysis), and moving into more complex modelling situations.
Drawing on over forty years of successful consulting and modelling, on projects of every size and type, this session provides proven techniques backed up with current, real-life examples.
Topics include:
Learning Objectives:
Artificial intelligence promises transformative business value, but without strong governance foundations, AI initiatives risk being biased, opaque, or non-compliant. Organizations are increasingly expected—by regulators, customers, and society at large—to ensure AI systems are ethical, explainable, and trustworthy. Yet, most governance efforts remain fragmented: AI governance is treated separately from Responsible AI principles, while Data Governance operates in a silo.
This seminar connects the dots. Participants will gain a comprehensive understanding of how Data Governance underpins Responsible AI, and how AI Governance frameworks operationalize ethics and compliance in practice. Combining strategy, case studies, and hands-on frameworks, the course provides attendees with the tools to design and implement governance approaches that make AI not only innovative, but also reliable and responsible.
Learning Objectives
By the end of this seminar, participants will be able to:
Who is it for?
Part 1 — Foundations & Risks
Part 2 — Frameworks & Practices
Part 3 — Connecting the Dots & Implementation
Also book one of the practical workshops!
Three top-rated international speakers will deliver compelling and very practical post-conference workshops. Conference attendees receive combination discounts, so do not hesitate and book quickly, because attendance at the workshops is limited.
Payment by credit card is also available. Please mention this in the Comment field upon registration; further instructions for credit card payment can be found on our customer service page.
“Good quality content from experienced speakers. Loved it!”
“As always a string of relevant subjects and topics.”
“Longer sessions created room for more depth and dialogue. That is what I appreciate about this summit.”
“Inspiring summit with excellent speakers, covering the topics well and from different angles. Organization and venue: very good!”
“Inspiring and well-organized conference. Present-day topics with many practical guidelines, best practices and do's and don'ts regarding information architecture such as big data, data lakes, data virtualisation and a logical data warehouse.”
“A fun event and you learn a lot!”
“As a BI Consultant I feel inspired to recommend this conference to everyone looking for practical tools to implement a long term BI Customer Service.”
“Very good, as usual!”