Azure Synapse SQL Pools Unleashed!
Are you ready to unlock the massive power of Synapse SQL Pools? Do you want to see some of the features that are coming and what impact they will have on your workloads?
Join us to learn and apply the best strategy when it comes to ingesting and consuming your data with Azure Synapse! The day will be packed with tips and tricks for your Data Warehouse from two folks who work on the largest deployments in the world!
In the morning we will ease into Synapse, explaining all the key features; from there we will deep-dive into the Polaris engine and how it changes the game. The focus of the morning will be on Dedicated SQL pools and how to get the best performance out of them. We will look at loading patterns and ways to optimize them, table distribution issues, security features, statistics handling, workload management, streaming data and indexing strategies!
After lunch we handle the other side of our Synapse pool environment: Serverless. We will explain how it works and how to optimize the reading patterns. We will touch upon Delta Lake and how it fits in, Parquet file structure, flat-file differences, security features, partitioning strategies, optimized data loading with Spark and more.
After this day you will be able to tackle any issue you face in building a performant, cost-efficient and dynamic data warehouse with Azure Synapse SQL Pools.
Silverstone · Mon 09:00 – 17:00
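For a flavour of the loading patterns this precon covers, here is a minimal sketch of ingesting Parquet files into a dedicated SQL pool with COPY INTO; the table name, storage URL and choice of managed-identity authentication are illustrative assumptions, not taken from the session itself:

```sql
-- Minimal COPY INTO load of Parquet files into a dedicated SQL pool table.
-- dbo.FactSales and the storage URL are hypothetical.
COPY INTO dbo.FactSales
FROM 'https://mystorage.blob.core.windows.net/landing/sales/*.parquet'
WITH (
    FILE_TYPE  = 'PARQUET',
    CREDENTIAL = (IDENTITY = 'Managed Identity')  -- a SAS token or storage key also works
);
```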
Data Goblin in-a-Day: Hands-On Power BI Report Design & User Co-Creation
Due to the interactive nature of the session, participation is limited to 12 attendees. It is recommended that participants have a basic technical grasp of Power BI and are comfortable doing group work with others.
A successful reporting project is about more than ticking boxes next to a requirements list. To design and launch a successful Power BI report, there must be sufficient understanding of the business users, their needs and how they use and think about the data. This is best achieved by co-creation: working with the users to design something that gets closer to the target. In practice, however, this is easier said than done.
The objective of this session is to learn about user engagement and report design practices both from prepared content and each other. Attendees will work in groups to create and present a business case proposal and Power BI report design using sample data and user co-creation.
This session will thus be an interactive, full-day workshop exploring how to engage with users and work hands-on with Power BI report design best practices. Attendees will be presented with a business case and sample data, then work in groups to engage with fictional user personas during interactive, role-played workshops. During the session, attendees will have to overcome typical reporting project challenges such as data availability, quality, unclear or changing requirements, and technical or analytical hurdles. To overcome these challenges, attendees will have to work together, think outside the box and engage with users in order to design an effective report. At the end of the session, attendees will present their proposed solutions and compare them to how they would have done it if they had only followed the requirements list.
The format of the session is styled after a tabletop roleplaying game (tRPG). Attendees will engage with a game master (GM), who will roleplay different stakeholders and users (yes, there will be weird voices and accents) and narrate the events surrounding the fictional reporting project, using dice to introduce random obstacles and challenges that attendees will have to solve to succeed.
PART 1: INTRODUCTION (1 HOUR)
The first part of the session serves as an introduction to the rest of the day. We will look at the materials for the mock business case and get an introduction to a helpful principle for data design and best practices.
PART 2: VACUUM DESIGN PROPOSAL (30 MINUTES)
The second part involves some solo work. Using the requirements and provided materials for the business case, we sketch a proposed report solution from existing templates. The objective is to propose an effective design based on the information and scope provided.
PART 3: CO-CREATION WORKSHOPS (5.5 HOURS WITH VARIOUS BREAKS)
In user co-creation workshops, groups engage with users role-played by the organizer. The objective is to learn who the users are and what they do, in the context of the data. Together with the users, groups then define the report, its design and its functionality.
PART 4: DESIGN SHOWCASE (1 HOUR)
In the final part of the day, groups will present their designs and explain how user feedback was incorporated. We will discuss how the designs and proposals changed before vs. after engaging with users.
Finally, we will recap the key take-aways from the session.
IMPORTANT INFO BELOW:
Are you registering for this precon? If so, please fill out this form; it will help us prepare!
https://forms.office.com/r/NQm4sdRQmm
Monza · Mon 09:00 – 17:00
Leveling Up Your Azure Skills—Implementing a Data Platform like a Pro
There are over 200 services in Azure, which can make it overwhelming for many IT professionals. In this session, you will learn about the specifics of using Azure for data platform solutions. Starting with virtual machines and how they should be configured, and then diving deeper into Platform as a Service solutions like Azure SQL Database, Managed Instance, Cosmos DB and Synapse Analytics, you will learn how to decide on the best option for your workloads and how to integrate it with the broader Azure ecosystem. You will learn how security with Azure Active Directory works and how to implement it in your applications. You will also learn about additional services, like Azure Key Vault, Azure Functions, Logic Apps and Automation, which help you make the different components interact with each other.
Nürburgring · Mon 09:00 – 17:00
Power BI Administration and Governance training class
In this training session we will cover everything you need to know as a Power BI administrator or the person responsible for Power BI in your organization. We will use theory and practice to learn concepts and apply them.
This full day course has four parts.
Part 1 Administering Power BI - 1.5 hours
We will provision a Power BI environment, look at the main settings, and figure out which of them are important and how they should be set according to best practices.
Part 2 Power BI Premium - 1 hour
We will set up Power BI Premium and look through the administration options when it comes to dedicated capacity.
Part 3 The Power BI On-premises Data Gateway - 1 hour
We will investigate the role of the Enterprise Gateway, set it up, configure it and talk about how to administer it.
Part 4 Power BI Governance - 3.5 hours
We will look at Power BI governance and see how you can set a governance strategy, create processes and roles and, last but not least, monitor your Power BI environment.
Spa-Francorchamps · Mon 09:00 – 17:00
SQL Server Tools for Query Tuning
Query tuning within SQL Server can be a tough skill to master. The new tooling released with SQL Server 2016 and 2017 makes identifying poorly performing queries, troubleshooting their behavior and tuning them all a little easier.
This workshop teaches new techniques for tuning queries using the tools introduced in SQL Server 2016 and 2017. You'll be able to put this knowledge to work immediately, not only on your 2016-or-later instances but also in your Azure SQL databases. You will be tuning your queries faster and more accurately using the new tools available.
Monte Carlo · Mon 09:00 – 17:00
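Query Store, introduced in SQL Server 2016, is one of the headline tools from that era; as a hedged illustration of the kind of workflow involved, this sketch enables it on the current database and lists the top CPU consumers (the TOP value and ordering are arbitrary examples):

```sql
-- Turn on Query Store for the current database (SQL Server 2016+).
ALTER DATABASE CURRENT SET QUERY_STORE = ON;
ALTER DATABASE CURRENT SET QUERY_STORE (OPERATION_MODE = READ_WRITE);

-- Find the ten queries with the highest average CPU time.
SELECT TOP (10)
       qt.query_sql_text,
       rs.avg_cpu_time,
       rs.count_executions
FROM sys.query_store_query_text    AS qt
JOIN sys.query_store_query         AS q  ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan          AS p  ON p.query_id      = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id      = p.plan_id
ORDER BY rs.avg_cpu_time DESC;
```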
The latest data science workflow in Azure
Data science is a rapidly evolving skill set and keeping up to date with best practices for the workflow can be tough. In this one day course, we get hands-on with the latest tools to help us build reproducible, scalable models quickly and effectively.
Starting with an overview of Azure ML Services and the latest advancements like R Support, we'll connect to common data sources, build our feature engineering pipelines, train models, and then deploy them.
Throughout all of this, Azure ML Services will be the foundation for our best-practices workflow.
Bring your laptop and your Python or R skills for a day of practical learning that you can implement immediately.
Marina Bay Street · Mon 09:00 – 17:00
Agile SQL Development
You're familiar with SQL development and have a way of doing things, because that is how it has always been done. Sure, sometimes code on a server seems to disappear, and your (monthly? bi-monthly?) big deployments are always a 3-day task followed by 7 days of praying nothing goes wrong, but that's life! You've heard of this whole "agile" hype; maybe you're even working in sprints. It doesn't seem to fit your daily job, though, and just makes life more complicated.
If you recognize part (or all) of this, this session is for you! We will talk about how to transform the way of working for your team and yourself by transitioning from a direct approach to an agile one, improving with every step along the way! Our aim is to soothe the pain points mentioned above, without suddenly overthrowing all the fundamentals of SQL development.
During this session we will cover working in Sprints, working in Database Projects, and introducing Source Control.
Marina Bay Street · Tue 09:45 — 60 min
DBA Red Team vs DEV Blue Team or "How to communicate with your Entity Framework developers."
Entity Framework (EF) has been around for some time, and what never ceases to amaze me is the way in which developers and DBAs still don't seem to agree on whether EF is a good thing or not.
In this session:
I will try to explain how EF actually works, so that you as a DBA understand why developers love working with EF. What is "Code First"? What is "Database First"? Is there a middle ground to be found?
I will also try to explain how you can work WITH your developers to get EF implemented while still keeping performance up to the high standards that you as a DBA are used to.
I will approach this from both the DBA's and the developer's point of view: a Red Team vs. Blue Team of Entity Framework, if you will.
Monza · Tue 09:45 — 60 min
Introduction to Synapse Dedicated SQL Pool Distribution and Indexing
Data Architects, let's tackle one of the most common performance pitfalls that people encounter with Synapse Dedicated SQL Pools! We'll work from the ground up, so anyone new to Synapse Dedicated SQL Pools will understand the different data distribution methods and index options. Then we'll go through recommendations on when to pick which option to increase your workload performance and possibly save money, which will make you look like a hero!
Monte Carlo · Tue 09:45 — 60 min
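To make the distribution and index options concrete, here is a minimal sketch of a hash-distributed fact table with a clustered columnstore index; the table and column names are hypothetical, and the right distribution key always depends on your join patterns:

```sql
-- Hash-distribute a large fact table on a frequently joined key.
-- ROUND_ROBIN suits staging tables; REPLICATE suits small dimensions.
CREATE TABLE dbo.FactSales
(
    SaleKey     BIGINT        NOT NULL,
    CustomerKey INT           NOT NULL,
    Amount      DECIMAL(18,2) NOT NULL
)
WITH
(
    DISTRIBUTION = HASH(CustomerKey),
    CLUSTERED COLUMNSTORE INDEX   -- the default, best for large scans
);
```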
Learn to Effectively Use Extended Events
Too many people have looked at Extended Events, seen what looks like a horrible interface, heard about the XML, and have subsequently run away. This session is here to show you how to effectively use Extended Events to monitor your query performance, and more, in an efficient and useful way. The interface for Extended Events isn't bad, just grossly misunderstood. After you attend this session, you'll be able to easily do things that you've never been able to do with Trace. You need a more efficient query metrics tool, and it's waiting for you in Extended Events.
Silverstone · Tue 09:45 — 60 min
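For orientation, this is roughly what a minimal Extended Events session looks like in T-SQL; the session name, one-second threshold and file target are illustrative choices, not prescriptions from the session:

```sql
-- Capture completed statements that ran longer than one second.
CREATE EVENT SESSION LongQueries ON SERVER
ADD EVENT sqlserver.sql_statement_completed
(
    ACTION (sqlserver.sql_text, sqlserver.database_name)
    WHERE duration > 1000000          -- duration is in microseconds
)
ADD TARGET package0.event_file (SET filename = N'LongQueries.xel');

ALTER EVENT SESSION LongQueries ON SERVER STATE = START;
```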
Protecting your data against unauthorised reads on SQL Server
The need to protect data from being read by unauthorized people has always existed, and in recent years it has only increased. One of the first things that comes to mind is the GDPR, but there are many other reasons why you need to protect your data (hacking, phishing, etc.). This session will show several possibilities, going from the very basics to the more advanced options.
Nürburgring · Tue 09:45 — 60 min
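As a taste of the "very basics" end of that spectrum, here is a sketch of Dynamic Data Masking, one simple way to hide column values from non-privileged readers; the table, columns and user are hypothetical, and masking is obfuscation rather than true encryption:

```sql
-- Mask email and phone values for users who lack the UNMASK permission.
ALTER TABLE dbo.Customers
    ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');

ALTER TABLE dbo.Customers
    ALTER COLUMN PhoneNumber ADD MASKED WITH (FUNCTION = 'partial(0,"XXX-XXX-",4)');

-- ReportingUser can query the table but sees masked values only.
GRANT SELECT ON dbo.Customers TO ReportingUser;
```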
Using Excel as a reporting and analysis tool for Power BI data
Once you have built a Power BI dataset there are several options for creating reports that use the data in it: classic Power BI reports, paginated reports and Excel. This session is all about the third option, Excel. You’ll learn how to connect Excel to a Power BI dataset and use PivotTables, Excel cube functions and the new Excel data types to build reports. You’ll also find out how to create Excel reports that can be viewed in the browser and not just on your desktop.
Spa-Francorchamps · Tue 09:45 — 60 min
Using machine learning in trading and finance with Python on Azure cloud
Ever wondered how to make money while you sleep? Well, I do. In this session, I will walk you through my learning experience of the last 18 months on how you can use Python, Azure, machine learning and a whole lot of data crunching as input for your investing or trading decisions. Spoiler alert: I have not taken the deep dive yet ...
- Reviewing concepts of quantitative trading strategies
- Using Power BI and Jupyter notebooks for trading strategies
- Exploring Python libraries in an end-to-end ML for trading (ML4T) workflow
- Leveraging the Azure cloud for algorithmic/quantitative trading
Suzuka · Tue 09:45 — 60 min
A deep dive into Azure Synapse Analytics
Based on the material of Analytics in a Day, I'll explain what Azure Synapse Analytics is, what you can use it for, and how it can improve your way of dealing with large amounts of data. In this session we'll focus on the Synapse Serverless and Synapse Spark features and the integration with Power BI. A real live demo will also be provided.
Monte Carlo · Tue 11:00 — 60 min
Calculation groups in Power BI
DAX allows you to write powerful measures in Power BI. But sometimes you get the feeling that parts of these measures are merely an advanced copy-paste of a previous measure.
Calculation groups allow you to easily create, manage and use similar measures in Power BI.
This session illustrates what calculation groups can be used for and how to create them in Power BI Desktop and Tabular Editor.
Spa-Francorchamps · Tue 11:00 — 60 min
Continuous Integration with Local Agents and Azure DevOps
One of the fundamental tools for modern DevOps software development is Continuous Integration (CI). This process provides automated, independent validation and verification of your code, with quick feedback for developers who are under pressure to write higher-quality software at a rapid pace. This session examines how Azure DevOps or TFS can be used to manage this process and run a database CI process inside your data center, on your own instances.
Silverstone · Tue 11:00 — 60 min
How to be an inclusive leader in TECH
The new era of leadership is asking for inclusive leaders, who combine behavioural styles traditionally considered feminine and masculine. People working in TECH come from all kinds of educational, cultural and religious backgrounds. An inclusive leader makes sure everyone feels heard, their contributions are valued and their successes celebrated.
Marina Bay Street · Tue 11:00 — 60 min
How to setup your testing environment, the DevOps way
During the session I will show how I use tools like Packer, Terraform and PowerShell to spin up a test environment for SQL Server.
The dbatools PowerShell module will do the heavy lifting to set up and configure the SQL Server instances and clusters.
Monza · Tue 11:00 — 60 min
Infrastructure as Code to an Azure Data Platform Architecture
Infrastructure as code is *the* way to deploy Azure services - right? But under looming deadlines and time crunches it can feel way faster to use the tried and tested method of going to the portal and creating resources manually. In the back of your head, you know there must be a faster, better way of managing resources - but how do you make the jump?
Without having a developer background it can be difficult to know where to even start with infrastructure as code. Especially if you are working with a team of multiple people managing and operating Azure resources - this requires a full cultural change!
In this session we will go through the concepts that you need to grasp in order to get started with infrastructure as code for Azure. We will look at how the process for managing resources in Azure changes compared to the way we've previously done it, and finally we will explore a complete workflow of deploying your Azure data platform architecture as code using Terraform and how to work on that code with your team.
Nürburgring · Tue 11:00 — 60 min
Learn how to use Deep Learning to make real-time predictions of your device location.
In this session we will explain how to deploy a Deep Learning model and use it to predict the location of a device in real time. Kafka will be used to ingest data from a device and to collect real-time distance data from cellphone towers.
We’ll explore two possible ways to use Kafka services in Azure: either via Docker containers or by leveraging the Kafka endpoints within Azure Event Hubs.
We will showcase how to use Scala to build an event-processing pipeline that prepares data for both training and prediction by the model, and how to deploy the model in production for real-time inference.
Key technologies used are: Azure Docker Hub, Azure Event Hubs, Azure SQL DB, Jupyter Notebooks using Python.
Suzuka · Tue 11:00 — 60 min
A multi-tenant smart city data platform
An in-depth look at a modern smart city data platform used by multiple cities in Belgium. How is scaling handled? What about data harmonisation? How are new cities onboarded?
Monza · Tue 13:00 — 60 min
Azure SQL Database - a true story: a journey from migration, through synchronization, and beyond.
You will be challenged with a case study to design a solution using Azure SQL Database capabilities.
At the end of this session, you will better understand the possibilities to migrate on-premises databases to Azure SQL Database. You will learn how to synchronize data across different environments and get more insights on how to troubleshoot issues in Azure SQL Database.
This is a hands-on, open-discussion, teamwork kind of session (not a traditional slideshow).
Attendees should already have basic knowledge of Azure SQL Database.
Nürburgring · Tue 13:00 — 60 min
Fast Forward your Career by using the Community
Getting involved in the Data Platform community can be beneficial for your career, both short-term and long-term. Not everyone has the desire to jump on a stage and start speaking to an audience, but luckily you don't need to! There are many options available out there to build your personal brand. In this session, we'll go over these options and see how they can have a great impact on your professional life!
Marina Bay Street · Tue 13:00 — 60 min
Five stages of grief - internals of a hash spill
You know the Hash Match operator. You know it requires a lot of memory. And you also know that it sometimes needs more memory than it has, in which case it spills to tempdb and is slow.
But do you know EXACTLY what happens under the covers in the case of a hash spill? Or does your knowledge stop at "it gets slow"?
Unless you work for Microsoft on the SQL Server engineering team, there really is no need to know the internals of a hash spill. But that doesn't mean you can't be curious! If you would like to waste some time learning about dynamic role reversal, grace hash join, bail-out, bit-vector filtering and more, then join this session!
Silverstone · Tue 13:00 — 60 min
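If you want to see a spill with your own eyes before the session, a sketch like this can provoke one; the row count and the 1% grant cap are rough guesses that may need adjusting on your hardware:

```sql
USE tempdb;
GO
-- Build a table large enough that its hash table won't fit in a tiny grant.
SELECT TOP (1000000)
       ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS id,
       REPLICATE('x', 200) AS payload
INTO   #big
FROM   sys.all_columns AS a
CROSS JOIN sys.all_columns AS b;

-- Force a hash join and starve it of memory so the build side spills.
SELECT COUNT_BIG(*)
FROM   #big AS b1
JOIN   #big AS b2 ON b1.id = b2.id
OPTION (HASH JOIN, MAX_GRANT_PERCENT = 1);   -- hint available from SQL Server 2016

-- Look for the spill warning on the Hash Match operator in the actual plan,
-- or trace the hash_warning / hash_spill_details Extended Events.
```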
Machine Learning in Azure Synapse
There is a lot of content available on Synapse for Data Engineering, but what about Machine Learning? In this session we will look at how to integrate a SparkML model in Synapse.
Suzuka · Tue 13:00 — 60 min
Managing Power BI workspaces
Workspaces allow Power BI users to collect together datasets, reports, dashboards, dataflows and other Power BI objects and control access. As such the workspace is a central component in any Power BI deployment strategy. What is the best way to manage workspaces? The answer to that question (like so many others) is that it depends. In this session we will cover the most common ways to manage Power BI workspaces and try to uncover what method works best for different scenarios. We will try to touch on the whole lifecycle of the workspace from creation until deletion.
We will cover scenarios ranging from letting everyone create and manage their own workspaces to automated workspace creation via a workflow managed by administrators.
What we will cover:
• Different ways to create workspaces
  - Users can create workspaces
  - Only certain users/groups can create workspaces
  - A simple workflow without approval
  - A complex workflow with multiple approvals
• Managing workspaces
  - Users manage their own workspaces
  - Admins manage all workspaces
  - A hybrid of the above
  - (Semi-)automatic: using APIs to manage certain aspects of workspaces, such as monitoring and deletion
• How workspace creation and management fits into the most common content Lifecycle Management methods for Power BI
The audience will take away knowledge of different methods to manage workspaces and ideas on how to automate some aspects of them.
Spa-Francorchamps · Tue 13:00 — 60 min
Synapse vs Databricks: which road to take?
To enable scalable data engineering in the Modern Data Platform on Azure, we could use Synapse and/or Databricks for SQL, Spark and data science.
In this session, we will clarify how Azure Synapse compares with Databricks and share for which use cases Synapse or Databricks is the better choice. The session includes a demo that shows the similarities and differences between the tools.
Monte Carlo · Tue 13:00 — 60 min
Building the Foundations of an Intelligent, Event-Driven Data Platform at EFSA
EFSA is the European agency providing independent scientific advice on existing and emerging risks across the entire food chain. On 27/03/2021, a new EU regulation (EU 2019/1381) came into force, requiring EFSA to significantly increase the transparency of its risk assessment processes towards all citizens.
To comply with this new regulation, delaware has been supporting EFSA in a large Digital Transformation program. We have been designing and rolling out a modern data platform running on Azure and powered by Databricks. This platform acts as a central control tower brokering data between a variety of applications. It is built around modularity principles, making it adaptable and versatile while keeping the overall ecosystem aligned with changing processes and data models. At the heart of the platform lie two important patterns:
1. An Event-Driven Architecture (EDA): enabling an extremely loosely coupled system landscape. By centrally brokering events in near real-time, consumer applications can react immediately to events from producer applications as they occur. Event producers are decoupled from consumers via a publish/subscribe mechanism.
2. A central data store built around a lakehouse architecture. The lakehouse collects, organizes and serves data across all stages of the data processing cycle, all data types and all data volumes. Event streams from the EDA layer feed into the store as curated data blocks and are complemented by other sources. This store in turn feeds into APIs, reporting and applications, including the new Open EFSA portal: a public website developed by delaware hosting all relevant scientific data, updated in near real-time.
At delaware we are very excited about this project and proud of what we have achieved with EFSA so far.
Monte Carlo · Tue 14:15 — 60 min
Choosing the Right Data Store — An Overview of Azure Data Platform Choices
Azure offers many options for data platform services, and it can be confusing to choose the right platform for your application. In this session you will learn about all of the different Azure SQL and non-SQL data offerings and how they fit into your data ecosystem.
Nürburgring · Tue 14:15 — 60 min
Data Modeling and Partitioning in Azure Cosmos DB
For many newcomers to Cosmos DB, the learning process starts with data modeling and partitioning. How should you structure your model? When should you combine multiple entity types in a single container? Should you de-normalize your entities? What’s the best partition key for your data?
In this session, we discuss the key strategies for modeling and partitioning data effectively in Cosmos DB. Using a real-world NoSQL example based on the AdventureWorks relational database, we explore key Cosmos DB concepts—request units (RUs), partitioning, and data modeling—and how understanding them guides the path to a data model that yields the best performance and scalability. Attend this session and acquire the critical skills you’ll need to design the optimal database for Cosmos DB.
Silverstone · Tue 14:15 — 60 min
I will survive!!! A Deep Learning Approach to Survival Modelling in PyTorch
Survival Modelling or Time-to-Event modelling tries to predict when an event will take place in the future. Hence, it is neither a classification nor a regression problem.
In recent years, survival modelling has moved from its origins in biostatistics into the world of deep learning. In this session, I will cover what survival modelling is and how it can be implemented as a deep learning algorithm.
Suzuka · Tue 14:15 — 60 min
My Top 10 Power BI Tips, Tricks and Resources
Want to learn some excellent Power BI tips and tricks to make your Power BI reports sleek? In this session, I will talk about some of my favourite Power BI tips and tricks, assembled while working with different clients and on open data projects. The session will include everything from simple tricks like visual settings to advanced functionality like commentary.
Spa-Francorchamps · Tue 14:15 — 60 min
The Audience Conductor - Using Senses and Emotions to Improve Your Presentations
Technical presentations - deep knowledge, good content and a slick slide deck should do it, right? What if I add in storytelling elements to tie the whole thing together? Those are indeed the ingredients needed for a good presentation, but there is something more. Something that very few technical presenters consider. Something ephemeral, something hard to grasp. That something is the glue that binds a great presentation together. I'm talking about emotional investment, and in essence - how to conduct your audience like an orchestra.
By choosing the story and how that story is delivered, we can influence the emotional state of our audience. By engaging multiple senses and attaching emotions to a story, we can make the entire presentation much more memorable. This session is a bit of an emotional roller coaster as I show you how to create both highs and lows by choosing what story I tell and how I tell it. Come spend an hour learning skills you didn't think were part of the technical presenter's toolbox - but turned out to be essential to make your point stick.
Marina Bay Street · Tue 14:15 — 60 min
Tracking Changes & Dependencies with Easy Source Control for Power BI Desktop
Introducing the Action BI Toolkit, a powerful collection of free utilities, including a convenient, easy-to-use external tool and a powerful command-line utility, that finally(!) bring source control for both your reports and your model to Power BI Desktop.
1) ActionBI-tools – a thorough, action-packed how-to demo featuring a new free community External Tool that allows you to check in changes to your Power BI report (pbix file) as you work in Power BI Desktop, without having to break your workflow. Work seamlessly, tracking changes to both your model and report in the same file. Or just check in your report if you are working on a ‘thin report’ connected to your Analysis Services instance, a local Power BI file or Power BI Premium.
Together with VS Code or your source control tool of choice, learn to easily browse previous versions of your M code or DAX measures. Want to push it a little further? Use ActionBI-tools to track your model and report dependencies and export snapshots of data.
2) Pbi-tools – see how to use a powerful command-line utility for enterprise-grade, multi-developer DevOps scenarios using only Power BI Desktop. Merge changes in your pbix file, and programmatically generate a new pbix as your changes move through your DevOps pipeline. Generate deployable pbix files directly from source control without having to keep large binary files in source control.
Monza · Tue 14:15 — 60 min
Azure-d Availability: SQL Server HA In and To the Cloud
Has your manager come to you and said, "I expect the SQL Server machines to have zero downtime"? Have you been told to make your environment "Always On" without any guidance (or budget) as to how to do that or what that means? Are you facing pressure to have data in Azure as well? Help is here! This session will walk you through the high availability options in on-premises SQL Server, the high availability options in Azure SQL Database and Managed Instances, and how some or all of those can be combined to enable you to achieve the ambitious goals of your management. Beyond the academic knowledge, we'll discuss frequently seen scenarios from the field covering exactly how your on-premises environments and Azure services can work together to keep your phone quiet at night.
Nürburgring · Tue 15:30 — 60 min
Building trust in your data. Why data governance is the key to success.
Data governance is seen as boring and something you don't really need. In fact, it is the key to success for your data platform. As the number of data sources and the size of the data they deliver increase, how can your users know what data is available? Or where to find it and what state it is in? When your users trust the data you deliver, they will use what you develop.
Another aspect of that trust is personally identifiable information: can citizens trust that you keep their data safe and respect their privacy?
This session will show you how to establish both kinds of trust, using Azure native tooling as well as third-party tools. But most importantly - tips on methodology, routines and people!
Monte Carlo · Tue 15:30 — 60 min
Coding a real-time Azure data and AI pipeline from scratch
In just 60 minutes, this session will demonstrate an end-to-end data pipeline, supplemented by AI, that delivers insights in real time. Using components like Azure Functions, Event Hubs, Databricks, Cognitive Services and Power BI, I'll put together a pipeline that takes the conference's social stream and analyses it in real time. Join me as I show how quickly these sorts of systems can be put together for awesome insight.
Suzuka · Tue 15:30 — 60 min
How Not To Drown: Improving Data Literacy in your Organization
Data literacy is fundamental for an effective organization and is a keystone for successful data & analytics projects. It is clear, however, that data literacy is a challenge in many businesses and in society as a whole. A 2018 survey of business decision-makers found only 24% were “confident in their ability to read, work with, analyze and argue with data” [1], a finding recapitulated in the lay public [2] and demonstrated plainly during the global COVID-19 pandemic [3].
To traverse these rapids, we need more than tools: we need a broad knowledge of how they are used and of the data concepts behind them. In this talk, I use a hypothetical business case to:
I. Illustrate the concept of data literacy and the threat poor data literacy poses to an organization.
II. Explore possible reasons why average data literacy is low and what we could do about it.
III. Define practices and report design techniques to improve data literacy, such as points-of-engagement, data dictionaries, annotations, and more.
Without data literacy, we will fail to navigate the ever-rising waters of information; we risk not only falling overboard, but drowning. However, instilling data literacy in organizational values, engaging people with and about information in the right ways, and implementing accessible design practices can not only keep us afloat, but propel us onward to brave new worlds of effectiveness and value.
[1] How to Drive Data Literacy in the Enterprise (2018)
[2] Borner, K. et al. (2016). Investigating aspects of data visualization literacy using 20 information visualizations and 273 science museum visitors. Information Visualization 15(3), 198-213.
[3] Advancing Data Literacy in the Post-Pandemic World (2021)
Marina Bay Street · Tue 15:30 — 60 min
Implementing Change Data Capture for your Master Data Sources through ADF
CDC, or Change Data Capture, is an important and necessary technique in the world of data warehouses. In any DWH solution, we need to capture changes in the master data sources and historize the data in the correct way. Having a common module for, or automating, this CDC is a value-add for any business looking to implement a DWH.
Monza · Tue 15:30 — 60 min
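Assuming the master source is SQL Server, the native CDC feature is what an ADF change-capture pipeline typically reads from; here is a minimal sketch of enabling it (the schema, table and defaults are hypothetical):

```sql
-- Enable CDC at the database level, then on one master table.
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'Customer',   -- hypothetical master data table
     @role_name     = NULL;          -- no gating role in this sketch

-- Changes then accumulate in cdc.dbo_Customer_CT and can be read via
-- cdc.fn_cdc_get_all_changes_dbo_Customer for downstream historization.
```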
Lessons learned from 5 years of Power BI implementations
Power BI has come a long way in the more than five years it has been on the market. In that time we have seen Power BI grow from a self-service tool into a full-blown enterprise BI platform. In this session we will share the most important lessons learned over that time to be successful and avoid pitfalls, for example around self-service vs. enterprise use, modelling, performance, security, data lakes, CoE and exec sponsorship.
Spa-Francorchamps · Tue 15:30 — 60 min
Your Performance Tuning Check List
Everyone needs a list of what to double-check in their environment to ensure servers and databases are optimized, so SQL Server runs faster, better and more efficiently. In this session you will learn what to review and implement to get performance and configuration improvements. You will learn about best-practice configuration, SQL Server surface area, memory optimizations, isolation levels, and tempdb and transaction log configuration.
Silverstone · Tue 15:30 — 60 min
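A few example checks in the spirit of such a list; the settings queried are real, but which values are "right" depends entirely on your workload:

```sql
-- Instance-level settings commonly reviewed on a tuning checklist.
SELECT name, value_in_use
FROM   sys.configurations
WHERE  name IN ('max server memory (MB)',
                'cost threshold for parallelism',
                'max degree of parallelism');

-- Isolation-level posture per database.
SELECT name, snapshot_isolation_state_desc, is_read_committed_snapshot_on
FROM   sys.databases;

-- Transaction log autogrowth settings worth a second look.
SELECT DB_NAME(database_id) AS database_name, name, growth, is_percent_growth
FROM   sys.master_files
WHERE  type_desc = 'LOG';
```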
Automating Power BI deployments: a walk in the park.
Ever thought about expanding your Power BI development workflow to different environments, but have no clue which approach to follow? Perhaps you are already using different environments but are struggling with parameters, refresh schedules, etc.?
In this talk we will discuss the above-mentioned issues, provide you with some possible solutions and empower you with the knowledge to implement these for your own use cases.
As there are a good number of ways to deploy your Power BI art pieces, both manually and automated, this session aims to instruct participants in some of the more prevalent ones to overcome this tedious task.
Participants will experience a deep-dive in three different methods:
- Power BI Premium and deployment pipelines
- Azure DevOps and Power BI Actions
- Azure DevOps and PowerShell
Session participants do not need any prior experience with Azure DevOps or PowerShell.
Monza · Tue 16:45 — 60 min
Building The Next Delta Lakehouse
We've been building data lakes for years, and we've seen how the Delta Lake file format brought stability & governance to the data lake; with the Databricks Delta Engine, it has grown into an analytics powerhouse.
This session dives into those new features, showing how data engineering has been transformed and simplified. We'll see how to apply new technologies to old techniques, how best to get your data to your users, and how to apply Delta to your own lake-based scenarios.
Monte Carlo · Tue 16:45 — 60 min
Don't trust your models, explain them!
More companies are using machine learning to improve their products and services. This is great, as we can all see the benefits of this. The big question is, however, can you trust these models? We’ve already seen some examples of failing machine learning models.
In this talk I’ll show you how to break open the black box of machine learning and get an interpretation of what machine learning models are doing. We’ll cover the following topics:
* Using explainers to uncover the structure of your model
* How to debug your model using explainers
* How to use explainers in production to help your users
By the end of the session you'll have learned what explainers are, how to use them, and what tools you can use to build them.
Suzuka · Tue 16:45 — 60 min
Dynamic Search Conditions
A common requirement in database applications is that users want a function to search a set of data from a large set of possible search conditions. The challenge is to implement such searches in a way that is both maintainable and efficient in terms of performance. This session looks at the two main techniques to implement such searches and highlights their strengths and limitations.
Silverstone · Tue 16:45 — 60 min
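The two main techniques are commonly taken to be static SQL with a per-call recompile and dynamically built SQL; here is a sketch of the former, with a hypothetical procedure and columns (the OPTION (RECOMPILE) hint is the crux):

```sql
-- Optional search parameters with a per-call recompile, so each execution
-- gets a plan for the parameters actually supplied.
CREATE OR ALTER PROCEDURE dbo.SearchOrders
    @CustomerID int         = NULL,
    @OrderDate  date        = NULL,
    @Status     varchar(20) = NULL
AS
SELECT OrderID, CustomerID, OrderDate, Status
FROM   dbo.Orders
WHERE (@CustomerID IS NULL OR CustomerID = @CustomerID)
  AND (@OrderDate  IS NULL OR OrderDate  = @OrderDate)
  AND (@Status     IS NULL OR Status     = @Status)
OPTION (RECOMPILE);
GO
```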
Keeping your data fresh in Power BI
We all want our data refresh to happen quickly so the most current data is available for our reports. In this session we will walk you through options to configure refreshing your data but more importantly we will help with performance. We'll look at how to identify bottlenecks and then how to optimize at different points to get the most out of your Power BI refresh.
Spa-Francorchamps · Tue 16:45 — 60 min
Let's have a meeting on meetings!
In which I'll give you tips, tricks and perspectives on how to participate in meetings so as to better meet the meeting objectives. To improve the gathering of information in your gatherings. To assemble an agenda for your assembly. To conveniently convene your plans for and in your convention. To get together to get your get-together together. So, prepare for practical particulars on perfecting the appointments in your profession.
Marina Bay Street · Tue 16:45 — 60 min
Security techniques for cross database access
Separating data from procedures, functions and views is a common approach to protecting data. If you deal with microservice architectures, you will quite often separate the "master data" from service data but still need to have access to it.
What is the best practice for granting access to this data?
I will show you three common scenarios for discussion:
- Synonyms for remote data
- Signed procedures
- The TRUSTWORTHY option for accessing data
What do you think? What is the best approach to deal with these requirements?
In this session you will learn the pros and cons of each solution.
Nürburgring · Tue 16:45 — 60 min
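For reference, the signed-procedure scenario usually looks something like the sketch below; every name and password here is hypothetical, and the certificate must also be copied to the remote database, where a user is created from it and granted the actual permissions:

```sql
USE ServiceDb;
GO
-- Create a certificate to sign the procedure with.
CREATE CERTIFICATE CrossDbCert
    ENCRYPTION BY PASSWORD = 'Str0ng_P@ssw0rd!'
    WITH SUBJECT = 'Signs procedures that read master data';
GO
-- The procedure reaches into another database for master data.
CREATE PROCEDURE dbo.GetMasterData
AS
    SELECT *
    FROM MasterDb.dbo.ReferenceData;
GO
-- Signing binds the certificate's identity to this exact procedure text.
ADD SIGNATURE TO dbo.GetMasterData
    BY CERTIFICATE CrossDbCert WITH PASSWORD = 'Str0ng_P@ssw0rd!';
```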