Microsoft Azure can be a daunting thing to look at and think about. In this session, we'll cover the basics of Azure: the options for setting up an Azure account, and how regions are broken down. We'll look at the differences between Infrastructure as a Service, Platform as a Service, and Software as a Service, as well as the high availability and disaster recovery options available for these services. We'll finish the session by looking at some options for virtual machines within Azure, as well as the SQL Server offerings available on the platform.
Go To Cloud | Tue 8:45 AM — 60 min
Azure AutoML: the Next Generation of Machine Learning
Automated machine learning simplifies the traditional machine learning process by building models that apply multiple algorithms to one dataset, handling cross-validation, hyperparameter settings, and feature selection. Automated ML selects the best model among all applied algorithms based on the performance of each. Moreover, it provides further detail about the experiment, such as how much time was spent running each iteration. This one-hour session presents an overview of Azure Automated ML: what automated ML is, and how its notebooks and interface designer work.
AI | Tue 8:45 AM — 60 min
Business Analysis with T-SQL
No matter whether you use SQL Server on premises, Azure SQL Database, or Azure Synapse Analytics, you might have the feeling that the main language for these services, T-SQL, is somehow obsolete and not capable enough for the needs of modern analytics. This session will show you the opposite. You can do advanced business analytics with T-SQL as well; you do not need to use R, Python, or some other currently fashionable language for every single in-depth view into your data. You will start with queries you might already be familiar with: aggregating queries. But this is only to warm you up. Then you will learn how to do hazard and survival analysis; market basket analysis with association rules, using measures like support and confidence; and even sequential market basket analysis, for when there is a sequence in the basket. Finally, the session introduces the look-alike model, a mixture of the k-nearest neighbors and decision tree algorithms.
Expert | Tue 8:45 AM — 60 min
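The two association-rule measures this abstract names, support and confidence, can be sketched with a toy example. This is a hypothetical Python illustration of the concepts only (the session itself uses T-SQL); the transactions and item names are made up.

```python
# Toy market-basket data: each transaction is a set of items.
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]

def support(itemset, transactions):
    """Fraction of transactions containing every item in the itemset."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Support of the combined itemset divided by support of the antecedent."""
    return support(antecedent | consequent, transactions) / support(antecedent, transactions)

print(support({"bread", "milk"}, transactions))       # 0.5 (2 of 4 baskets)
print(confidence({"bread"}, {"milk"}, transactions))  # ~0.667 (2 of 3 bread baskets)
```

The same aggregations translate naturally to grouped T-SQL queries over a transactions table, which is the form the session works with.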
Connecting the dots: CDM and Power BI architectures
The Common Data Model (CDM) is an open-source metadata system: a shared data model for commonly used business concepts and activities across domains, built to facilitate data interoperability. CDM can be used with other data services on Azure, and can enable collaboration among Data & AI professionals who code and those who don't. What are the possibilities and options for architecting our data solutions using CDM, Azure data services, and Power BI?
Power BI | Tue 8:45 AM — 60 min
Infrastructure-as-Code for your Data Solution in Azure: Terraform & ARM
An Azure data solution comprises different components, which makes it difficult to configure them consistently across environments and to hold accountability for every change.
Manual infrastructure management results in discrepancies that we want to avoid as much as possible. Unfortunately this also happens in Azure (e.g. forgetting to set the correct access control list for your data lakes). Deploying an Azure data solution manually is an error-prone and lengthy process.
With infrastructure-as-code technology such as Terraform and ARM templates, it is possible to deploy all the necessary components in a predictable and repeatable fashion. Important concerns such as security, networking and data lake layout are managed and versioned. In this session I will take you through all the necessary steps to take control of your data solution deployments. I will address the what, why and how of using Terraform and ARM templates for your Azure data solution deployments.
Newcomer | Tue 8:45 AM — 60 min
Real Time analytics with Azure IoT and Databricks
The number of IoT devices that stream data to a connected cloud backend increases daily. This data creates new possibilities for real-time analytics and can fundamentally change how our world works.
In this session, you'll learn how to build an Azure IoT architecture that is ready for real-time data analytics. Sam will demonstrate how data can be ingested and how different Azure technologies can be applied to achieve real-time intelligence. You'll also discover how Azure Stream Analytics can be used to run streaming queries in the cloud and on the edge. By the end of this session you'll understand how Azure Time Series Insights can be used to set up real-time data exploration, and you'll get a glimpse of Azure Databricks for more advanced data analytics scenarios. Finally, you'll learn how to deploy custom code to detect and act upon events in the data.
We will demonstrate all of this using a traffic camera scenario.
Business Analytics | Tue 8:45 AM — 60 min
SQL Server 2019 on Containers for Developers and DBAs
This session discusses and demonstrates how SQL Server 2019 can be deployed in containers for developers and DBAs. We take you through the latest features available in SQL Server 2019 containers.
Data Administration | Tue 8:45 AM — 60 min
Using Azure Functions and Durable Entities for data processing
As ETL and data warehousing move to the cloud, the data sources and requirements have changed. How can you process streaming data for a cloud-borne DWH? How can you pull data from REST APIs in batches? What tools do you have for processing data in the cloud? All these questions have multiple answers, and one answer to many of them is Azure Functions. I will explain the different features of Azure Functions that can be utilized in various aspects of cloud-borne data warehousing and data processing projects.
Developer | Tue 8:45 AM — 60 min
Bringing your SAS environment to Azure
On June 15, Microsoft and SAS announced an extensive technology and go-to-market partnership. Based on specific use cases, we will explain the reasons behind it, the different architectures, and the lessons learned from combining SAS and Azure in data & analytics platforms and solutions.
Go To Cloud | Tue 10:00 AM — 60 min
Continuous Integration for the BI Developer
In many BI projects, the manual process of development, and especially of deploying artefacts, leads to sleepless nights, emergency hotfixes, grey hair and many hours of stress.
Especially in BI development, the use of database projects, unit testing, version control, continuous integration and deployment is an underrepresented area.
Join me in this demo-heavy session where I will guide you through the not-so-frightful jungle of continuous integration using SQL Server database projects, SSIS, tSQLt, Azure DevOps and more.
After these 60 minutes you'll have guidelines and ideas for how easy it is to turn your manual BI development and deployment into a powerful, automated no-brainer that just works!
Business Analytics | Tue 10:00 AM — 60 min
Debugging without debugger: investigating SQL Server's internal structures
Have you ever wanted to know exactly how SQL Server stores data for temporary structures, such as the spooled data in a Table Spool or Index Spool operator? No? I don't blame you. It's a bit like wanting to know who watered the rubber tree that produced the rubber your tires are made of before you get in the car to drive to work. You really don't need it.
But you might still WANT to know. I did. And I figured it out.
Without ever touching the debugger (I'm a simple soul, that stuff is much too complex for me!), I figured out how I could look at the exact storage structures SQL Server creates and uses to store data in Table Spool and Index Spool operators.
If you are willing to commit to a fast-paced high-level internals session that is guaranteed to teach you exactly zero actually useful information, then this is the session for you.
Expert | Tue 10:00 AM — 60 min
The DAX language is very powerful. Handy to compute the complex business values you need, but oh so frustrating if the DAX expression doesn't do what you expected it to do.
In this session several DAX expressions will be shown that do not always do what most Power BI users expect. Step by step, these DAX expressions will be dissected and the evaluation context rules made clear, so that you understand why the expressions behave differently than expected.
This session is aimed at Power BI data model developers who already have a basic knowledge of writing DAX expressions and want to become more experienced in understanding the DAX evaluation context and extended tables.
Power BI | Tue 10:00 AM — 60 min
Distributed Availability Groups in SQL 2019
This session gives a quick overview of what a Distributed Availability Group is and how it can help you with certain disaster recovery and migration headaches. Some real-life scenarios and tips and tricks from the field will be included.
Data Administration | Tue 10:00 AM — 60 min
Explore a world of data and give some kudos to Kusto
How do you discover hidden insights in streaming data? With the help of Azure Data Explorer and its intuitive query language, Kusto, it's now easy to find answers quickly in rapidly changing data. Azure Data Explorer lets you focus on insights, not infrastructure, as an easy-to-use, fully managed data analytics service. On top of this service, a Kusto query is a read-only request to process data and return results. During this session, you'll learn what exactly Azure Data Explorer is and how it connects with other Azure services, but also the basics of the language, and you'll see how powerful it can be, especially with time series. This session includes many demos and use cases.
Developer | Tue 10:00 AM — 60 min
From dev to DBA lessons learned
Going from a developer who writes SQL to a DBA can be quite a change, and during that change you'll end up learning a ton of new things. We'll go over some of the things I learned myself when I started as a DBA, discussing the use of functions, table variables, nesting, and SARGability.
Newcomer | Tue 10:00 AM — 60 min
High-level framework for analyzing model explainability and algorithmic fairness
This session will present a high-level framework for post-modelling explainability analysis and the detection of algorithmic unfairness. The framework is delivered as an interactive web app that helps data experts select the tools that best fit a specific use case. For example, if you have an image classification model developed with PyTorch, the framework will quickly provide a list of tools compatible with your use case and the technologies used.
The session will also cover explainability with Azure Machine Learning as part of the framework.
AI | Tue 10:00 AM — 60 min
Chaos Engineering for SQL Server
In this session we will explore the concept of Chaos Engineering, its core concepts, and how it has been implemented at various companies.
Then we'll look at how it can be implemented with regard to SQL Server. SQL Server has various high availability solutions, but can you be sure that they'll react as expected to a real-world issue? Has your HA architecture only ever been tested in a planned maintenance window?
We'll also have some fun by looking at KubeInvaders, a chaos engineering tool for Kubernetes...using Space Invaders!
This session is for SQL Server DBAs and developers who want to know how Chaos Engineering works and want to learn its main principles.
Data Administration | Tue 11:15 AM — 60 min
Creating a photorealistic avatar using generative adversarial networks
After an overview of how to get started with Microsoft Azure Machine Learning Studio, we will start to understand generative deep learning algorithms.
Focusing on autoencoders, this will be a journey from the beginning (of the speaker's experience), covering the mistakes made and tips learned along the path, and showcasing several convolution flows.
AI | Tue 11:15 AM — 60 min
DAX - Time Series Analysis, Sequences, and some other fancy stuff
Here, time series analysis does not mean calculating the YTD value of a numeric measure. This session is about discovering the clients who have been buying 10 days in a row, and how concepts of time series analysis can be used with DAX. The requirements for having fun in this session: a basic understanding of the filter context, having read this article at least once (https://mdxdax.blogspot.com/2011/03/logic-behind-magic-of-dax-cross-table.html), and, helpfully, a good understanding of extended tables.
Expert | Tue 11:15 AM — 60 min
Find and Seek: Leveraging visual & functional best practices for actionable Power BI reports
Three seconds. When a user arrives at a Power BI report, that’s how long you have to seize their attention with the most critical facts. As they mouse-over the report, they search for their most important categories and trends, then their details-on-demand… do they quickly find answers to their questions, or do they drown in the data of yet another report? The answer isn’t solely determined by elegant and performant visuals or deep AI and analytics; Power BI developers need to first know their users, then design a holistically effective, logical user experience. This experience, known as the Information-Seeking Mantra – or the 3/30/300 rule - leverages visual best practices to split the report into three parts: The information gained by the user in (A) three seconds, (B) thirty seconds and (C) three hundred seconds. In the deeper layers, interactivity in Power BI enables powerful, self-driven question-and-answer data exploration. Effectively combining both these visual and functional elements in a user-oriented way can lead to powerhouse reporting that can revolutionize a workplace. With these reports, Business users can test their ideas and questions to get reliable answers, and, more importantly, use them to drive actions to get the desired results.
In this talk, we will explain three principles we employ in an effort to design quality, usable Power BI reports that allow business users to quickly answer both basic and complex questions in an action-oriented way. We see these principles as valuable best-practice guidelines for (1) co-creating reports with users, as well as (2) visual and (3) functional guidelines for design. For (1), we will illustrate our typical approach to user-oriented design using examples. For (2), we will illustrate key data visualization principles and demonstrate how they are typically executed when designing a report in Power BI using the default visualization types. For (3), we will demonstrate our experience with effectively leveraging powerful interactions in Power BI like drillthrough, graphical tooltips, page navigation, dynamic measure & dimension selection, and more. In addition, (2) and (3) will include short technical 'recipes' for how these can be implemented by yourself. Finally, we will put it all together with an example that we hope highlights how the whole is greater than the sum of its parts. By the end of the talk, we hope you will understand this approach to design, and the value it can provide in designing useful Power BI reports.
Newcomer | Tue 11:15 AM — 60 min
How Databricks Delta overcomes your Data Lake challenges
With the rise of big data, data lakes became a popular choice for storing data in a large number of organizations. Despite the pros of data lakes, a variety of challenges arises as the amount of data stored in one data lake increases. Delta Lake is an open-source storage layer for Spark which solves some of the biggest challenges of data lakes. This session explains why Delta Lake is becoming the new standard for data lakes, supported by a demo showing Delta Lake in action solving some of the biggest data lake challenges.
Business Analytics | Tue 11:15 AM — 60 min
Maintain a Database Project, and Continuous Delivery using Microsoft Data Tools in practical terms
The task seems easy: maintain a database project in the code repository, treat it as the master version, and deploy it regularly and frequently. Simple? Seemingly. Things become more complex as the number of objects in the database grows; when instead of one database we have over a dozen; when databases reference each other. And what about dictionary tables? Where do we keep them, and how do we script them? Additional issues arise when we want to control instance-level objects.
I will explain all these topics in this session, focused on the practical aspects of working with Microsoft Visual Studio Data Tools.
Developer | Tue 11:15 AM — 60 min
Manage your Power BI resources with the Power BI CLI
The Power BI command-line interface (CLI) is a set of commands used to create and manage Power BI resources (e.g. reports, users and capacity). The CLI is designed to get you working quickly with Power BI, with an emphasis on automation.
Power BI CLI capabilities also make it easy to work with different programming languages and software environments, as it is cross-platform (Linux, Windows, macOS).
In this session I will introduce you to the core capabilities of the Power BI CLI and show how to use it in combination with Bash in an automation script that can be used within a CI/CD pipeline.
Power BI | Tue 11:15 AM — 60 min
The Enterprise Cloud Journey Learning Curve
From the perspective of a DBA who had to work with the fairly new internal Cloud Competence Center of an enterprise to get many hundreds of terabytes of data into Azure, this is a story about the learning curve corporations go through when they suddenly find themselves without the IT department silos and all their slow but proven procedures. It is a story about learning new skills, re-inventing the wheel separately from other departments doing the same, and corporate policies no longer set in stone but changing a couple of times per year. And, of course, some fear and doubt. Cloud Competence Centers seem to go through phases, and with them the whole customer journey; in this session we will name these phases and discuss what went well and what slowed us down or scared us. There will be a fair bit of Azure-related tech talk, but only to accompany the story: this is about the lessons learned.
Go To Cloud | Tue 11:15 AM — 60 min
Advanced Plan Explorer Usage for Tuning Execution Plans
In 2010, SentryOne gave the community a completely free execution plan analysis tool, Plan Explorer, which is widely used today. But many users only utilize its basic features and only scratch the surface of its capabilities.
Join me to learn how you can go beyond the basics. This demo-heavy (and marketing-free) session will teach you advanced use of Plan Explorer by exploring practical T-SQL anti-patterns.
Expert | Tue 1:15 PM — 60 min
Creating a Metadata Driven Orchestration Framework in Azure
Azure Data Factory is the undisputed PaaS resource within the Microsoft cloud platform for orchestrating our data workloads. With a growing set of 100+ linked service connections, combined with an array of control flow and data flow activities, there isn't much Data Factory cannot do in terms of solution management. That said, the service may still require the support of other Azure resources for logging, compute and storage. In this session we will focus on exactly that point by extending our Data Factory with an Azure SQL Database and Azure Functions. The result is the ability to create a dynamic, flexible, metadata-driven processing framework that complements our existing solution orchestrator. Furthermore, we will explore how to bootstrap multiple Data Factories, design for cost with nearly free Consumption Plans, and deliver an operational abstraction over our Data Factory worker pipelines without hiding any error details in layers of dynamic JSON. In addition, we'll explore how this framework translates to Azure Synapse orchestration pipelines.
Business Analytics | Tue 1:15 PM — 60 min
Developing an emotion capturing app with Python on Azure
During this session, the audience will be walked through building an AI-powered web app using Python and Flask running on Microsoft Azure. This web app analyses faces using AI and stores information about the emotions of the faces in an image, notifying the user if it detects sad faces multiple times. The web app also serves up a simple HTML page showing the data captured.
Once a picture has been successfully taken, a web API built in Python will receive the picture and analyze it using the Azure Face API from Azure Cognitive Services (an AI service that can recognize faces in images, as well as estimate the age of a face and whether the person is smiling, amongst other things).
The web API will use this cognitive service to detect the emotion of all the faces. The results are then saved into Cosmos DB, a document database whose documents contain key/value pairs of data stored in JSON format. The API also returns a count of emotions, which the Python app uses to ask whether the user is 'okay' when the number of sad faces is greater than or equal to 3.
The app basically selects an emotion, and the user has to try their best to show that emotion on their face. Once they have their best emotion face on, they take a picture with a camera and the web game checks what emotion they are showing using the Azure Face API. If it is the same as the one they were asked to show, the app returns a success; otherwise, a failure.
This is a simple illustration of how this technology could be used as a self-care app.
AI | Tue 1:15 PM — 60 min
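The "sad faces greater than or equal to 3" check described above can be sketched in a few lines of Python. This is a hypothetical illustration only: the function name `should_check_in`, the `"sadness"` label, and the threshold are illustrative assumptions, not the session's actual code.

```python
def should_check_in(emotions, threshold=3):
    """Return True when the number of 'sadness' detections reaches the threshold."""
    sad_count = sum(1 for e in emotions if e == "sadness")
    return sad_count >= threshold

# Emotion labels as they might come back from successive face-analysis calls.
recent = ["happiness", "sadness", "neutral", "sadness", "sadness"]
print(should_check_in(recent))  # True: three sad faces detected
```

In the app described, the emotion labels would come from the Face API responses stored in Cosmos DB rather than a hard-coded list.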
Keep track of your data pipelines in Azure Data Factory with CI/CD
Companies sometimes struggle to maintain all code in one place and end up developing manually in each environment.
This causes extra work and is hard to maintain.
In this session we will walk you through the steps you need to take when you want to use CI/CD for your Azure Data Factory pipelines across multiple environments, or for your Azure SQL Database.
As a prerequisite you should have a basic understanding of Azure Data Factory, Azure SQL Database and Visual Studio.
After this session you'll be able to automate the deployment of your ADF pipelines and SQL databases, and keep track of changes in the code repository of Azure DevOps.
Newcomer | Tue 1:15 PM — 60 min
Migrating SSIS to the Cloud
Integration Services is a mature ETL product that has been around for more than a decade. A true alternative is still missing in the Azure cloud, but you can migrate your existing projects to Azure. In this session, we'll cover the options you have for your migration project and go into detail on how we can run SSIS packages in Azure Data Factory.
Go To Cloud | Tue 1:15 PM — 60 min
Power BI security deep dive
Power BI contains all your important data, so understanding how it is secured is important. In this deep-dive session we will look at all aspects of Power BI security: how users authenticate, where and how your data is stored, and even how to leverage AAD security to add additional protection.
Power BI | Tue 1:15 PM — 60 min
Single Statement, Many Changes: How One Statement Can Modify Multiple Tables
You can only insert, update, or delete from one table at a time. At least that’s what they tell us when we first learn to write SQL statements. However, that one statement could modify multiple tables, and we may or may not even realize it is happening.
In this session, we will examine how a single data manipulation language (DML) statement can change data in many tables. We will approach this from two different angles: implicit database design, and explicit SQL code and objects. The syntax, performance gains, and gotchas of these different methodologies will be discussed. Finally, we will explore often-overlooked changes that occur further downstream as a result of our DML statement.
When you leave, you will understand and appreciate how a DML statement against one table affects not only that table but can have a ripple effect of changes throughout your entire database.
Developer | Tue 1:15 PM — 60 min
SQL Server and Network Security
The network is often forgotten when securing SQL Server. However, it is a primary attack vector that needs to be designed and configured properly to help add the layers of protection needed.
In this session we will explore the network architecture you should look to implement, as well as how to leverage operating system firewalls and Azure network security configurations. Combined, these will add more depth to the defence of your SQL Server estate and help you meet compliance and regulatory requirements.
Data Administration | Tue 1:15 PM — 60 min
A dive into Azure Synapse Analytics
Azure Synapse Analytics just went into public preview, and will hopefully reach general availability by the end of this year.
Based on the Analytics in a Day session material, I provide an overview of Synapse Analytics, the best practices we have already learned at the Kohera Labs, and how you can benefit most from it.
Go To Cloud | Tue 2:30 PM — 60 min
Categorical Data Encoding for Regression Models
The session explores a dataset curated from a support ticketing system. The goal is to predict, with a high degree of certainty, how long the next support case will take, so that the end user has a resolution timeline. However, many features of the dataset are categorical (i.e. non-numeric), making them impossible to use directly in a regression model. These features therefore require a special type of processing called encoding, so that a suitable statistical representation of their values can be produced. Once the features are successfully encoded, the dataset can be efficiently processed by the regression model.
Five encoding techniques are explored: one-hot, target, hashing, binary and entity embeddings. We will compare each encoding approach using standard neural network metrics. How each encoding is set up will be explained in detail, so that attendees become familiar with the internal workings of each method.
The presentation is suitable for data analysts and data scientists, BI practitioners, and anyone interested in artificial neural networks and text encoding techniques. Definitions of the relevant neural network terminology will be provided in the introduction, making the session also suitable for ambitious novices in neural networks and prediction models.
Newcomer | Tue 2:30 PM — 60 min
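To make the idea of encoding concrete, here is a minimal one-hot encoding sketch in plain Python. This is a hypothetical illustration of the first technique only; the ticket-type categories are invented, and the session itself may well use library implementations rather than hand-rolled code.

```python
def one_hot(values):
    """Map each distinct category to its own 0/1 column."""
    categories = sorted(set(values))
    rows = [[1 if value == c else 0 for c in categories] for value in values]
    return rows, categories

# Categorical ticket types become numeric columns a regression model can consume.
encoded, columns = one_hot(["bug", "feature", "bug", "question"])
print(columns)  # ['bug', 'feature', 'question']
print(encoded)  # [[1, 0, 0], [0, 1, 0], [1, 0, 0], [0, 0, 1]]
```

The other four techniques (target, hashing, binary, entity embeddings) trade this exact-but-wide representation for more compact ones, which is the comparison the session explores.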
Defeating Parameter Sniffing With Dynamic SQL
Parameter sniffing can be a huge performance problem, and fixing it can feel like an exercise in futility.
If you're ready to go beyond recompiles and unknowns and solve problems for real, come to this session.
I'll teach you techniques to fix parameter sniffing problems for good.
Developer | Tue 2:30 PM — 60 min
Do You Speak English? Localized Reports with Power BI
Even though we live in a global world, your end-users might expect to get their reports in their own local language. This talk guides you through the available options and the necessary steps to give the report user control over the language:
• Content of textual columns
• Model (names of tables, columns and measures)
• Power BI Desktop and Power BI service
You will learn to extend Power BI's data model to allow for multi-language support of column content and headlines (and how you can automate the translation of the texts with Azure Cognitive Services). I will show you how to implement currency conversion and how to translate the model's metadata. Finally, we look at how to change the language in Power BI Desktop and in the Power BI service.
Power BI | Tue 2:30 PM — 60 min
Five steps to solving any architecture problem on Azure
Architecture is all about problem solving. It is like a puzzle where you have to find the pieces that fit together to form the right picture. Except on Azure, the puzzle is multidimensional, the pieces keep changing, and there is more than one way to solve it. So you really need strategies for solving the puzzle in an optimal way AND keeping it from falling apart in the future. Scary? Exciting? Yes!
In this session you will learn how to approach architecting a solution on Azure and move step by step from a goal to the details of the solution, using a data platform example use case. A systematic approach allows you to choose the best solution for the environment, understand the possible restrictions of that solution, and better grasp the impact of changing Azure services on your environment.
Data Administration | Tue 2:30 PM — 60 min
Interpret-Text: Building trust and power with explainability
Artificial intelligence (AI) systems have a growing impact on people's everyday lives, so it is fundamental to protect, control and understand the behavior and complexity of different models. Interpretability helps data scientists explain, debug and validate their models, which in turn builds trust in those models.
Interpret-Text, an innovative interpretability toolkit for Natural Language Processing (NLP) models, has been developed by the community and was just announced at Microsoft Build 2020.
During this session, attendees will learn how to explain their models and how to build a visualization dashboard that provides insights into their data, using different state-of-the-art explainers.
Expert | Tue 2:30 PM — 60 min
Taking Models to the Next Level with Azure Machine Learning
Artificial Intelligence and Machine Learning can be used in many ways to increase productivity of business processes and gather meaningful insights, by analyzing images, texts and trends within unstructured flows of data. While many tasks can be solved using existing models, in some cases it is also required to train your own model for more specific tasks, or for increased accuracy.
In this session, we will explore the complete path of integrating text analysis intelligent services into the business processes of Tailwind Traders: starting from pre-built models available as Cognitive Services, up to training a third-party custom neural model for Aspect-Based Sentiment Analysis, available as part of Intel's NLP Architect, using the Azure Machine Learning service. We will talk about cases when one needs a custom model, demonstrate quick ways to create such a model from scratch using AutoML, and show how to fine-tune model hyperparameters using HyperDrive.
AI | Tue 2:30 PM — 60 min
The Untruthful Art - Five Ways of Misrepresenting Data
In this age of information it is imperative to be able to clearly, simply and accurately explain and communicate sometimes complex data.
Right at the intersection of journalism and statistics lies visual communication - the art of using data, charts and maps to convey information. This has been called "the truthful art" as it is an extremely powerful tool that must be wielded with the utmost care.
Just like any other tool, however, it can be used for sinister purposes. This session is all about exploring ways that data can be misrepresented to further a goal, while still seeming perfectly reasonable. It is intended as an eye-opener for anyone interested in the how and why of data deception. We will walk through five stories and explore where the trickery is hidden, discuss more honest ways of communicating the information, and look at ways to spot potential foul play with data.
Business Analytics | Tue 2:30 PM — 60 min
Azure Arc Data Services Fundamentals
Azure Arc Data Services is a solution enabling you to deploy managed Azure data services to any hardware running Kubernetes, on-premises or in any cloud. In this session, you will learn the Azure Arc Data Services architecture and how to deploy and manage Azure data services with it.
Go To Cloud | Tue 3:45 PM — 60 min
Azure Synapse SQL Pools: Everything You Need to Know / Tips and Tricks
Azure Synapse is a limitless analytics service that brings together enterprise data warehousing and Big Data analytics. It gives you the freedom to query data on your terms, using either serverless on-demand or provisioned resources – at scale. Azure Synapse brings these two worlds together with a unified experience to ingest, prepare, manage and serve data for immediate BI and machine learning needs.
In this session we focus on the SQL Pool (previously known as Azure SQL Data Warehouse).
We will cover the high-level architecture, setup, table design, queries, and monitoring.
We can't cover everything in huge detail, but you should have enough information to get you started and avoid the common mistakes and pitfalls.Business AnalyticsTue 3:45 PM — 60 min
Databricks, DeltaLake and You
Data Lakes & Parquet are a match made in heaven, but they’re cranked up to overdrive with the new features of Delta Lake, available as the open source Delta Lake or the premium Databricks Delta. This session will take a deeper look at why parquet is so good for analytics, but also highlight some of the problems that you’ll face when using immutable columnstore files.
We’ll then switch over to Databricks Delta, which takes parquet to the next level with a whole host of features – we’ll be looking at incremental merges, transactional consistency, temporal rollbacks, file optimisation and some deep and dirty performance tuning with partitioning and Z-ordering.
If you’re planning, currently building, or looking after a Data Lake with Spark and want to get to the next level of performance and functionality, this session is for you. Never heard of parquet or delta? You’re about to learn a whole lot more!ExpertTue 3:45 PM — 60 min
Notebooks for Triage and Incident Response: A Tale of DBA Heroism
In this session you will learn how to leverage Azure Data Studio and notebooks to monitor and diagnose SQL Server and Azure SQL instances. Cornerstone tools such as extended events, PowerShell, and dynamic management views (used by DBAs and Azure CSS alike) can be combined with the power of Azure Data Studio and T-SQL notebooks to facilitate managing your data estate. Modernize your troubleshooting and incident response guides into executable, automatable notebooks for efficient root cause analysis, mitigation, and auditing!Data AdministrationTue 3:45 PM — 60 min
Query Tuning Internals for the Advanced SQL Developer
Skilled SQL developers know that the SQL Server query optimizer uses a multi-step process to produce execution plans. But what about deeper components like the parser, the binder, the algebrizer, as well as the optimizer itself? This session will teach you advanced techniques for query tuning as well as surprising behaviors of the query optimization process that can have a dramatic impact on performance, with special attention paid to the processes controlled by the algebrizer, including associative, commutative, and transitive transformations. We will examine a variety of everyday queries whose performance can be greatly improved by applying a deeper understanding of these internal behaviors. Lots of examples and demos!
• Goal 1: Learn advanced and undocumented methods to see the steps of parsing, binding, algebrizing, and query optimization.
• Goal 2: Explore the SQL Server internal memo structure to see how SQL Server uses the heuristics of the algebrizer and query optimizer.
• Goal 3: Walk through several examples of SQL queries whose behavior can be greatly improved when you apply what you’ve learned about the algebrizer and query optimizer.DeveloperTue 3:45 PM — 60 min
Top 10 Essential Actions for Governing Power BI
In this session we will discuss the high-level goals for a well-governed Power BI environment, and why governance improves user experience when it's approached with user empowerment in mind. Ten practical, actionable suggestions for improving your Power BI implementation will be shared which you can use right away.Power BITue 3:45 PM — 60 min
Using Azure ML to disrupt quality inspection with deep neural networks
Currently, quality inspection is based on quality rules, which are not scalable to new products (or product lines).
Introducing a new product line is expensive because industrial cameras and monitors have to be set up and new quality rules have to be developed.
In this session we propose a cheaper, more scalable way to do quality inspection by using portable device cameras and computer vision techniques which can learn quality rules.AITue 3:45 PM — 60 min
Unlocking the power of data
Today’s business environment is evolving at an unprecedented rate. Keeping pace requires every company in every industry to become a data-driven organization, and to derive actionable insights.
Our mission at Microsoft is to help accelerate customers in their digital transformation journeys. Join Rohan as he shares the limitless possibilities of our Azure Data platform to unlock new opportunities for our customers – and enable them to make sense of their data responsibly and at any scale.PlenaryTue 5:00 PM — 45 min