In this full-day seminar, you will gain core knowledge about how SQL Server works, along with a host of information covering memory usage, the optimizer, indexes, statistics, and table design.
Most importantly, you will be exposed to an elaborate and detailed tour of T-SQL patterns and anti-patterns, many of which will enlighten or surprise you. You will also learn about default settings to override, trace flags that can help query analysis, optimizing tempdb, and how to avoid “it worked on my machine” syndrome.
Packed with sample code, live demos, and even pop quizzes, this session will leave you armed with many new insights into making your design phase more robust, your queries more efficient, and life easier for your DBAs, fellow developers, and end users.
Advanced data analytics does not live only in the realm of data scientists. The tools now exist in SQL Server 2016 to perform advanced analytics with R and Power BI. Starting with SQL Server 2017, Python can also be part of a SQL Server deployment solution, so we will review that as well. Attendees will see how to apply these techniques to analyze a database server.
- Learn how to code applications in R to provide data insight and data visualizations for use within SQL Server.
- Discover some of the new functionality of SQL Server 2017 by learning how to incorporate Python code.
- Develop an understanding of some of the algorithms used in data science and how to apply them.
- Extend the functionality of SQL Server by integrating R code to provide insight into the performance of SQL Server.
- Understand different ways to visualize the results, including storing within SQL Server, creating SSRS reports and visualizing in Power BI.
- Introduction to Data Science concepts.
- Application of Data Science algorithms in R.
- SQL Server R Internals and Integration.
- Tuning SQL Server to run and monitor R code optimally.
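To give a flavor of the kind of data-science algorithm the agenda refers to, here is a minimal, illustrative Python sketch (not material from the session itself): an ordinary least-squares fit for a one-variable linear model, computed from first principles on made-up data.

```python
# Ordinary least squares for a one-variable linear fit,
# computed from first principles (illustrative data).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.0, 6.2, 7.9]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# slope = covariance(x, y) / variance(x)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

print(round(slope, 2), round(intercept, 2))  # 1.96 0.15
```

In the session's context, the same kind of computation would run inside SQL Server via R (or, from SQL Server 2017, Python) rather than standalone.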
The availability and storage landscape for SQL Server has changed in the past few years. Not only are there more options and combinations for deploying SQL Server instances than in the past, but we are also moving towards a software-defined world. There are many new capabilities, both in SQL Server itself and in the underlying platform, that can increase the availability of your instances and databases; some will also increase performance and give you more flexibility.
These features can be cornerstones of advanced SQL Server architectures which need to be planned and deployed correctly. If you want to stay ahead of the curve and take advantage of what is available – or get a glimpse of what is coming – you need to attend this full day session to learn about things like:
- Distributed Availability Groups
- Domain Independent Availability Groups
- In-place WSFC upgrades
- Scale Out File Server
- Storage Spaces Direct
- Storage Replica
- SQL Server v.Next on Linux and its availability story
- Quorum configuration for multi-site AGs and FCIs
- Tips, tricks, and gotchas not only for on premises, but also for virtualization and public cloud deployments
Outside of anything in v.Next, everything taught and demonstrated can be used right now. Register to take advantage of these new paradigms and incorporate them into your mission-critical SQL Server architectures today. Note that this session assumes you know the basics of Windows Server Failover Clusters, Availability Groups, and Failover Cluster Instances, and will not cover fundamentals of availability or storage.
Allan Hirt Precon
If you have ever wanted to make your head hurt with Power BI, this session is for you! You will learn the parts of Power BI that help you take full advantage of reports and usage in the service, from creating reports to deployment and keeping your data fresh. The debugger will also make an appearance!
During the pre-con, we will look at the following areas:
- M Syntax within Excel and Power BI Desktop
- Modeling and DAX
- How Power BI Premium fits in
- Data Refresh and Gateway usage
- Implementing Row-Level Security
- Embedded, Custom Visuals, and the APIs
And what's a full day without time for your questions?
Adam Saxton Precon 400
You've heard a lot about PowerShell recently because a lot has changed. Now, you don't need to be a coder to get things done; you can lean on hundreds of community-created commands that solve many of the problems we all share.
In development and need a nightly refresh? Architecting and need to find duplicate indexes fast and easy? Putting on your BI hat and need quick importing and exporting of data? Join us and we'll supply a hands-on lab for your laptop where you can experience PowerShell’s realized potential, crafted by both Microsoft and the SQL community.
Whether you need a prepackaged solution or the building blocks to roll your own fix, you will leave with awesome tools to manage your most annoying problems. And maybe - hopefully - you'll even be confident enough to contribute your own solutions to share with the community.
Chrissy Lemaire, Klaas Vandenberghe, Rob Sewell Precon
SQL Server 2017 adds Python and Graph databases as data-scientist toolkits embedded into SQL Server. These, combined with the existing features of R Services and R Server, make SQL Server a production-ready data-science environment. In this talk, Bart & Breght make their assessment – as data scientists with limited SQL Server experience – of SQL Server as a data-science tool. Naturally, we'll evaluate the link with AzureML, Azure HDInsight, and CNTK, the open-source deep-learning toolkit from Microsoft.
Bart Van Der Vurst 100
Everyone is talking about advanced analytics or data science these days and many companies are interested in taking their first steps in these new fields of data analytics. But how do you get started with new techniques like machine learning? What kind of new hardware or software do you need to buy to get started and how do you get your IT department to implement and support those choices?
Before you decide on designing an advanced analytics solution on-premises, why not give it a go on a platform that only charges money for the time you are actually using it? Azure Machine learning is a cloud service that enables you to easily build, deploy and share analytics solutions with all the flexibility of the cloud. The perfect platform to start with advanced analytics without having to invest a lot of money!
In this session we are going to take a close look at AzureML, from how you can build your first machine learning model, to connecting AzureML directly to your on-premises SQL Server databases. Don’t worry if you are not a data scientist. This session will be free of heavy statistics and math and will focus on helping you take those first steps in advanced analytics!
Building robust and resilient Data Platform solutions for both High Availability and Disaster Recovery can be complex and costly. The Always On technologies incorporated in the Microsoft SQL Server stack help Data Platform Engineers do just this: create solutions that span multiple locations to protect the data that we are tasked with managing on a daily basis.
Together we will walk through the architecture patterns, technology requirements, and configuration options that you need to know in order to build a solid Data Platform. Understanding the implementation differences between on-premises and cloud-based deployments is important, especially as there are specific requirements that need to be met for up-time SLAs to be applicable, as well as for mitigating region failures.
All of these elements mean that it is more than just simple wizards to follow in order to have a solid, reliable, and robust SQL Server Data Platform.
Have you ever wanted an overview of your cubes and their dimensions, attributes and measures?
This session will show you how to use Analysis Services DMVs to extract information about your cubes and collect it for later use. We will then see how you can report on top of that data, thereby documenting your SSAS cubes, and hear about different scenarios where it might be useful.
By attending this session, you will learn how to extract, transform, and load the data into a database using SSIS. You will learn how to create reports that show the relevant information for different scenarios, and how my company uses this information for different purposes. Use cases for this data include aligning dimensions across cubes and creating a bus matrix for documentation purposes.
At the end of the session, attendees will leave with all the code needed to get started on their own, as well as the knowledge to extend the solution for their own company or clients.
To understand the capabilities of a cloud computing platform, let us survey what is available and offered on Microsoft Azure.
Microsoft Azure can move, store, and analyze data within the cloud. It is essential to evaluate the many opportunities and options Azure offers for data insights. In this session, we will talk about strategies for data storage, data partitioning, and availability options in Azure, and take a tour of how these Azure components can help you achieve success with your Big Data platform.
Columnstore was introduced in SQL Server 2012, and improved upon in the following releases.
SQL Server 2016 also saw the introduction of the Operational Analytics feature.
Columnstore has matured, and people are really starting to put it to use.
But is columnstore always the solution to your performance issues?
Are there situations where columnstore isn't the answer?
Join me for a look at when to use and when not to use columnstore in a SQL Server 2016 environment. We'll be looking at both traditional nonclustered and clustered columnstore, as well as the Operational Analytics role. We will also have a look at using In-Memory OLTP in your DWH environment.
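As a rough, purely conceptual illustration of why columnar storage can help analytical queries (a Python sketch, not material from the session): an aggregate over one column in a column store never has to touch the other columns' data, whereas a row store visits every whole row.

```python
# Conceptual sketch: row-store vs. column-store layout.
rows = [
    {"order_id": 1, "region": "EU", "amount": 100},
    {"order_id": 2, "region": "US", "amount": 250},
    {"order_id": 3, "region": "EU", "amount": 175},
]

# Row-store aggregation: every whole row is visited.
row_total = sum(r["amount"] for r in rows)

# Column-store layout: one contiguous list per column.
columns = {
    "order_id": [1, 2, 3],
    "region": ["EU", "US", "EU"],
    "amount": [100, 250, 175],
}

# Columnar aggregation: only the 'amount' segment is scanned.
col_total = sum(columns["amount"])

print(row_total, col_total)  # 525 525
```

Real columnstore indexes add segment compression and batch-mode execution on top of this layout, which is where most of the gains come from.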
In this session we take a deep dive into Azure Data Lake, a new feature on Azure that will enable almost anyone to work with Big Data. It combines C# and SQL into a language that is far more intuitive than what we are used to. Azure Data Lake Store is our new repository for data of various origins – we can collect, store, and share data from this lake as we see fit. Azure Data Lake Analytics is a new way to scale and run your analytics on Azure and Big Data; it introduces U-SQL, a new language combining C# and T-SQL that makes the task of analyzing Big Data easier and more comprehensible. Azure Data Lake Tools for Visual Studio provide an integrated development environment that spans the Azure Data Lake, dramatically simplifying authoring, debugging, and optimization for processing and analytics at any scale. After this session, you will have an understanding of the new feature, and will hopefully be inspired to use it in either a Proof of Concept or a Production scenario.
The newest additions to Data Lake Analytics are execution of R, Python, and Cognitive Services – the parts this session will focus on.
DevOps (Developer Operations) is defined as a development process that emphasizes communication and collaboration between product management, software development, and operations, automating the process of solution integration, testing, deployment and infrastructure changes.
All projects involving on-premises software and hardware benefit from DevOps. But hybrid and cloud-only architectures require it – there are so many variables in just setting up the architecture that a dedicated team or developer is often required. And Advanced Analytics projects involving Data Science have even more requirements specific to those tasks.
This session explains the processes, procedures, and platform options you have available, along with the tools you need to create an effective DevOps solution for your Advanced Analytics projects.
Buck Woody 300
How do you move classic SSIS packages to the cloud for the ETL process? Azure offers Data Factory, Runbooks, Logic Apps, and Functions. What is hidden behind the individual services, and what can you do with them? The examples here show how these components can be assembled to manage a DWH in the cloud.
Alexander Klein 100
Geocoding is the process of converting addresses into geographic coordinates (latitude and longitude).
Geographic coordinates can then be converted into SQL Server spatial data type and stored in a database.
In this session we will learn:
- how to retrieve coordinates from an address, using Google Maps API and Bing API;
- how to get a free license from API services;
- how to automate the process and convert coordinates into a geography data type using SQL Server Integration Services;
- how to store the data into a database;
- how to perform calculation on spatial data;
- how to represent data on a map using Reporting Services and Power BI.
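To give a flavor of the kind of spatial calculation the list above refers to, here is a small, illustrative Python sketch (the session itself works with SQL Server's geography type; the coordinates below are approximate and only for demonstration): the haversine formula for great-circle distance between two latitude/longitude points.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Brussels to London (approximate coordinates) -- roughly 320 km
d = haversine_km(50.8503, 4.3517, 51.5074, -0.1278)
print(round(d))
```

In SQL Server, the equivalent calculation is a one-liner with the geography type's built-in distance method, which is one reason storing geocoded coordinates as spatial data pays off.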
This session presents an overview of Azure Cosmos DB, a globally distributed, massively scalable, low (single-digit millisecond) latency, fully managed NoSQL database service that is designed specifically for modern web and mobile applications. Like other NoSQL platforms, Cosmos DB supports a schema-free data model, built-in partitioning for sustained heavy-write ingestion, and replication for high availability. But only Cosmos DB offers turnkey global distribution, automatic indexing, and SLAs for guarantees on 99.99% availability, throughput, latency, and consistency.
We begin by explaining NoSQL databases in general, and how they compare with traditional relational database platforms. Then we tour the many features of Cosmos DB, including its multi-model capabilities which allow you to store and query schema-free JSON documents (using either DocumentDB or MongoDB APIs), graphs (Gremlin API), and key/value entities (tables API).
You’ll learn about global distribution, scale-out partitioning, tunable consistency, custom indexing, attachments, and more. We’ll also explore client development using the many available SDKs and APIs. Attend this session, and get up to speed on Cosmos DB today!
Leonard Lobel
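As a rough conceptual illustration of the scale-out partitioning idea (a Python sketch only; Cosmos DB uses its own internal hash function and manages physical partitions for you): documents are routed to partitions by hashing their partition key, which spreads writes evenly across partitions.

```python
NUM_PARTITIONS = 4  # illustrative; the service manages this for you

def partition_for(partition_key: str) -> int:
    """Route a document to a partition by hashing its key (conceptual)."""
    h = 0
    for ch in partition_key:
        h = (h * 31 + ord(ch)) % 2**32  # simple, stable string hash
    return h % NUM_PARTITIONS

docs = ["user-1", "user-2", "user-3", "user-42"]
placement = {d: partition_for(d) for d in docs}
print(placement)
```

The key property is that the routing is deterministic: the same partition key always lands on the same partition, so point reads and writes for one key never fan out.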
Have you ever looked at an execution plan that performs a join between two tables and wondered what a "Left Anti Semi Join" is? Joining two tables in SQL Server isn't always as simple as it looks! Join me in this session where we will take a deep dive into how join processing happens in SQL Server.
In the first step, we lay out the foundation of logical join processing. We will then dive deeper into physical join processing in the execution plan, where we will also encounter the "Left Anti Semi Join".
After attending this session, you will be well prepared to understand the various join techniques used by SQL Server, and interpreting joins from an execution plan will be the easiest part for you.
Klaus Aschenbrenner
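To take some of the mystery out of the operator's name ahead of time, here is a small, illustrative Python sketch of the *logic* of a left anti semi join (SQL Server typically produces this operator for constructs like NOT EXISTS): return each left row that has no matching right row, without duplicating or extending the left rows.

```python
customers = [
    {"id": 1, "name": "Ann"},
    {"id": 2, "name": "Ben"},
    {"id": 3, "name": "Cas"},
]
orders = [
    {"order_id": 10, "customer_id": 1},
    {"order_id": 11, "customer_id": 3},
]

# Left anti semi join: left rows with NO matching right row.
# SQL equivalent: SELECT * FROM customers c
#                 WHERE NOT EXISTS (SELECT 1 FROM orders o
#                                   WHERE o.customer_id = c.id)
right_keys = {o["customer_id"] for o in orders}
no_orders = [c for c in customers if c["id"] not in right_keys]

print([c["name"] for c in no_orders])  # ['Ben']
```

"Semi" means only left-side columns come out (no joined right-side data), and "anti" means the match condition is negated.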
You know how to use Power BI or Power Query to load data from different sources, but you want to go further than the UI allows? You're tired of looking at auto-generated code and don't understand the key concepts of this powerful ETL language? Then this session is for you. Over 60 minutes, you'll learn how to write code in this functional language. Starting with syntax and primitive and structured values, we'll quickly jump to the creation of expressions and functions and the usage of some recurring patterns. This session contains many demos and real-life use cases that you'll be able to transpose to your own projects.
Cédric Charlier 200
A lot of changes have come to Power BI. Do you understand how Premium fits in? What is an app workspace and why would you use it?
And, what the heck is a capacity? We will take a look at Premium, Apps and App Workspaces along with the changes to embedding.
We will also see how Power BI Report Server fits into the picture.
Adam Saxton 100
We hear a lot about Machine Learning, Artificial Intelligence, and Deep Learning. What are they, and how can you apply them to real-world problems?
Buck Woody from the Machine Learning and Data Science team at Microsoft will explain these concepts and walk through a real-world demo of Microsoft products and services that can help you implement this new technology.
Buck Woody
PowerApps and Power BI are related, but what can we do with them, and how can we make them work together? In this session we will explore some options... Dare we dream of real write-back possibilities?
Ken Geeraerts 200
Extended Events provide deep insight into SQL Server's behavior and allow us to gather information not available by other means. However, compared to other technologies such as SQL Trace and Event Notifications, a way to react to the events as soon as they happen seems to be lacking.
In this session we will see how the Extended Events streaming API can be used to process events in a near real-time fashion. We will demonstrate how this technology enables new possibilities to solve real world problems, such as capturing and notifying deadlocks or blocking sessions.
With the release of SQL Server 2017, Integration Services received some love with the addition of its very own Scale Out feature.
In this session we'll see what kind of impact horizontal scaling has on your ETL's performance and how it performs compared to vertical scaling.
Introducing the new scripting language for tabular models. Before SQL Server 2016, tabular models were wrapped in multidimensional constructs. TMSL is the new native language for tabular, built on JSON – this makes it easy to understand, modify, and deploy. During this session I will go through and explain some examples of generating an SSAS tabular model using the new TMSL.
I will spend some time showing and explaining a real-world example of pushing measure creation to the key business stakeholders to ensure quick time to market.
The last thing I will show is how to speed up your development, freeing up to 50% of the time you spend building tabular models, with the simple and advanced features of Tabular Editor 2.0.
Bent Pedersen
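Because TMSL is just JSON, model definitions can be assembled programmatically in any language. A hedged Python sketch (the property names follow the general TMSL/tabular object model shape, but treat this as an illustrative skeleton rather than a complete, deployable script):

```python
import json

# Minimal TMSL-style createOrReplace command assembled in code.
# Illustrative skeleton only: a real deployment needs full table,
# column, and data-source metadata.
measure = {
    "name": "Total Sales",
    "expression": "SUM ( Sales[Amount] )",
}

command = {
    "createOrReplace": {
        "object": {"database": "SalesModel"},
        "database": {
            "name": "SalesModel",
            "compatibilityLevel": 1200,
            "model": {
                "tables": [
                    {
                        "name": "Sales",
                        "columns": [
                            {"name": "Amount", "dataType": "decimal",
                             "sourceColumn": "Amount"}
                        ],
                        "measures": [measure],
                    }
                ]
            },
        },
    }
}

script = json.dumps(command, indent=2)
print(script[:60])
```

Generating measures this way is one route to the "push measure creation to business stakeholders" idea: the measure expressions can come from a simple input file and be serialized into the deployment script.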
When I read that Microsoft had added graph data to SQL Server 2017, I was intrigued as to what graph data is, so I started doing some research.
This presentation is the culmination of my investigations.
If you design complex OLTP relational databases or have data that doesn't fit the rigid hierarchy of a relational database then this talk is for you.
You may be in for a surprise!
Some of the questions we will look at:
What is Graph Data?
Who uses it?
What is it used for?
How does it compare to traditional relational database design?
What other companies support graph databases?
How does it work in SQL 2017?
Is there a new language to learn?
What is the so-called Kevin Bacon problem?
Will it replace traditional relational database design within the next 10 years?
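As a teaser for the "Kevin Bacon problem" above: it is a shortest-path question over a graph of actors connected by shared films. A tiny, illustrative Python sketch of the idea (a breadth-first search over a hand-built co-star graph; SQL Server 2017's graph tables query such data with MATCH instead):

```python
from collections import deque

# Tiny co-star graph: who has appeared with whom.
costars = {
    "Kevin Bacon": ["Tom Hanks"],
    "Tom Hanks": ["Kevin Bacon", "Meg Ryan"],
    "Meg Ryan": ["Tom Hanks", "Billy Crystal"],
    "Billy Crystal": ["Meg Ryan"],
}

def bacon_number(actor: str, graph: dict) -> int:
    """Degrees of separation from Kevin Bacon via breadth-first search."""
    if actor == "Kevin Bacon":
        return 0
    seen, queue = {"Kevin Bacon"}, deque([("Kevin Bacon", 0)])
    while queue:
        current, dist = queue.popleft()
        for nxt in graph.get(current, []):
            if nxt == actor:
                return dist + 1
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return -1  # not connected

print(bacon_number("Billy Crystal", costars))  # 3
```

This "hop count" query is exactly the kind of question that is awkward in a traditional relational design (it needs recursive joins) but natural in a graph model.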
Is this real?? This was the first thing I asked myself when I heard the news. SQL Server running on a Linux box!!
Yes it is! I will show you this.
I will walk you through using SQL Server on Linux. Go for the easy setup with Docker if you want a quick start, or go for the full-blown setup on a compatible Linux distribution.
Come and sit on the dock of the Linux bay for some fun with SQL on Linux.
Psst: my Linux machines will be clustered with an Availability Group.
Frederik Bogaerts 200
Availability groups (AGs) and failover cluster instances (FCIs) are designed to increase availability for your SQL Server implementations. However, if you run into issues, you may experience unplanned downtime. How do you approach diagnosing and hopefully solving the problems you encounter? This session will cover the most common troubleshooting tips, not only to deal with issues after you are live in production, but also to prevent problems proactively before going live, based on nearly 20 years of clustered SQL Server implementations.
Whether you are using SQL Server on Windows Server or the upcoming SQL Server 2017 on Linux – with AGs, FCIs, or both – this session provides key advice to ensure that your highly available SQL Server instances and databases remain that way.
Allan Hirt
Azure SQL Database becomes more and more interesting for data professionals. The migration process of an on-premises database to an Azure SQL Database can be quite challenging. Learn how to migrate your schema and data into Azure SQL Database!
Pieter Vanhove 200
Get a thorough tour of the new T-SQL features in SQL Server 2016, SQL Server 2017, and Azure SQL Database.
We'll talk about why the new functionality was introduced, the older syntax it replaces, and how it works, and we'll dig into a few features to show when you should use them and when you shouldn't.
Aaron Bertrand 200
Sometimes things don’t work out as planned. The same thing happens to our SQL Server execution plans. This can lead to horrible slow queries, or even queries failing to run at all.
In this session you will see some scenarios demonstrated where SQL Server produces a wrong plan, you will learn how to identify them and what you can do to avoid them.
You will also learn more about Adaptive Query Processing, a new feature in SQL Server 2017 that allows SQL Server to adjust a wrong plan while it is being executed. So, if running queries performantly is one of your concerns, don't miss out on this session!
Nico Jacobs 300