
Sponsored by:

Nutanix

INTERACTIVE WHITEPAPER

SQL 2016 and Hyperconverged Infrastructure:

Migration, Deployment and Optimization Best Practices

by Bill Kleyman

For today’s enterprises, data has become the lifeblood of the business, and the volume of that data keeps growing as new data sources come online.

Microsoft SQL Server 2016 introduces several improvements to meet the challenges of storing and processing these large amounts of data.

For example, the new SQL Server 2016 architecture provides up to 10X data compression using columnstore index technology. However, the infrastructure that runs your SQL Server 2016 environment must also be up to these new demands, providing the right balance of performance, availability, and resource optimization. Installing the fastest processors without considering the solution as a whole (storage, management, upgrades, protection, and so on) can lead to disappointing long-term results. This becomes especially apparent when you need to scale.
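
To make that concrete, here is a minimal sketch of how a clustered columnstore index is applied in SQL Server 2016; the table and index names are hypothetical:

```sql
-- Hypothetical fact table for illustration.
CREATE TABLE dbo.SalesFact (
    SaleID    BIGINT        NOT NULL,
    ProductID INT           NOT NULL,
    SaleDate  DATE          NOT NULL,
    Quantity  INT           NOT NULL,
    Amount    DECIMAL(18,2) NOT NULL
);

-- Store the table column-wise in compressed segments; this is the
-- feature behind the large compression ratios cited for SQL Server 2016.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_SalesFact ON dbo.SalesFact;
```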

It is becoming ever more important to deploy a SQL Server 2016 environment, or any interim SQL Server upgrade on the way to 2016, on the right type of infrastructure. In this whitepaper, we’ll discuss what’s new in SQL 2016, how to plan and prepare a migration, and some challenges to be aware of as you build your plans. From there, we’ll cover what to look for to ensure SQL Server can handle your organization’s demands, along with best practices around hyperconverged infrastructure, resiliency, and availability. Finally, we’ll examine the evolution of hyperconvergence into an enterprise cloud and what that approach means for how you run SQL.

SQL Server 2016

What’s New and What’s Great

Looking at the SQL Server platform over the past few years, it becomes clear that SQL Server 2016 is one of the biggest leaps forward in the Microsoft data platform. SQL 2016 gives admins and the business real-time insights across transactional and analytical data, with a scalable database platform that has everything built in. That means in-memory performance, new security innovations, and high availability. Organizations can also apply advanced analytics to make mission-critical applications intelligent.

Beyond some of the big updates, there are a few core features to keep an eye on:

PolyBase. Microsoft introduced PolyBase, a SQL Server connector to Hadoop (and Azure Blob Storage), in its data warehouse appliance, the Analytics Platform System, in 2015. Now that functionality is incorporated into the regular on-premises product. This feature benefits you if your regular data processing involves a lot of large text files: they can be stored in Azure Blob Storage or Hadoop and queried as if they were database tables. It’s important to work with a virtualization and data center solution that can support this type of extension, an area where HCI can help.
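
As a rough sketch of how PolyBase exposes external files as tables, consider the following; all object names, file paths, and the Hadoop endpoint below are hypothetical, and PolyBase must already be installed and configured for Hadoop connectivity:

```sql
-- Point SQL Server at the external system (hypothetical Hadoop endpoint).
CREATE EXTERNAL DATA SOURCE HadoopCluster
WITH (TYPE = HADOOP, LOCATION = 'hdfs://namenode:8020');

-- Describe how the flat files are laid out.
CREATE EXTERNAL FILE FORMAT CsvFormat
WITH (FORMAT_TYPE = DELIMITEDTEXT,
      FORMAT_OPTIONS (FIELD_TERMINATOR = ','));

-- Large text files in HDFS now behave like a table.
CREATE EXTERNAL TABLE dbo.WebLogs (
    LogDate DATE,
    Url     NVARCHAR(400),
    Hits    INT
)
WITH (LOCATION = '/weblogs/',
      DATA_SOURCE = HadoopCluster,
      FILE_FORMAT = CsvFormat);

-- Queried with ordinary T-SQL, joinable against local tables.
SELECT TOP (10) Url, SUM(Hits) AS TotalHits
FROM dbo.WebLogs
GROUP BY Url
ORDER BY TotalHits DESC;
```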


JSON Support. In addition to supporting direct querying of Hadoop, SQL Server 2016 adds support for a key component of web applications: JavaScript Object Notation (JSON).
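
A minimal illustration of both directions, using the FOR JSON and OPENJSON constructs; the Orders table is hypothetical:

```sql
-- Serialize a rowset to JSON text.
SELECT OrderID, CustomerName, Total
FROM dbo.Orders
FOR JSON PATH;

-- And the reverse: shred JSON text into rows and columns.
DECLARE @json NVARCHAR(MAX) =
    N'[{"id":1,"name":"Widget"},{"id":2,"name":"Gadget"}]';

SELECT *
FROM OPENJSON(@json)
WITH (id INT '$.id', name NVARCHAR(50) '$.name');
```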


Row-Level Security. This restricts which users can view which rows in a table, based on a predicate function. SQL Server 2016 introduces this feature, which is very useful in multi-tenant environments where you may want to limit data access based on customer ID.
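
A minimal sketch of the two pieces involved, a predicate function and a security policy; the tenant scheme, table, and names here are hypothetical:

```sql
-- Predicate function: a row is visible only when its TenantID matches the
-- tenant stored in SESSION_CONTEXT by the application.
CREATE FUNCTION dbo.fn_TenantFilter (@TenantID INT)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS fn_result
       WHERE @TenantID = CAST(SESSION_CONTEXT(N'TenantID') AS INT);
GO

-- Bind the predicate; every query against dbo.Orders is now filtered.
CREATE SECURITY POLICY dbo.TenantPolicy
    ADD FILTER PREDICATE dbo.fn_TenantFilter(TenantID) ON dbo.Orders
    WITH (STATE = ON);
```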


Always Encrypted. SQL Server has long supported column-level encryption, encryption at rest, and encryption in transit. Always Encrypted is new functionality that uses an enhanced client library at the application tier, so data stays encrypted in transit, at rest, and even while live in the database.
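
For illustration, a column declared with Always Encrypted might look like the following; it assumes a column master key and a column encryption key (CEK1, a hypothetical name) have already been provisioned, and the table is hypothetical:

```sql
-- Deterministic encryption requires a BIN2 collation on the column.
CREATE TABLE dbo.Patients (
    PatientID INT IDENTITY PRIMARY KEY,
    SSN CHAR(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK1,
                        ENCRYPTION_TYPE = DETERMINISTIC,
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256')
);
```

The database engine only ever sees ciphertext; encryption and decryption happen in the enhanced client driver on the application side.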


In-Memory Enhancements. SQL Server 2014 introduced in-memory tables. In SQL Server 2016 this feature is vastly improved, with support for foreign keys, check and unique constraints, and parallelism. Additionally, tables up to 2TB are now supported (up from 256GB). Columnstore indexes, commonly used in data warehouse workloads, can also be combined with in-memory tables.
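
A minimal sketch of a memory-optimized table; the name and bucket count are hypothetical, and the database needs a MEMORY_OPTIMIZED_DATA filegroup first:

```sql
CREATE TABLE dbo.SessionState (
    SessionID UNIQUEIDENTIFIER NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserID    INT       NOT NULL,
    LastTouch DATETIME2 NOT NULL,
    -- Check constraints on memory-optimized tables are new in SQL Server 2016.
    CONSTRAINT CK_LastTouch CHECK (LastTouch >= '2000-01-01')
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```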


SQL Server 2016

Migrating and Deploying

SQL Server is often seen as a critical part of the business.

That’s why deploying or migrating to the technology requires care, planning, and the right infrastructure. Planning and executing a SQL Server migration can be complex and risk-prone; the right approach and implementation of best practices can help you avoid the pitfalls of migration, minimize associated risk, and complete the migration more quickly. Too often we think of SQL Server simply as a database engine, when it is actually an entire data platform supporting data integration, data warehousing, analytics, and reporting. Let’s examine five best practices for deploying or migrating to SQL Server 2016.

1.

Start smart and plan ahead.

The idea is simple: Discovery of your environment will be the key to success. That means understanding your applications, what they require, what types of hardware you have right now, and where your SQL ecosystem will help your business succeed. If you have SQL already, start with an inventory of all servers running SQL Server. The Microsoft Assessment and Planning (MAP) toolkit can help identify all instances of SQL Server deployed in the enterprise with specific versions of SQL Server running on each server. There are other tools out there too, like Nutanix’s Xtract for Databases (formerly called SQL Server Mobility Service), which scans existing environments for sizing and migration purposes.
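
Alongside those tools, a simple query run against each discovered instance can feed the inventory; this sketch uses the built-in SERVERPROPERTY function:

```sql
-- Capture version/edition basics for the migration inventory.
SELECT SERVERPROPERTY('MachineName')    AS MachineName,
       SERVERPROPERTY('InstanceName')   AS InstanceName,
       SERVERPROPERTY('Edition')        AS Edition,
       SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel')   AS ProductLevel;
```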

From there, make sure you clearly understand all the SQL Server features that are currently implemented, including the business intelligence stack, high availability (HA), disaster recovery (DR), and others. We’ll discuss HA and availability in an upcoming section. Finally, don’t be afraid to leverage powerful migration tools. Microsoft SQL Server Upgrade Advisor, for example, can identify issues that may have an impact on upgrades to more recent versions of SQL Server.

2.

Understand dependencies and map them out.

Even if you don’t have SQL currently installed, it’s critical to map out your applications, data points, and even user groups. What do they require? Are there application dependencies to consider? If you already have SQL, develop a comprehensive inventory of SQL Server database dependencies and workload characteristics, such as OLTP. Migration and consolidation involve more than just databases: dependencies such as SQL Server logins, SQL Agent jobs, linked servers, and other configurations must be part of the plan for migration and upgrade. Plan time and resources to catalog and map applications to databases. Scripts can be used to determine the login name and machine name for connections being made to individual databases; a minimal example follows below. During this process, keep an eye on third-party applications accessing your SQL ecosystem, as well as legacy and custom data solutions.
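
One minimal version of such a script, using the sys.dm_exec_sessions DMV; note that it only captures connections open at the moment it runs, so collect results over time:

```sql
-- Who is connected, from which machine and program, to which database.
SELECT s.login_name,
       s.host_name,
       s.program_name,
       DB_NAME(s.database_id) AS database_name,
       COUNT(*)               AS session_count
FROM sys.dm_exec_sessions AS s
WHERE s.is_user_process = 1
GROUP BY s.login_name, s.host_name, s.program_name, s.database_id;
```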

3.

Plan out costs and licensing.

The costs of migration can increase quickly once you factor in SQL Server licensing. Licensing is charged per CPU in SQL 2000, 2005, and 2008; the model changes to per core in SQL 2012 and later versions, which makes it essential to evaluate workloads at the CPU level. Without proper planning, licensing costs can increase significantly. Virtualizing SQL Server helps you right-size instances to control licensing costs, and licensing for maximum virtualization can help by consolidating multiple instances where CPU oversubscription is possible.
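
As a starting point for core counting, SQL Server exposes its own view of the CPU layout; the socket figure below is only a rough estimate:

```sql
-- Logical CPU layout as SQL Server sees it, for core-based license sizing.
SELECT cpu_count                     AS logical_cpus,
       hyperthread_ratio,
       cpu_count / hyperthread_ratio AS sockets_estimate
FROM sys.dm_os_sys_info;
```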

4.

Data protection and uptime.

Plan to evaluate and standardize high availability and disaster recovery solutions based on the target infrastructure and the SQL Server version you intend to implement. Consider alternatives to traditional Microsoft SQL Server clustering, such as SQL Server AlwaysOn Availability Groups, available in SQL Server 2012 and later (see the sketch below and the challenges in the next section), as well as VM host clustering and physical-infrastructure DR solutions.
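
For orientation, a minimal two-replica availability group definition looks roughly like this; server, database, and endpoint names are hypothetical, and a Windows Server Failover Cluster with AlwaysOn enabled on both instances is assumed:

```sql
-- Two synchronous replicas with automatic failover (names hypothetical).
CREATE AVAILABILITY GROUP SalesAG
FOR DATABASE SalesDB
REPLICA ON
    N'SQLNODE1' WITH (ENDPOINT_URL      = N'TCP://sqlnode1.corp.local:5022',
                      AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                      FAILOVER_MODE     = AUTOMATIC),
    N'SQLNODE2' WITH (ENDPOINT_URL      = N'TCP://sqlnode2.corp.local:5022',
                      AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                      FAILOVER_MODE     = AUTOMATIC);
```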

Aside from SQL failover best practices, it’s critical to involve your infrastructure as well. In many instances, deploying SQL 2016 on a hyperconverged infrastructure has significant benefits, especially for high availability, redundancy, and resiliency. (We’ll discuss hardware considerations later in this paper.)

5.

Keep an eye on the entire data platform and always test.

Depending on your use case and deployment, you may need to spend extra time planning other aspects of your SQL ecosystem, such as BI solutions, analytics, or specific reporting services. Finally, a key consideration is testing and validation: plan for application testing and leave enough time to adequately prepare and execute. Remember, the end goal is a modern, cost-efficient SQL infrastructure, with validated processes, tools, and procedures for successfully migrating the SQL Server data platform.

SQL Server 2016

Avoiding 5 Key Challenges

As powerful as SQL 2016 is, there are some challenges to be aware of. Consider the following:

1. AlwaysOn.
AlwaysOn Availability Groups deliver high availability along with read scale-out, but they come with caveats. In some cases the feature requires specific application changes, such as updating connection strings to target the availability group listener, so be ready for that and budget a bit of additional app-dev work.
2. Replication Load Balancing.
This is a great feature to help with load balancing as well as read/write splitting. However, there are some things to be aware of: in some cases this feature will require specific application changes, so be ready for that, and it may require a bit of additional app-dev work.
3. Controlling Licensing Costs.
We discussed this earlier, but planning is key. Map out your primary and secondary SQL environments to ensure that you don’t pay more per core than you need to.
4. Cluster-Level Control and Visibility.
Multi-server SQL architectures can get complex quickly. Ensure that you have real-time visibility and good analytics. In some cases, your underlying HCI solution can help with enhanced metrics and comprehensive SQL monitoring capabilities.
5. Downtime and Failover.
This is really where your HCI vendor can help. Without a good setup, critical components won’t fail over beyond a single data center. Integrating with a good hyperconverged solution that spans multiple data centers can help create a powerful, highly available SQL environment.

SQL Server 2016

Deploying Hyperconvergence (HCI) and Data Center Best Practices

Intelligent hyperconverged infrastructures (HCI) create market leaders and help organizations stand out.

Not only that, they also create a truly agile business ecosystem. Being able to respond to market shifts and user demands is critical to staying ahead. Hyperconverged infrastructure solutions support the latest virtualization, cloud, and business solutions. Hyperconverged technologies continue to evolve and now form the core of the next iteration of this approach, the enterprise cloud, adding provisioning, automation, self-service, and other public-cloud-like features for on-premises deployments. This means applications are delivered faster, users have better access to resources, the environment is simpler to manage than traditional approaches, and the business can lead its market by creating new kinds of solutions.

This also makes an enterprise cloud built on HCI a great solution for SQL infrastructure. This is because HCI can create a powerful balance between high availability and performance.

HCI and HA.

There are a few key factors to consider when deploying HCI and a SQL ecosystem. First of all, HCI is designed with redundancy built in. This directly translates to a healthier SQL infrastructure.

Here are the big benefits around high availability:

Power, Disk, Device. Your SQL architecture will need, at minimum, dual power supplies connected to an uninterruptible power supply. HCI offers built-in power redundancy as standard in most cases, ensuring maximum uptime.

Redundant hardware and components. Just like power, you need some form of disk redundancy. There are several RAID options, and you should always have hot spares when using traditional RAID. The beauty of redundancy in HCI is that data and disks are always protected, and HCI makes backups easier by integrating natively with backup and recovery solutions. The other important point: with HCI, RAID configurations can effectively become a thing of the past. Traditional RAID works with a specific number of drives, configured in sets that allow for RAID 1, RAID 5, RAID 6, and so on. A distributed HCI system, like Nutanix, is not tied to a number of drives, a number of drives per node, or even a number of nodes in a single cluster. A good HCI design also provides node-level resiliency and block-level resiliency (multiple nodes sharing components, such as power), all without the need for hot spares.

Clustering capabilities. First, you should configure SQL Server Failover Cluster Instances running on a Windows Server cluster. Beyond Windows and SQL clusters, however, creating hardware clusters and failover groups is important. Working with HCI allows you to create efficient clusters across multiple data centers and control them from a central management platform. Finally, hypervisor-level clustering should be considered as well; it provides high availability for standalone SQL instances. With an intelligent HCI solution, you can even create heterogeneous clusters, mixing memory, CPU, and storage configurations within the same cluster.

Supporting multiple copies of database instances. This is a great way to provide additional high availability. Working with HCI, you can create clusters as well as standby databases where required. This means you can have cold, warm, and even hot standby copies that can be made available with little to no admin interaction.

HCI and Performance.

A huge benefit of HCI is built-in performance capability: core components like compute and storage are already optimized for performance, which benefits everything from disk IO to CPU and memory.

Consider these benefits:

Disk IO. This factor is absolutely critical to SQL performance. There are three places where disk IO can impact performance: data files, log files, and TempDB files. This is why it’s critical to have not only the right configuration, but the right disk architecture in place as well. HCI offers hybrid solutions, where SSDs are added to the mix for better performance, as well as all-flash nodes. A quick way to see where IO time is going appears below.
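
A commonly used check of per-file latency against the sys.dm_io_virtual_file_stats DMV:

```sql
-- Average read/write latency per database file; high values against data,
-- log, or tempdb files point at the IO bottleneck described above.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.type_desc             AS file_type,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id;
```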

Memory. SQL Server uses RAM to cache data pages, so more RAM generally means better performance. HCI comes prepared with RAM capacity and can scale for growth. One simple health check is sketched below.
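
One simple indicator of memory pressure is the page life expectancy counter:

```sql
-- Page life expectancy: roughly how long a data page survives in the
-- buffer pool. A low or falling value suggests RAM pressure.
SELECT counter_name, cntr_value AS seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND object_name LIKE '%Buffer Manager%';
```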

CPU. Index rebuilds and heavy seek activity can cause CPU bottlenecks. Make sure to plan and design your SQL platform so this won’t be an issue.

Network IO. When working with SQL Server, there are three main types of traffic to consider (among others): data coming in, data going out, and management traffic. HCI integrates directly with networking and switching capabilities, allowing you to easily create network segments where data interference is kept to a minimum. When data is local to the application, residing on the same node, you get better network performance and efficiency; other copies can be staged nearby, just a hop away if needed.

Management. Visibility into your SQL environment is critical. Although SQL Server includes native monitoring and management tools, working with your HCI vendor to ensure proper infrastructure visibility is key. There are several benefits to aligning your HCI management framework with SQL Server:

Visibility into hardware and software. Whether you’re running SQL Server as a VM or as a locally installed instance, visibility into your entire platform is critical. HCI combines management of hardware and software into one framework. Make sure to plan out management when you design your SQL environment.

Integration with virtualization and cloud systems. Being able to scale SQL is key. Working with virtualization technologies brings benefits around agility, resiliency, and scale, ensuring that the infrastructure housing SQL can scale into the cloud when needed. Furthermore, current hyperconverged systems allow you to integrate your SQL ecosystem with a hypervisor-agnostic HCI architecture, ensuring complete flexibility and integration with cloud platforms.

Proactive scale capabilities. SQL Server is not a set-it-and-forget-it technology; you’ll need to make adjustments and scale accordingly. Make sure to work with a data center solution that can efficiently and economically keep up with your SQL requirements. HCI, with its node-based, web-scale architecture and software-defined approach, enables rapid scaling by dynamically provisioning resources when needed.

SQL Server 2016

Final Thoughts

It’s clear that Microsoft SQL Server is one of the most heavily leveraged database platforms in today’s IT ecosystem.

Organizations are deploying distributed SQL environments with numerous instances serving various business and IT functions. This architectural flexibility has led to a wide variety of SQL Server installations, ranging from large data warehouses to small, highly specialized departmental and application databases.

However, as powerful as SQL Server may be, it’s absolutely critical to design and deploy the solution with infrastructure and data center best practices in mind. Working with hyperconverged solutions allows administrators to combine the most critical SQL components into one engine, all working for the business. HCI enables resiliency, replication of data, predictable performance and a validated architecture for SQL Server solutions.

As you plan and deploy your own SQL environment, make sure to follow deployment and migration best practices, and work with a vendor that can meet both your business and technical needs.


Best Practices Guide:

Virtualize MS SQL Server on Hyperconverged Infrastructure

This document makes recommendations for designing, optimizing, and scaling Microsoft SQL Server 2016 deployments on Nutanix. It shows the scalability of the Nutanix enterprise cloud platform and provides detailed configuration information for the cluster’s scale-out capabilities when used for SQL 2016 deployments.


Download this technical best practices guide to learn:

  • Advantages of leveraging web-scale converged infrastructure for Microsoft SQL Server.
  • Expanded detail on SQL Server best practices for Nutanix.
  • Design and configuration considerations when architecting a SQL Server solution on Nutanix, including SQL Server 2016.

Learn more about Nutanix for SQL: Visit www.nutanix.com/sql
