Posted March 10, 2016

This post was authored by Erin Chapple, Director of Program Management for Windows Server.

2016 is a big year for the Microsoft datacenter product portfolio. We have new versions of Windows Server and SQL Server coming out, both available in Technical Preview now. The Windows Server and SQL Server partnership is a strong one that spans decades, from the early days in 1993 with Windows NT and SQL Server 4.21 to today, where Microsoft is recognized by Gartner in the OLTP Magic Quadrant as surpassing Oracle and IBM.[1] The combination of Windows Server and SQL Server is also the most frequently deployed commercial database platform. Today I'd like to showcase some of the important Windows Server 2016 features that are driving the next wave of SQL Server 2016 scenarios in massive-scale performance, unparalleled database security, and flexible high availability and disaster recovery.

A central goal of the Windows Server and SQL Server engineering partnership is to provide our customers with an unmatched price-to-performance ratio. In the current production release, SQL Server 2014 running on Windows Server 2012 R2, you can reach $0.73 per QphH in decision support workloads.[2] For OLTP workloads you can reach $126.49 per tpsE.[3] Those figures represent industry-leading performance and cost efficiency, and we got there in no small part thanks to the Windows Server memory and CPU configuration maximums. In the case of these benchmarks, the maximum possible configuration was 4 TB of memory. Windows Server 2016 triples that limit to 12 TB and lets you run up to 640 CPU cores. Now that we can move beyond the 4 TB RAM limit, we're working with our ecosystem on a larger class of benchmark tests, and we're already seeing good performance in our internal testing on these larger servers.

Windows Server 2016's investment in storage class memory (SCM), including non-volatile dual inline memory modules (NVDIMM) and non-volatile memory express (NVMe), provides direct value to SQL Server. SQL Server 2016 handles transaction logs better on Windows Server with NVDIMM because the database doesn't have to incur latency waiting for the disk system to flush to persistent storage: the DIMM itself is persistent. Using SCM also decreases CPU usage for the same size of workload. With 12 terabytes available to the data platform from the OS, analytics can complete faster and be more sophisticated. Where you used to work around complexity at the database tier with multiple queries and logic at the application level, you can now simply ask the question you want answered. Gone are the days of scaling out a cluster just so you can scale up performance.

New security features in Windows Server 2016 take protection to the next level. Armed with Control Flow Guard, Device Guard, and Credential Guard, Windows Server 2016 is more resistant to unknown attacks. The addition of Privileged Access Management (Just in Time and Just Enough Administration) enables complete separation of duties between the OS administrator and the SQL administrator, and enhanced logging gives threat detection the information it needs to identify malicious activity. We layer complementary security features at the server and database levels to provide unparalleled security for database workloads. Our investments in whole-platform security are paying off in the industry, and the proof is in the data. Digging into Common Vulnerabilities and Exposures (CVE) data and National Vulnerability Database (NVD) data from MITRE and the US National Institute of Standards and Technology (NIST), we can see that over the last 10 years the combination of Windows Server and SQL Server yields the lowest percentage of CVEs issued among common datacenter operating systems and database systems.[4][5]
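If you want to poke at the same public data yourself, the sketch below is a minimal example of pulling CVE counts from NIST's NVD REST API. The keyword terms and the use of raw keyword hits as a proxy for per-product counts are assumptions for illustration only; they are not the methodology behind the figures cited above, which compare the share of CVEs across products over a 10-year window.

```python
import requests

# Public NVD REST API 2.0 endpoint; unauthenticated callers are rate-limited,
# so keep queries light or request an API key from NIST for bulk pulls.
NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

# Hypothetical keyword choices. A rigorous comparison would match CPE names
# and a fixed publication-date window rather than free-text keywords.
PRODUCTS = ["Windows Server", "SQL Server"]

def cve_count(keyword: str) -> int:
    """Return the total number of CVE records whose text matches the keyword."""
    resp = requests.get(
        NVD_URL,
        params={"keywordSearch": keyword, "resultsPerPage": 1},
        timeout=30,
    )
    resp.raise_for_status()
    # totalResults reports the full match count even when only one record is returned.
    return resp.json()["totalResults"]

if __name__ == "__main__":
    for product in PRODUCTS:
        print(f"{product}: {cve_count(product)} CVE records")
```

To reproduce a percentage-style comparison like the one above, you would normalize each product's count against all CVEs published in the same period and restrict matches to the relevant CPE entries, paging through the API's date-bounded result windows.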
Windows Server features are also the force behind SQL Server high availability scenarios. Using them, SQL Server deployments can reach "five nines" of availability, and in Windows Server 2016 you can upgrade a server cluster with no downtime. Our goal is to deliver simple, flexible high availability and disaster recovery (DR) scenarios that produce higher database uptime. In this category we've been working to make sure the complexity of the deployment matches the sophistication of the solution. If all you need is a remote cluster for DR purposes, you don't have to spend as much time structuring your directory to meet system requirements; the clusters don't even have to be in the same domain. However, if your solution spans multiple roles or organizations, you can still set it all up in Active Directory.

The partnership between the Windows Server 2016 and SQL Server 2016 engineering teams has resulted in simple, flexible HA and DR, unparalleled database security, and cutting-edge performance at massive scale. You can try the solution out today with the Windows Server 2016 and SQL Server 2016 Technical Previews. We would love to hear about your experience using the product. Leave a comment below or engage in our UserVoice forum.

[1] Gartner positions Microsoft as a leader in the Magic Quadrant for Operational Database Management Systems.
[2] Benchmark established for TPC-H tests, lowest price/performance at 1,000 GB, 3,000 GB, and 10,000 GB sizes, non-clustered. Full results available at TPC.org.
[3] Benchmark established for TPC-E tests. Full results available at TPC.org.
[4] Data acquired from the National Vulnerability Database.
[5] Excluding database products unavailable for the 10-year period.