Microsoft SQL Server 101

Shaun Stuart

41 Lessons (4h 50m)
  • 1. Introduction (2:24)
  • 2. What Is A Database Management System? (5:42)
  • 3. Microsoft SQL Server (5:33)
  • 4. Microsoft SQL Server Licensing (1:36)
  • 5. SQL Server Management Studio (SSMS) (8:29)
  • 6. SQL Server System Databases (4:30)
  • 7. Data And Log Files (15:23)
  • 8. Data and Log File Growth Settings (6:14)
  • 9. Filegroups And Full Text Indexes (4:38)
  • 10. Database Settings (6:28)
  • 11. Recovery Models and Backup Types (11:09)
  • 12. Backup And Restore Demo (6:55)
  • 13. An Introduction To SQL Server Agent (13:47)
  • 14. Management Features In SSMS (14:43)
  • 15. The Dedicated Administrator Connection (5:55)
  • 16. Database Objects: Tables (7:30)
  • 17. Database Objects: Views (4:58)
  • 18. Database Objects: Indexes (22:30)
  • 19. Database Objects: Stored Procedures And Other Objects (15:14)
  • 20. Security: Built-In Security Roles and Schemas (22:24)
  • 21. Security: Schema Demo (3:44)
  • 22. Reducing Database Size With Data Compression (6:04)
  • 23. High Availability Solutions: Clustering (5:14)
  • 24. High Availability Solutions: Replication (4:42)
  • 25. High Availability Solutions: Log Shipping (1:58)
  • 26. High Availability Solutions: Mirroring (7:18)
  • 27. High Availability Solutions: AlwaysOn (2:00)
  • 28. Suggested Maintenance Jobs: Backups (12:11)
  • 29. Suggested Maintenance Jobs: Advanced Backup Methods (2:45)
  • 30. Suggested Maintenance Jobs: Backup History Maintenance (6:42)
  • 31. Suggested Maintenance Jobs: Index And Statistics Maintenance (16:04)
  • 32. Suggested Maintenance Jobs: Cycle Error Logs (3:04)
  • 33. Suggested Maintenance Jobs: Database Mail Maintenance (2:00)
  • 34. Suggested Maintenance Jobs: DBCC CHECKDB (5:12)
  • 35. Suggested Maintenance Jobs: Data Compression Maintenance (1:40)
  • 36. Troubleshooting: Finding The SQL Server Version Number (2:55)
  • 37. Troubleshooting: Why Can't I Log In? (3:36)
  • 38. Troubleshooting: Why Is My Query Taking So Long? (7:02)
  • 39. Troubleshooting: What Is SQL Server Waiting On? (5:13)
  • 40. Troubleshooting: Creating A Performance Baseline (3:30)
  • 41. Conclusion (1:03)


About This Class

By the end of this course, you will understand basic database concepts. You will know how to use SQL Server Management Studio (SSMS) to manage your SQL Server, how to perform and schedule basic database maintenance tasks, and how to perform basic troubleshooting. Basic knowledge of T-SQL is helpful, but not required.




Transcripts

1. Introduction: Hello and welcome to Microsoft SQL Server 101: Everything You Need to Know If You Are a Junior or Involuntary DBA. My name is Shaun Stuart, and I've been working with SQL Server for over 15 years. I have several Microsoft certifications; I was first certified back on Microsoft SQL Server 7.0, and since that time I have obtained certifications on SQL Server 2005 and SQL Server 2008. In my day job, I'm the senior database administrator at the largest credit union in Arizona. I'm also the co-author of the book Tribal SQL, which is due for publication in October 2013.

The intended audience for this course is those people commonly referred to as involuntary DBAs, meaning people who typically have no experience with databases but find themselves in charge of SQL Servers simply by virtue of the fact that they work in the IT department. They might work in a small company with a small IT staff, where their primary role may be as a network administrator, Windows administrator, or something similar, but the company needed somebody to look after the SQL Servers, and they were volunteered. The other audience is junior DBAs. These may be people just starting out in their DBA career, or perhaps they were DBAs on another platform, such as Oracle, and now they're working with Microsoft SQL Server for the first time. In either case, some knowledge of the SQL language is helpful, particularly Microsoft's implementation, which is called Transact-SQL, or T-SQL. We won't really be delving into the language too much, but some basic knowledge is helpful. Also, while most of what I show you will be done using SQL Server's graphical interface, everything can also be done via T-SQL commands, which are typically called scripts. I'll be writing some scripts later on, and I'll also show you how the graphical interface can generate some scripts for you.

Just a brief aside here: I've mentioned the term DBA, or database administrator, a couple of times, but what exactly is a DBA? My favorite definition is that the DBA is the protector of the data. It is the job of the DBA to make sure the data is safe, that it is being backed up, that it is only being accessed by the appropriate people or programs, and that this access happens in a timely manner.

2. What Is A Database Management System?: What is a database management system? In the most general of terms, a database is an organized collection of data. A database management system, therefore, is an application designed to allow for the definition, creation, querying, updating, and administration of databases. Most implement some variant of the ANSI-standard Structured Query Language programming language, also known as SQL. They typically come in two flavors: relational databases and non-relational databases. A relational database is a database where tables can have primary keys, which uniquely identify a single row in a table, and they can have foreign keys, which reference primary keys in other tables. This establishes a relationship between tables and is useful for ensuring data integrity. For example, you could have a customers table, and a sales orders table could have a foreign key to the customers table, which would ensure that every record in the orders table has a valid customer defined in the customers table. Relational databases also adhere to what are known as the ACID principles. A stands for atomicity, which means the transactions in the database are all or nothing.
If you have a command that modifies multiple rows of data in one or more tables, all of those changes have to be made or none of them get made. C stands for consistency. This means that all data written to the database is valid according to defined rules. For example, if you have a field in a record that is defined as an integer, that field can only store integers; if you try to store a letter in that field, the operation will fail. I is for isolation: concurrent transactions result in the same system state as if the transactions executed serially. For example, if two commands are trying to perform multiple operations on a piece of data at the same time, one must wait until the other completes. Otherwise, the second operation might modify data before the first one is finished with its modifications, and that could result in invalid data. And finally, D stands for durability: committed transactions remain committed even if the system crashes or loses power. What this means is, once the database reports to the client application that a transaction has been committed, it has been written to disk, and it stays written to disk even if the system crashes. Microsoft SQL Server is a relational database.

Non-relational databases, on the other hand, do not adhere to the ACID principles. They're mainly used for storing vast quantities of information in geographically distributed storage, and they're optimized for read access. They also offer eventual consistency. Some examples are SimpleDB, which is the database platform that amazon.com uses, and MongoDB; databases like these are often referred to as NoSQL databases. Because the data is distributed in several locations, data can be retrieved from whatever data store is fastest, which is usually the one closest to the requester, or client application. For example, Amazon may have its inventory database distributed in data centers across the United States. Someone placing an order in California might access the copy of the database stored in California; someone in New York might access the copy of the database in New York. This increases speed by reducing network latency. Eventual consistency means that, given enough time with no changes to the data, all copies of the data will eventually contain the same values. So if someone places an order in California, eventually that order will be replicated to the database in New York. This, however, produces some challenges. For example, what if there's only one copy of a book left in inventory, and a customer in California and a customer in New York both want to purchase that book? If they place their orders at the same time, both orders will go through, because each copy of the database thinks there's a book available to fill the order. However, when the orders eventually get replicated to all the databases, only one of them can be fulfilled, and which one that is depends on the business logic the company implements. One of the customers will receive the book; the other will get a notification that the order's been canceled because they're out of stock. In selecting a non-relational database, Amazon has made the business decision that the benefits of speed outweigh the drawbacks of possibly inconsistent data. In other words, they would prefer that orders be accepted and processed as fast as possible, and they're willing to accept the risk that some customers may have orders canceled. I should note that relational databases can also have geographically dispersed databases.
The difference is that in a relational database system, all copies of the database must be updated in order for the change to be finalized. This introduces latency while all database copies confirm that the update has been completed. For this reason, geographically dispersed databases in a relational database system are usually used for disaster recovery solutions or for reporting situations, where the reporting application only has read-only access to the data.

3. Microsoft SQL Server: Microsoft SQL Server comes in a variety of editions, each of which has different restrictions on its operation. The major restrictions are on the amount of RAM and CPU SQL Server can use, and the maximum database size supported. SQL Server Express is the free version, and it actually comes in three different varieties. You can download this from Microsoft's website, and many programs that use a database bundle it with them; it's possible you have this installed on your desktop or laptop system and don't even know it. It's limited to one gigabyte of RAM and has a maximum database size of 10 gigabytes. It is also limited to the lesser of one processor socket or four CPU cores. This means if you have a single eight-core processor, this edition will only use a maximum of four cores. SQL Server Web edition is limited to 64 gigabytes of RAM and the lesser of four processor sockets or 16 cores, and has a maximum database size of 524 petabytes. SQL Server Standard is similar to Web, but it also includes a program called SQL Server Analysis Services, the product for performing data cube analysis; that product is limited to 64 gigabytes of RAM. SQL Server Business Intelligence is similar to Standard, but in this case there are no limits on memory usage for SQL Server Analysis Services. SQL Server Enterprise has no limits on the amount of RAM and no limits on the number of processors or CPU cores used. There are also some licensing benefits when it comes to using virtual machines, which we'll talk about in Module 3. It also features additional functionality not available in other editions. And finally, the last edition of SQL Server is SQL Server Developer Edition. This is identical to SQL Server Enterprise Edition, except it has a reduced licensing cost. As the name implies, it's meant to be used by developers, and it's not meant to be used in production environments. A license for a single developer at this time costs around $50, and each developer using the server must be licensed. For the purposes of this course, I'll be using the Developer or Enterprise edition; however, I'll note when I'm talking about features that are only available in these editions.

SQL Server itself is composed of several different services, the first of which is SQL Server. This is the database engine, and it's what most people think of when you mention SQL Server. This is the program that does the reading and writing of data to the database. SQL Server Agent is the job scheduler in SQL Server. It runs jobs at specific times and sends out notifications via email or by writing to the Windows event logs. It's the program that's usually responsible for kicking off backup jobs. Note that SQL Server Express does not have SQL Server Agent. Therefore, if you want to have scheduled backups on SQL Server Express, you'll need to use the Windows Task Scheduler and batch files to execute those on a regular basis.
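As a minimal sketch of that Task Scheduler approach (the database name and paths here are hypothetical, not from the course), you could save a T-SQL command like the following in a script file and have Task Scheduler run it nightly via the sqlcmd utility, for example with sqlcmd -S .\SQLEXPRESS -E -i nightly_backup.sql:

    -- nightly_backup.sql: full backup of a hypothetical database.
    -- WITH INIT overwrites the previous night's backup file.
    BACKUP DATABASE MyDatabase
        TO DISK = 'D:\Backups\MyDatabase.bak'
        WITH INIT;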
SQL Server Integration Services, also known as SSIS, is a program used to import and export data from databases. It allows for the manipulation of data before, during, or after the import. SSIS programs are called packages, and they're developed in Visual Studio or in Business Intelligence Development Studio, which is a scaled-down version of Visual Studio that ships with SQL Server. SQL Server Reporting Services, or SSRS, is a report creation tool; it includes a web server that can display reports created with Visual Studio. SQL Server Analysis Services, or SSAS, is an online analytical processing application that can be used to analyze data from multiple databases; it's also known as a data mining tool. SQL Server Browser is a service that listens for incoming requests for SQL Server resources and provides information back to the calling program about the SQL Server instances running on the server. It helps with browsing for a list of available servers, connecting to the correct SQL Server instance on a server, and connecting to the special dedicated administrator connection, which is called the DAC; I'll talk about that in Section 9. We'll be focusing on SQL Server and SQL Server Agent. Integration Services, Analysis Services, and Reporting Services are optional products, and they don't need to be installed to run SQL Server. They're also pretty much full-fledged programs in their own right, and they deserve courses of their own as well. All of these services can run under their own accounts or the same account. Best practices dictate that you should follow the principle of least privilege when selecting the accounts to run SQL Server under: the accounts used by the services should be different from each other, and they should not be used by any other services on the same server. Permissions for the accounts should be granted via Active Directory group membership.
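If you want to check which accounts the services on an existing server are actually running under, one option (a sketch; this DMV is available in SQL Server 2008 R2 SP1 and later) is:

    -- Lists each SQL Server-related service with its startup type, status,
    -- and the account it runs under.
    SELECT servicename, startup_type_desc, status_desc, service_account
    FROM sys.dm_server_services;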
4. Microsoft SQL Server Licensing: SQL Server 2012 can be licensed in two ways. The first is with a server license and client access licenses (CALs). In this situation, you buy a client access license for every user that has to connect to SQL Server. The other method is to license SQL Server on a per-core basis. In this method, you license every CPU core on your SQL Server machine, with a four-core minimum, and you get unlimited connections to SQL Server. Note that if your SQL Server is serving data to a web server or other application which serves data to the Internet, you cannot use the server-plus-CAL licensing model. If you opt for per-core licensing, there's a four-core-per-processor minimum, and you must license all the processors in your system. One advantage of the per-core licensing model with Enterprise Edition is that if you license all the processors and cores in your virtual machine host, and you include Software Assurance, then you can run unlimited instances of SQL Server on that host. SQL Server Express Edition is free and requires no license at all. Licensing terms for SQL Server are complicated, and they can change. This is a greatly simplified overview, so be sure to talk with your local reseller for the latest terms and conditions, including any plans to transition from the prior licensing model used in SQL Server 2008 R2 and earlier to the new model.

5. SQL Server Management Studio (SSMS): The program used to manage and work with your SQL Servers is called SQL Server Management Studio, or SSMS. It is installed on your SQL Server as part of the SQL Server installation. You can also install only this program on your local workstation by using the same SQL Server installation media and selecting only the management tools to install. I'll now walk you through a demo of using SQL Server Management Studio.

When you first start SSMS, you'll see the following screen. To work with a SQL Server, you must first connect to it, and to do that you select the Connect drop-down box and choose Database Engine. This will bring up the login window. The server name is going to be the machine name of your SQL Server, and in this case you see my machine name followed by a slash and Demo2012. That's because the SQL Server running on my machine has a named instance called Demo2012. If you just installed SQL Server as a default instance, you don't need to specify an instance name at all. Then you need to specify login credentials. You can use Windows authentication, or you can use SQL Server authentication, which is a SQL Server login. In this case, I'm going to use the sa login; I'll type in my password and press Connect.

When you connect to the SQL Server, you see a navigation tree over in the left pane. Most of these items should be self-explanatory. Expanding the Databases folder will show you two subfolders, called System Databases and Database Snapshots, followed by a list of all user databases. In this case, I only have one database, and that's AdventureWorks2012. This is a free demo database created by Microsoft, and I'll be using it for examples. If you expand the System Databases folder, you'll see four databases that are present on every SQL Server: master, model, msdb, and tempdb. I'll go over each of these in more detail in Section 5. For now, just know that these are system databases where SQL Server keeps track of various configuration data, and you'll rarely need to go into them.

Database snapshots are read-only copies of a database, created for a particular point in time. This is useful if someone's making some large changes to a database and may need to revert back to the unchanged data if something doesn't work out. In most cases, creating a snapshot is much quicker than making a new backup, especially for large databases. Snapshots are also handy for reporting applications, because they can query an unchanging version of a production database. There are some limitations you should be aware of when using snapshots. First, if a database snapshot is present, SQL Server has to track all the changes to the database since the snapshot was taken. This means snapshots can quickly consume large amounts of disk space when used on a busy database: SQL Server has to store copies of the unchanged data for the snapshot, plus the changed data in the source database. As time goes on and more and more data is changed, the storage requirements increase dramatically. You can create multiple snapshots of a database, but if you need to revert a database back to a snapshot version (for instance, if you have data corruption, or some changes were made that shouldn't have been made), there can only be one snapshot present, so you'll have to delete all the snapshots except the one you want to revert to. In short, snapshots are not a suitable replacement for database backups. They're best suited for short-term usage and should be dropped as soon as they're no longer needed, to avoid consuming large amounts of disk space.
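As a sketch (the snapshot name and path are hypothetical, and the logical file name must match the source database's data file), creating and later reverting to a snapshot looks like this:

    -- Create a point-in-time, read-only snapshot of the database.
    CREATE DATABASE AdventureWorks2012_Snap ON
        (NAME = AdventureWorks2012_Data,
         FILENAME = 'D:\SQLData\AdventureWorks2012_Snap.ss')
    AS SNAPSHOT OF AdventureWorks2012;

    -- Revert the database to the snapshot (only one snapshot may exist
    -- at that point; drop any others first).
    RESTORE DATABASE AdventureWorks2012
        FROM DATABASE_SNAPSHOT = 'AdventureWorks2012_Snap';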
Now let's look at the user databases. Each database will have the same subfolders. I won't go through each of these, but the important ones for DBAs to be aware of are Tables, Views, and Programmability. Expanding the Tables folder will give you a list of all the tables in the database. To view what is in a table, you can right-click the name and choose Select Top 1000 Rows. Over on the right portion of the screen, you can see that SQL Server Management Studio automatically generated the script for you. This script is just a SQL command that says to display the first 1000 rows of data in this table, and the results appear in the bottom portion of the screen. In this case, there are only 16 rows of data in the table, so you only have 16 rows returned. SQL Server defaults to displaying only the first 1000 rows. This is a setting you can change under Options, but it defaults to this to prevent someone from inadvertently opening a table that contains millions of records and trying to pull all that data back, which could take a long time and consume a large amount of memory. I'll close that tab. If you want to see more information about the table, such as what columns it contains or any indexes on it, you can expand the table either by double-clicking the name or clicking the plus sign next to it. I'll talk more about keys, constraints, indexes, and these other things in another section, but for right now, just know this is how you can find them. If you look in Columns, you'll see all the columns in the table. If you recall from when we looked at the data, here are DepartmentID, Name, GroupName, and ModifiedDate. These are the columns we saw, and here you can see what their data types are. PK means DepartmentID is a primary key; it has a data type of smallint and is marked "not null", which means it will not accept null values. The Name field is an nvarchar of 50 characters, the same with GroupName, and ModifiedDate is a datetime data type. Views are very similar to tables; I'll talk about them in another module. Under Programmability, if you open that and look under Stored Procedures, this is the important one. A stored procedure is like a little program: it's a piece of SQL code that's precompiled and ready to execute one or more commands. Programmers use stored procedures to perform tasks that have to be done frequently, or to enforce security constraints. This is because you can allow somebody to execute a stored procedure that modifies views or tables without giving that user permission to access the table directly. For example, if you have a table that contains salary information, you might want someone in the HR department to be able to update somebody's salary, but you don't want them to be able to go in and just look at everybody's data. You could make a stored procedure that updates somebody's salary and grant that HR person permission to run that stored procedure, but not give them permission to look at the table. We'll discuss some of the other folders in another section, but this gives you a little overview of how SSMS works: you basically have a list of objects in your SQL Server on the left, the right shows you the SQL commands that operate on those objects, and the results of the commands appear at the bottom of the screen.
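As a rough sketch of that HR example (the table, procedure, and user names here are hypothetical, not from the course), the pattern looks like this:

    -- Hypothetical table containing sensitive salary data.
    CREATE TABLE dbo.EmployeeSalary (EmployeeID int PRIMARY KEY, Salary money NOT NULL);
    GO
    -- A stored procedure that performs only the one allowed operation.
    CREATE PROCEDURE dbo.UpdateSalary @EmployeeID int, @NewSalary money
    AS
    BEGIN
        UPDATE dbo.EmployeeSalary SET Salary = @NewSalary
        WHERE EmployeeID = @EmployeeID;
    END
    GO
    -- The HR user can run the procedure but has no direct permission
    -- to read or modify the underlying table.
    GRANT EXECUTE ON dbo.UpdateSalary TO HRUser;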
6. SQL Server System Databases: In the previous section, we briefly caught a glimpse of some of the system databases using SSMS. In this section, I'll go over those in more detail. Let's start with the first in the list: master. The master database is where SQL Server stores all the system-level information about the server. This includes system configuration, logins, and information about any user databases on the server. Obviously, this is a very important database; in fact, if this database becomes corrupted, damaged, or unavailable, SQL Server can't even start. The main thing DBAs need to be concerned with regarding this database is making sure it's being backed up on a regular basis. The model database is an empty database that SQL Server uses as a template when a new database is created. This is useful if you want all your user databases to be configured a certain way, or to contain a certain table, stored procedure, etcetera. Any new database is created by copying this database and renaming it to the name specified in the CREATE DATABASE command. This is what SQL Server does behind the scenes; all you need to do is say "create new database". msdb is a database where SQL Server stores information about scheduled jobs, alerts, and backup history for the server. We'll talk more about these items in Section 8, when we look at SQL Server Agent, but suffice it to say this is another system database that DBAs need to be sure is backed up regularly. tempdb is SQL Server's scratch pad, and it's analogous to the Windows swap file on the Windows operating system. If SQL Server has to perform any commands that require more memory than is available in RAM, SQL Server will use tempdb as temporary memory. It's also where SQL Server performs large joins and sort operations; temporary tables created by procedures or commands are stored here as well. Needless to say, tempdb can have a large impact on system performance. In most cases, you want to set tempdb up to have at least two data files, sized equally. On systems with a large number of CPUs, you want to set up one data file for each two to four CPUs in your system, up to a maximum of eight files. This is important because SQL Server can split queries up into multiple processes that are executed on each CPU, a process called parallelization. Each CPU can then access one of the tempdb files instead of having to wait in line to access a single file. This speeds up overall system performance. Some might raise the question: in that case, why wouldn't we make one file per CPU? The answer is that there is some overhead involved in tracking the usage of all the data files. Also, SQL Server has optimized tempdb usage, especially in later versions such as 2008 and 2012. Testing by Microsoft and others has determined that one data file per two to four CPUs, with a maximum of eight files, is a good rule of thumb that provides good performance. tempdb is re-created from scratch every time SQL Server is restarted, and it does not contain any persistent data, so there's no need to back it up.
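As an illustrative sketch of that tempdb advice (the sizes and path are hypothetical; tempdev is the usual default logical name of tempdb's primary data file, which you can verify in sys.master_files), adding a second, equally sized data file might look like this:

    -- Size the existing tempdb data file and add a second, equally sized file.
    ALTER DATABASE tempdb
        MODIFY FILE (NAME = tempdev, SIZE = 1024MB);
    ALTER DATABASE tempdb
        ADD FILE (NAME = tempdev2,
                  FILENAME = 'D:\SQLData\tempdev2.ndf',
                  SIZE = 1024MB);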
There's one additional system database that does not show up in SSMS. That database is called the resource database. You'll never see it in SSMS, but if you're poking around on the server's hard drive where the system database files are stored, you'll see its files. The resource database is a read-only database that contains all the system objects included with SQL Server. In older versions of SQL Server, these objects were stored in the master database. Microsoft started including the resource database in SQL Server 2005 as a means to make upgrades faster. Prior to SQL Server 2005, if system objects changed between versions, the upgrade process would have to run scripts to drop and recreate the objects, which could be time-consuming if a large number of objects changed. With the resource database, the upgrade just needs to copy a new database onto the server.

7. Data And Log Files: In this section, we'll cover database data files and log files, some miscellaneous file information such as filegroups and full-text indexes, and some common database settings. Databases in SQL Server can use one or more data files. More than one data file is used to allow faster access and concurrent disk access. For example, if tables are stored in different files, SQL Server can read data from two tables at the same time, rather than waiting for one read to complete before the other starts. Another strategy, also employed in very large databases (and by that I mean databases on the order of hundreds of megabytes to terabytes), is to put data tables in one file and indexes in another file. I'll talk about indexes and their importance in Section 14. When SQL Server creates a database file, it first writes out all zeros to the file on disk to initialize it. This also occurs when the database has to increase in size as data is added to it, or when data is restored from backup. When you're dealing with large databases, this can take some time. One way to speed this up is to allow SQL Server to use instant file initialization. This is a feature that was added in Windows XP and Windows Server 2003, and it essentially hands off the process of zeroing out the data file to the operating system: as SQL Server writes to the expanded file area, the OS knows to write all zeros before writing actual data. The time savings from this can be significant. One test showed that a 20-gigabyte database takes 14 minutes to be created without instant file initialization, and only about 1.5 seconds with instant file initialization. Because this also affects restores of backups, it can greatly speed up your disaster recovery time. Keep this in mind if you have a large database and your company expects you to be able to restore it in a short amount of time. In order to utilize instant file initialization, the account SQL Server is running under must be granted the Perform Volume Maintenance Tasks right on the server. If this privilege is granted, SQL Server will automatically use instant file initialization. When you initially grant this right, the SQL Server service must be restarted in order for it to take effect, and removing the right requires a system reboot. One important thing to note about using instant file initialization: if your company is very security conscious, this might represent a security risk, because the old data on disk is not overwritten immediately when the database file expands. It may be possible for someone with the right SQL knowledge and access to the database to read the old data that existed on the disk sectors used by the database expansion. This is typically a small risk, and the performance gains usually outweigh the drawback. By default, SQL Server data files have an extension of .mdf. Typically, the primary data file has an .mdf extension and other data files have an .ndf extension. These are just conventions, and SQL Server does not enforce them; you can specify any file extension you want when the database is created.
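To make the file layout concrete, here's a minimal sketch (the database name, logical file names, sizes, and paths are all hypothetical) of creating a database with one data file and one log file explicitly specified:

    CREATE DATABASE SalesDemo
    ON PRIMARY
        (NAME = SalesDemo_Data,
         FILENAME = 'D:\SQLData\SalesDemo.mdf',
         SIZE = 500MB)
    LOG ON
        (NAME = SalesDemo_Log,
         FILENAME = 'E:\SQLLogs\SalesDemo.ldf',
         SIZE = 100MB);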
Lastly, data files should rarely, if ever, be shrunk. You may encounter a job on a SQL Server that shrinks the database file on a regular basis. This may have been implemented because the drive the database was on kept running out of space. There are very few situations in which a database file should have a shrink operation performed on it. These are usually when a large amount of data, roughly 50% or more of the database, has been deleted, or when data compression has been enabled and the database now has a large amount of free space in it. Shrinking is bad for data files because it creates nearly 100% fragmentation in the database. Think of a fragmented hard drive and how that degrades disk performance, and you'll understand how fragmentation affects database performance. When a database is shrunk, SQL Server takes all the free space in the file and moves it to the end of the file. At the end of this process, it returns the empty space at the end of the file to the operating system. As SQL Server rearranges the data in the file to do this, it takes data from the end of the file and moves it to the first available space at the beginning of the file. This produces nearly complete fragmentation, and a request to access the data can result in many more moves of the disk read/write head, which decreases performance. I'll talk more about fragmentation, and how to detect and remove it, in Section 14.

Part two of this section is log files. First, let's briefly describe what a log file is. Data files are fairly straightforward: that's where the data in the database is stored. A log file is also called a transaction log, because it's used for disaster recovery and to help achieve the durability component of the ACID properties we discussed in Section 2. Whenever changes are made to data, SQL Server first writes the command to change the data to the log file. Then it makes the change to the data in memory, and a message is sent back to the user program that the command has been completed. Sometime later, that change is actually written out to the data file. The time between when the change is made in memory and when the change actually gets written to the data file is a dangerous time: without a log file, if the server loses power during that window, the change would be lost. The log file is the method by which SQL Server guarantees that the change will be saved in the event of a disaster. Why does SQL Server perform this way, you might ask? Wouldn't it make more sense to just write the changed data to the data file before telling the user that the command completed? The reason, once again, is speed. SQL Server stores data inside the data files and in memory in 8 KB chunks called pages. Sometimes all the pages from one table are located in contiguous sectors on the disk; sometimes they aren't, and then the data is said to be fragmented. Sometimes SQL Server can fit a new record on a page; sometimes it can't, and when it can't, it often has to move data around to other pages to make room. In short, writing data to the data file can involve lots of disk I/O, even for a small change, and that takes time. So, to speed things up for the user, SQL Server uses the log file.
Imagine a command that changes a million rows of data. It takes longer to write out the million changed rows to disk than it does to write the one update command to the log file. Logging the command and changing the data in memory is much faster than writing out that million changed rows of data to disk. Therefore, SQL Server writes the command to the log file, changes the data in memory, and reports back to the user that the change is complete; in SQL speak, we say the transaction has been committed. Meanwhile, a process called the lazy writer runs in the background. Its sole job is to write all the changed data in memory out to the data file. (This is a simplification; there are some cases where SQL Server logs the actual changed data pages in the log instead of the command, but those are somewhat special cases.) As you might expect, a log file is, by its very nature, a sequential file. It's a list of one command after another that changes data in the database. For that reason, unlike with multiple data files, having more than one log file provides no performance benefit. SQL Server will allow you to create multiple log files for a database, just as you can create multiple data files, but there's no performance benefit to be gained by doing so; SQL Server still has to use one at a time.

Let's take a moment now to briefly discuss the internal structure of a log file. A log file is composed internally of smaller chunks called virtual log files, or VLFs. These are the smallest units of storage within a log file. The log file is also circular, meaning that as it fills up the virtual log files towards the end of the file, it will circle around to the start of the file and begin using those virtual log files again, with one big caveat: in order to reuse a VLF, SQL Server must know that all the transactions in that VLF have been committed and written to the data file, or the log file itself must have been backed up. Only then will SQL Server be able to reuse a previously used VLF. Which of the two methods sets a VLF for reuse depends on which recovery type the database is using. I cover recovery types in more detail in Section 7, but in short, the two main recovery types are simple and full. In simple recovery mode, a VLF is available for reuse as soon as the lazy writer has written all the changes caused by its transactions to the data file. SQL Server automatically notes this and marks the VLF as available, without any action required by the DBA. In the full recovery mode, however, a VLF is not cleared for reuse until the transaction log itself has been backed up. This mode of operation allows SQL Server to provide point-in-time recovery of a database. It also allows the data to be recovered even if the data file becomes corrupted. The full recovery mode is responsible for many systems running out of disk space, especially at companies that do not have a full-time DBA. This is because a database backup will not allow the VLFs to be reused; the only thing that will do that is a transaction log backup. Unless the transaction log is backed up, SQL Server will not be able to reuse any VLFs in the transaction log, and it will simply keep expanding the log file as it needs more room. So make sure you're taking transaction log backups of all your databases that are in full recovery mode. If you don't know how to do this, I'll be covering it in Section 14.
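As a minimal sketch (the database name and backup path are hypothetical), a transaction log backup is a single command:

    -- Back up the transaction log; this is what allows SQL Server to
    -- reuse VLFs in a full recovery model database.
    BACKUP LOG SalesDemo TO DISK = 'F:\Backups\SalesDemo_log.trn';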
Here we can see an example of a log file. It has five virtual log files inside of it. The ones currently in use are virtual log files three and four; virtual log file five has not been used yet. As time goes on and more transactions are done, virtual log file four will fill up, and then virtual log file five will be used. If we're in full recovery mode, virtual log files one, two, three, and four will not be able to be reused until the transaction log is backed up. So if that's the case, SQL Server will have to expand the log file and create virtual log files six, seven, eight, etcetera, and it will keep growing. If, however, you did take a backup of the transaction log, then virtual log files one through four will be marked available for reuse, and in that case, when SQL Server gets to the end of virtual log file five, it will cycle around again without making the transaction log any bigger, go back to the beginning of the file, and start using virtual log file one.

It's important to note that SQL Server cannot use instant file initialization for log files. When you create a new log file, or when SQL Server has to expand an existing log file, it must write out all zeros to the entire file, or to the new space being added to an existing file. This is because the log file is used for disaster recovery, and SQL Server has to be able to tell where the valid entries in the log file end. To see why this is, we need to remember how instant file initialization works and what would happen in the event of a database crash. With instant file initialization, the program requests a file to be created at a particular size, say 10 megabytes. The operating system creates the file and allocates 10 megabytes of disk space to it. At this moment, the file contains whatever data happened to be on those disk sectors at the time. Until the program starts writing to the file, zeros will not have been written to clear it out. Furthermore, zeros are only written as the program uses more of the file. So if our transaction log wrote five megabytes of data, the last five megabytes of that file would still contain essentially random data from whatever file happened to use those disk sectors previously. Now suppose SQL Server crashed and you needed to read the log file to rebuild the database. There would be no way for SQL Server to tell where the valid data ends and where the random old data starts. Therefore, log files must always be zeroed out before they can be used. This allows SQL Server to determine where to stop reading the log file in a disaster recovery situation.

As with data files, shrinking a log file is bad, but for a different reason. Recall that the log file is a circular file, and SQL Server will grow the log file only if there are no more VLFs available. If you're backing up the log file on a regular basis, it will grow to the size needed to support the volume of transactions the database is experiencing. Shrinking the log file will simply force SQL Server to grow it again later, and you'll become locked in a cycle of shrinking and growing the log file. And because instant file initialization cannot be used with log files, expanding a log file takes a large amount of disk I/O, and that, in turn, slows down performance. Instead of repeatedly shrinking your log file, you should be taking more frequent log file backups, and that will stop the transaction log from growing. If you're taking log backups every hour, increase that to, say, every 30 minutes. This will more quickly clear VLFs and allow SQL Server to reuse them rather than increasing the log file size.
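If you'd like to see the VLFs inside one of your own log files, a sketch: the long-standing (though undocumented) DBCC LOGINFO command returns one row per VLF, and SQL Server 2016 SP2 and later add a documented DMV equivalent.

    -- One row per virtual log file; a Status of 2 means the VLF is active
    -- and cannot be reused yet.
    DBCC LOGINFO;

    -- On SQL Server 2016 SP2 and later, the documented alternative:
    SELECT * FROM sys.dm_db_log_info(DB_ID());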
Lastly, by default, SQL Server creates log files with an extension of .ldf. As we saw with data files, this is merely a convention and is not enforced by SQL Server; you can supply whatever file extension you want at the time of database creation.

8. Data and Log File Growth Settings: So how do you tell what files a database is using? In SQL Server Management Studio, right-click the database and choose Properties, then click on the Files page. You can see the file path and file name being used by, in this case, the data file, and here is the path and file name for the log file. One thing to note is this column here, the Autogrowth / Maxsize column. The autogrowth setting determines the size of the chunk by which SQL Server will expand the file if it needs to grow. For example, if your data file is full and you have it set to grow by 10 megabytes, SQL Server will increase the data file in increments of 10 megabytes until it has the space it needs. Even if it only needed an additional kilobyte in that data file, it's going to increase it by whatever size you set here, and you can set this by clicking this button. Here's the window that appears. Notice there are two options: you can grow the file by a fixed amount specified in megabytes, or by a percentage. Normally you want to grow by a fixed amount, because that allows you to better manage your growth. The percent setting will cause files to grow by an ever-increasing amount. For example, if you have a 500-megabyte data file and you set it to grow by 10%, the first file growth will be by 50 megabytes, the next one by 55 megabytes, the next one by 60.5 megabytes, etcetera. The growth settings for log files should be chosen carefully. If you're managing your log backups, the log file should rarely need to grow. However, sometimes someone will run a query that requires lots of log space, and the log will need to grow. Remember that the log cannot use instant file initialization, so SQL Server will need to write all zeros to whatever size the log file grows by. If you set the log file growth to 100 megabytes, then every time the log file grows, SQL Server will have to write out 100 megabytes of zeros. No other data changes can happen in the database until that has been done, because all data modification statements must be logged, and no other changes can occur until those 100 megabytes of zeros have been written to disk. Therefore, try to keep your log growth setting to a reasonable but small number. I typically use between 10 megabytes and 50 megabytes, depending on how active the database is.
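The same settings can be changed in T-SQL. A sketch, using the hypothetical database and logical file names from earlier:

    -- Grow the log file in fixed 50 MB increments rather than by a percentage.
    ALTER DATABASE SalesDemo
        MODIFY FILE (NAME = SalesDemo_Log, FILEGROWTH = 50MB);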
The other option you can set in this column is the maximum size of the file. This should be self-explanatory, but be aware of the consequences if you set a maximum file size and that limit is reached: the database may come to a standstill. If the maximum size for a data file is reached, no more data can be added to the database, and possibly no more updates can be done. Some updates that require additional space will not be able to be done, but some that don't require additional space can probably still go on. Any read statements, such as simple SELECT statements, would still be able to be processed. However, if it's the log file that runs up against the maximum file size, the database will grind to a halt. Nothing can be done, because SQL Server requires that all data modifications be logged, and even some commands you might think would not be logged might not be available. One such command is the log backup. This command, although it does not actually change data in the database, is a logged command, and therefore it can't be executed if the log file is full. The astute student will realize this poses something of a dilemma: in order for the log file to be able to be reused, you need to make a backup of it, but if it's full and it can't log anything, then you can't make a backup. So what do you do in this case? There are two options. You can increase the maximum file size of the log file by an amount equal to the growth setting and then immediately take a backup. If that's not an option, if, for example, the hard drive is full and you don't have any other space on that drive, the other option is the only situation in which it is advantageous to have more than one log file on a database. In this case, you would add another log file to the database on a different drive that has space, and then make your backup. To do that, you click Add and give it a logical file name, change its type to a log file, set the size settings to whatever you want them to be, enter the path, type in the file name, and then click OK. You can now take your log backup, because SQL Server will start using that new log file you made. Keep in mind, though, this is for emergency situations: do this, take your backup, and then immediately get rid of that extra log file, because otherwise it serves no purpose. This is really just an emergency measure to resolve the problem of running out of log file space.
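The same emergency procedure in T-SQL might look like this (a sketch; all names and paths are hypothetical, and SQL Server will refuse to remove the extra file while it still contains active log):

    -- 1. Add a temporary second log file on a drive that still has free space.
    ALTER DATABASE SalesDemo
        ADD LOG FILE (NAME = SalesDemo_EmergencyLog,
                      FILENAME = 'G:\SQLLogs\SalesDemo_emergency.ldf',
                      SIZE = 200MB);

    -- 2. Take the log backup, which clears VLFs in the original log for reuse.
    BACKUP LOG SalesDemo TO DISK = 'F:\Backups\SalesDemo_log.trn';

    -- 3. Remove the extra log file once it is no longer in use.
    ALTER DATABASE SalesDemo REMOVE FILE SalesDemo_EmergencyLog;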
9. Filegroups And Full Text Indexes: This module covers some information about files that doesn't really fit into the other sections. SQL Server can group files into filegroups. There are two primary reasons why people do this. The first is performance. If you group all your data files in one filegroup and place those on one drive, and group all your index files in another filegroup on another drive, SQL Server can access both of those files simultaneously, thus improving performance. By using different drives with different I/O paths, you can greatly reduce the disk I/O delays that are the bottleneck of most SQL Server systems. This is especially true if the indexes are what are known as covering indexes, meaning the index contains all the data needed for a query and no lookups to the underlying table are needed. The other primary reason for using filegroups is disaster recovery flexibility. An Enterprise Edition-only feature of SQL Server is the ability to do an online piecemeal restore. This is a process whereby the backup files are restored in stages, one filegroup at a time. After each step, some tables in the database, namely those stored in the primary filegroup and in the restored filegroups, become available to users while the other filegroups continue to be restored. This can be handy for large, critical databases that have to have high availability. Filegroups can also be set to read-only. Normally, as part of the enforcement of the ACID principles, SQL Server will acquire locks on data in some cases when it is being accessed. This is to prevent two commands from modifying the same data at the same time, or one command from reading the data before another command has finished changing it. A consequence of this is that one command will sometimes have to wait for another to finish, which can slow a query. Now, when SQL Server detects that a filegroup is read-only, it's smart enough to know that that data cannot be changed. Therefore, when querying tables in that filegroup, SQL Server won't bother taking locks on the data, and this can increase performance.

As I've alluded to earlier, when SQL Server needs to expand a file, whether it's a data file or a log file, it will increase the size in certain increments, which are defined at the database file level. Note that once SQL Server has grabbed the disk space for a database, it will not return it to the operating system unless you perform a database shrink. For example, if you have a database that's 10 gigabytes in size and you delete four gigabytes of data, the size on disk of that database stays 10 gigabytes. Again, this is a performance enhancement, because expanding a file is an expensive operation from a disk resource standpoint; SQL Server just retains the disk space so it can be used later. The only way SQL Server will release the extra space back to the operating system is by shrinking the file. I'll show you how you can tell how much empty space is in a file with SSMS.

Lastly, there's another group of files that SQL Server may use in addition to data files and log files, and these belong to full-text indexes. A discussion of full-text indexing is beyond the scope of this course, but just know that it's a feature of SQL Server that is not installed by default, and it provides a more robust method of searching large fields of text than the standard SQL Server text string search commands. The files used by full-text indexes are called catalogs, and they're stored in a subfolder named FTData under the folder where SQL Server is installed, which is normally C:\Program Files\Microsoft SQL Server. Now, switching back to SSMS to see how much empty space is in the file: you can right-click the database and choose Properties, and over here, under Space Available, you can see we have 14.75 megabytes. Total file size is 205 megabytes, so there are 14.75 megabytes of free space in this database file.
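You can get the same numbers with a query. A sketch, run in the database you're interested in:

    -- Size and free space per file, converting 8 KB pages to megabytes.
    SELECT name,
           size / 128.0 AS SizeMB,
           size / 128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128.0 AS FreeMB
    FROM sys.database_files;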
10. Database Settings: In this module, we'll talk about a couple of common database settings. There are several database options that DBAs should be aware of that can adversely affect performance. These can be seen in SQL Server Management Studio by right-clicking the database name, choosing Properties, and then selecting the Options page. This is also where you can set the recovery model of the database: Simple, Full, or Bulk-logged. I'll talk about these more in Section 7. The first five options are ones we should probably pay close attention to.

Auto Close: by default, SQL Server will hold open any files used by the databases on the SQL Server. If you attempt to move, delete, or edit them outside of SQL Server via the operating system, you'll get a file-in-use error. If you set this property to true, SQL Server will report the files as not in use to the operating system whenever there are no connections to that database. The reason you want this turned off, or set to false, is that opening a file for use takes time. When SQL Server opens a database file, it runs a brief series of checks to make sure the file does not contain certain types of corruption. If the database is repeatedly being closed and opened, this will result in decreased performance.

Auto Create Statistics: SQL Server maintains statistics about the data stored within tables and their indexes. This helps SQL Server access the data faster. Setting this option to true allows SQL Server to create statistics it feels it needs in order to perform better.

Auto Shrink: absolute certainties are rare with most SQL Server configuration options; there can always be found some case where one setting will work better than the other. That's why the option exists. However, this is the exception to that rule. This option should always be set to false, for the reasons discussed in Module 1 of this section. Shrinking databases is bad and should only be done in rare situations. Enabling this option tells SQL Server to shrink the database whenever possible. Not only will this create massive fragmentation in your database, you also have no control over when the shrink occurs. Your server could be chugging along handling thousands of requests a second, and all of a sudden the shrink will kick off. That would cause a massive slowdown in processing. Keep this set to false.

Auto Update Statistics: as your data changes, the statistics SQL Server has created on that data will get out of date. By setting this to true, SQL Server will detect when this happens and automatically recalculate new statistics. The threshold is usually when about 20% of the data has changed. If you set this to false, then you, the DBA, will be responsible for telling SQL Server when to update the statistics. You can do this, but you probably have other things to worry about.

Auto Update Statistics Asynchronously: when SQL Server decides statistics need to be updated, when do you want them updated? With this set to false, if SQL Server needs to create an execution plan for a query and it discovers that the statistics are out of date, it will update those statistics before it makes the plan for the query. If you set this to true, SQL Server will create the query plan using the out-of-date statistics. That may save some time in creating the plan, but it could result in a plan that performs poorly.

The way I remember these preferred settings is as follows: false, true, false, true, false. If you read down the column in the order listed, the settings follow that alternating pattern.

The other option to look for is in the Recovery section, further down on the screen, and that's Page Verify. This should be set to CHECKSUM; the other options are TORN_PAGE_DETECTION and NONE. As a way to detect corruption of data, SQL Server writes a checksum based on all the data stored in each 8 KB page of the database. Older versions of SQL Server used the method called torn page detection; with SQL Server 2008, a new, more robust method of detecting errors, called a checksum, was added. The newer checksum method is recommended because it can detect corruption that torn page detection may miss. If you have databases that were migrated from an older version of SQL Server, they may still be operating under the torn page detection model. This is a setting you can change at any time; the actual data pages will not be changed to use the new method until some data on each page has changed. The other settings on this page can generally be ignored by the involuntary DBA, as they rarely, if ever, need changing from the defaults.
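Expressed in T-SQL, those recommended settings look like this (a sketch against the hypothetical SalesDemo database):

    ALTER DATABASE SalesDemo SET AUTO_CLOSE OFF;                    -- false
    ALTER DATABASE SalesDemo SET AUTO_CREATE_STATISTICS ON;         -- true
    ALTER DATABASE SalesDemo SET AUTO_SHRINK OFF;                   -- false
    ALTER DATABASE SalesDemo SET AUTO_UPDATE_STATISTICS ON;         -- true
    ALTER DATABASE SalesDemo SET AUTO_UPDATE_STATISTICS_ASYNC OFF;  -- false
    ALTER DATABASE SalesDemo SET PAGE_VERIFY CHECKSUM;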
All of these settings that we've just gone through can be configured on the model database, and then any newly created databases will inherit these settings when they're created. Now, one important thing to note here with regard to SharePoint databases: SharePoint databases are somewhat unique beasts. SharePoint itself can be configured to perform database maintenance tasks such as updating statistics, defragmenting indexes, etcetera, and it has its own set of recommended database settings, which are different from the ones I've just shown you here. If you have a SharePoint database, I recommend searching Microsoft's website for their white paper on configuring databases for use in SharePoint. The latest version, as of this recording, is for SharePoint 2010 and was written by Paul Randal. I've included a link to it in the supplementary text file for this section.

11. Recovery Models and Backup Types: SQL Server has three types of database recovery settings: simple, full, and bulk-logged. Which type you use depends on how much data you're willing to lose in the case of a disaster. The simple recovery model means SQL Server will only take full backups of the database. The virtual log files in the transaction log are marked as able to be reused after each transaction has been written to the data file. If a disaster occurs or the database becomes corrupted, you can only recover to the time the last full backup was made. There is no point-in-time recovery, meaning you cannot recover your database to a particular point in time; your only option is to restore to the time of your last full backup. If you're willing to tolerate up to 24 hours of data loss, you can take a full backup each night and you'll be covered. I'm going to skip the bulk-logged recovery model for now and come back to it later, because it's something of a hybrid model. The full recovery model means SQL Server can recover a database to a particular point in time, including up to the second before a disaster or crash. This setting requires that the transaction log be backed up on a regular basis. Note that full backups must still be made regularly as well. If a crash happens, the restore process proceeds as follows: a full backup is restored; successive transaction log backups are restored until you get to the transaction log backup that contains the time you want to stop at; then you restore that transaction log, telling SQL Server to stop at a particular point in time. In this scenario, you would typically be taking full or differential backups (which I'll cover in the next module) nightly or weekly, and transaction log backups every hour, or every 15 minutes, or however frequently you need in order to meet your recovery point objective: the maximum allowable data loss. For example, if your company can only tolerate losing 15 minutes of data, you should be taking log backups every 15 minutes.
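Checking and changing the recovery model in T-SQL is straightforward; a sketch:

    -- See each database's current recovery model.
    SELECT name, recovery_model_desc FROM sys.databases;

    -- Switch the hypothetical SalesDemo database to full recovery.
    ALTER DATABASE SalesDemo SET RECOVERY FULL;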
This greatly speeds up the import process because logging is kept to a minimum. The downside is that you lose the ability to perform a point-in-time recovery during the transaction log period in which the bulk load occurred. For example, suppose we're in bulk-logged recovery mode and we're taking log backups every hour on the hour. At 12:30 a.m., a nightly process performs a bulk load of data that finishes at 12:40 a.m. In this scenario, if the system crashed, you could restore to 12 a.m. or earlier, or to 1 a.m. or later. You could not restore to 12:05 a.m. or 12:45 a.m., because those times are within the log file backup in which the bulk load occurred.

The reason for this behavior goes back to how the transaction log works. Normally, the log contains all the commands that modify data in the database, and during a log restore SQL Server simply replays all the transactions in order to bring the database back to the desired state. However, in bulk-logged recovery mode, a bulk insert is logged differently: instead of individual insert commands being logged, SQL Server only logs the data pages that have changed. It has no idea of the order the data changes occurred in, or whether the same data on a page changed multiple times. For this reason, it cannot process a point-in-time recovery.

To overcome this limitation, companies who use bulk-logged recovery mode usually do something like the following. They run normally in full recovery mode. When it comes time to do a bulk import, they take a log backup, switch the database to bulk-logged recovery mode, perform the import, then switch the database back to full recovery mode and immediately take a log backup. This minimizes the window of time in which point-in-time recovery cannot be performed.
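As a rough sketch, that workflow looks something like this in T-SQL; the database name, backup paths, and import step are placeholders for illustration:

    -- Running normally in full recovery mode; take a log backup before the switch.
    BACKUP LOG SalesDB TO DISK = N'D:\Backups\SalesDB_PreImport.trn';
    ALTER DATABASE SalesDB SET RECOVERY BULK_LOGGED;
    -- ...perform the bulk import here (BULK INSERT, bcp, etc.)...
    ALTER DATABASE SalesDB SET RECOVERY FULL;
    -- Immediately take another log backup to restore point-in-time coverage.
    BACKUP LOG SalesDB TO DISK = N'D:\Backups\SalesDB_PostImport.trn';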
SQL Server can create three different types of backups: full backups, differential backups, and log backups. A full backup is pretty straightforward: it contains a complete copy of the database. If you restore a full backup, the only file you need is that backup file. A differential backup is a backup that only contains the changes made to the database since the last full backup was made. This is useful if you have a very large database; the differential allows you to reduce the backup time by only backing up the data pages that have changed since the last full backup. In order to restore a differential backup, you must have the last full backup that was made before the differential backup. Also note that these are not incremental backups: they contain all the changes that have been made to the database since the last full backup, not since the last full or differential backup. Transaction log backups we mentioned previously; this is a backup of the transaction log, and it's required for databases that are in full recovery mode. You use transaction log backups to restore the database to a particular point in time.

This can be kind of complicated, so let's look at an example. This slide shows a typical backup schedule. A full backup is taken on Sunday night, and a differential backup is taken on the other nights. Transaction log backups are taken every hour, and they're not shown on the slide. This group of backups is called a backup chain, because you often need a chain of files to restore the database to a particular point in time.

Let's go through some various scenarios. If you need to restore the database to Sunday night, you only need to restore the Sunday night full backup. If you need to restore the database to Wednesday night, you need to restore the Sunday night full backup and then the Wednesday night differential backup. If you need to restore the database to, say, Friday at 8:45 a.m., you need to restore the Sunday night full backup, the Thursday night differential backup, and all transaction log backups from 1 a.m. Friday morning to 8 a.m. Friday morning. Then, when you restore the 9 a.m. Friday log backup, you tell SQL Server to stop at 8:45 a.m.

Note that a missing file can destroy your backup chain and limit your ability to restore the database. For example, if we do not have the 3 a.m. Friday transaction log backup for some reason, we will not be able to restore to 8:45 a.m. on Friday; there would be a gap in the log backups from 3 a.m. to 4 a.m. The latest time we could restore to would be the last transaction in the 2 a.m. Friday log file. Likewise, if the Sunday full backup was deleted or missing, none of these restore scenarios would be possible, because they all start with that Sunday night full backup.

A couple of other points I'd like to make regarding backups. Restoring a transaction log backup file is slow. Recall that when we restore a transaction log, SQL Server is actually replaying all the commands that happened during that time frame. This can be very slow. It's quicker to use differential backups to get as close as possible to the desired recovery time and then use log backups to get right up to the actual time. For instance, in our previous example, if you had all the log backups, it would be possible to restore to 8:45 a.m. Friday by restoring the Sunday night full backup and every single transaction log backup taken from Sunday night through Friday morning. But that would take a very long time, because you would basically be replaying every single transaction that occurred during the entire week.

SQL Server 2008 Enterprise Edition, and SQL Server 2008 R2 Standard Edition and later, support native backup compression. A natively compressed backup can, however, be restored to any edition of SQL Server. You cannot restore a backup made on a newer version of SQL Server to an older version. For example, you cannot restore a backup made on a SQL Server 2012 server to a SQL Server 2008 R2 server. Going the other way, you can only restore a backup from the prior two versions of SQL Server to a newer version. For example, you can restore a backup made on SQL Server 2008 to SQL Server 2012, but you cannot restore a backup made on SQL Server 2000 to a SQL Server 2012 server. That's because there are more than two versions between those: you have SQL Server 2000, 2005, 2008, and then 2012. If you're ever in a situation where you need to restore a SQL Server 2000 backup to a SQL Server 2012 server, for instance if you're upgrading your servers and moving everything to a later version, you need to do this in basically two steps: take your 2000 backup and restore it to a SQL Server 2008 server, then create a new backup of the database on the 2008 server and restore that backup to the SQL Server 2012 server. Note that when counting version steps for this, SQL Server 2008 and SQL Server 2008 R2 are considered one version, not two.
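To make the Friday 8:45 a.m. scenario concrete, here's a minimal sketch of what that restore chain might look like; the database name, file paths, and date are illustrative:

    -- Restore the Sunday full backup, leaving the database ready for more restores.
    RESTORE DATABASE SalesDB FROM DISK = N'D:\Backups\SalesDB_Sun_Full.bak' WITH NORECOVERY;
    -- Restore the Thursday night differential.
    RESTORE DATABASE SalesDB FROM DISK = N'D:\Backups\SalesDB_Thu_Diff.bak' WITH NORECOVERY;
    -- Restore the 1 a.m. through 8 a.m. Friday log backups in order (first one shown).
    RESTORE LOG SalesDB FROM DISK = N'D:\Backups\SalesDB_Fri_0100.trn' WITH NORECOVERY;
    -- ...restore the 2 a.m. through 8 a.m. log backups the same way...
    -- Restore the 9 a.m. log backup, telling SQL Server to stop at 8:45 a.m.
    RESTORE LOG SalesDB FROM DISK = N'D:\Backups\SalesDB_Fri_0900.trn'
        WITH STOPAT = N'2023-01-06 08:45:00', RECOVERY;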
12. Backup And Restore Demo:

Now we'll walk through a demo of doing a manual backup and restore. Again, normally your backups are going to be automated using SQL Server Agent, but I'll show you how to do a manual backup here. And of course, your restores are usually done manually, and I'll show you how to do that too.

To back up a database, you can right-click on the database in SQL Server Management Studio, go to Tasks, and then Back Up. We're going to first do a full backup, so for the backup type we'll leave it at Full. We're backing up the database, and we're backing up to disk, so we'll click Add, pick a file name (this is the default path), and we'll call it DemoFullBackup.bak. Now, we could run it right here, but I'm going to do one more thing: go over to the Options page and check "Verify backup when finished" and "Perform checksum before writing to media". These are basically some data integrity checks you can do just to verify that your backup is good. When you choose "Verify backup when finished", what will happen is SQL Server will make the backup file, write it out to disk, and then read it all back in. It won't restore it, but it will read it again to make sure everything seems right and nothing was corrupted. "Perform checksum before writing to media" means that when SQL Server writes the backup file to disk, it's going to put a checksum in there. Then, when you go to restore this, SQL Server will see there's a checksum there, and as it restores the database it will also calculate the checksum and make sure the two values match. So, back to the General page (I'll talk about some of these other options later); right now we'll just do the backup and say OK. You can see it's executing here, making the backup, and it's done.

OK, that was a full backup. Now we'll do a differential backup. It's almost identical: we choose Differential as the backup type and remove that file. You could append this to the same file we just made, but I'm not going to. Again we pick a name, and we'll call this one DemoDiffBackup.bak. Again, on the Options page I'll check those two boxes, go back to General, and back this up. You'll notice that was a lot faster than the full backup, because it only backed up the data pages that changed; since we're not using the database, nothing changed, and it went pretty much instantaneously. Similarly, you can do the same thing with a transaction log backup. Well, actually, we can't do a transaction log backup in this case, because we're in the simple recovery model. If you were in the full recovery model, one of your options here would be a transaction log backup.
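For reference, here's a minimal T-SQL sketch of what those dialogs are doing under the covers; the file paths are illustrative:

    -- Full backup with a checksum, matching the demo's integrity options.
    BACKUP DATABASE AdventureWorks2012
        TO DISK = N'D:\Backups\DemoFullBackup.bak' WITH CHECKSUM;
    -- Read the backup back to verify it (the "verify when finished" option).
    RESTORE VERIFYONLY FROM DISK = N'D:\Backups\DemoFullBackup.bak' WITH CHECKSUM;
    -- Differential backup: only pages changed since the last full backup.
    BACKUP DATABASE AdventureWorks2012
        TO DISK = N'D:\Backups\DemoDiffBackup.bak' WITH DIFFERENTIAL, CHECKSUM;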
Now we'll go through and do a restore. It's very similar: just go to Tasks, then Restore. This is where, as we talked about, you can restore files and filegroups; if you're using Enterprise Edition you can do a piecemeal restore. In this case, we're just going to do a database restore, and specifically a differential restore. In order to do that, we first need to restore the full backup we made and then restore the differential backup we made. You'll notice that when you bring up the restore tab, it defaults to restoring to the last backup taken; SQL Server tracks all the backups you've made. If you were in full recovery mode, this would actually say the latest possible time and show you a timeline, and you could pick what time you wanted to stop at. We can't do that in this case, because we're in the simple recovery model. But you can see that, because we want to go to the last backup taken, which was the differential backup, SQL Server has automatically said, "OK, I need to restore this full backup first, and then I need to restore this differential backup." It remembered where you wrote those files out to, and it will go out and pull them.

On the Options page, because the database already exists here, we're going to overwrite it, so we have to click the overwrite box. We don't have any replication, which we haven't talked about yet (it's a method for copying data to other servers), so we don't need to worry about that option. You can also restrict access when you're restoring the database. For example, if you've got a database that's being used by hundreds of users, when you restore it you may want to get in there first and make sure everything is right before you let everyone loose in the database again. You can restrict it so that only the database administrator can get into the database after it's been restored. We're not going to do that in this case. And that's pretty much all we need to worry about here, so we just click OK and SQL Server will go through and restore the database. If we switch back over here and refresh... I didn't do it fast enough; it beat me. But normally, if the restore takes a little while, when you refresh this you'll see it say "AdventureWorks2012" and then, in parentheses, "Restoring", to let you know the database is in the process of being restored. In this case it completed too fast for me to show that. And there you have it: that's our database backup and restore.

13. An Introduction To SQL Server Agent:

Section 8: SQL Server Agent. SQL Server Agent is SQL Server's job scheduling system. It also handles sending alerts when things go wrong, provided it is configured to do so. Note that SQL Server Agent is its own service. It is possible to run SQL Server without SQL Server Agent running, but in most cases, since SQL Agent is what kicks off the backup jobs, it's not advisable to do so. You should configure SQL Server Agent to start on system startup. It's also recommended to configure it to auto-restart if it, or SQL Server itself, stops unexpectedly. These options can be found on the General page of the SQL Server Agent properties. To get to that, in SQL Server Management Studio, right-click on SQL Server Agent and choose Properties. Here's the General page, and here are the options to restart SQL Server and SQL Server Agent if they stop unexpectedly.

The Advanced page contains options that the involuntary DBA probably won't have to deal with, and those defaults can be accepted.

The Alert System page contains some important information. First, in order for SQL Agent to send email alerts, you must enable a mail profile for it to use; I'll discuss mail profiles in the next section. If you still have anyone that uses pagers, you can enter pager information here. Note, however, that much like pagers themselves, this feature has been deprecated and will be removed from a future version of SQL Server. The fail-safe operator is someone who receives an alert if the designated operator cannot be reached. Token replacement instructs SQL Server to replace certain tokens, or variables, in the alerts it sends out.
Tokens can be placed in jobs and job steps. These are another one of those things that the involuntary DBA probably won't be dealing with.

The Job System page has just two items on it. The shutdown time-out interval tells SQL Server Agent how long to wait, when it's in the process of shutting down, for running jobs to complete before canceling them. If SQL Agent starts the shutdown process while a job is running, it will wait this amount of time (the default is 15 seconds) for the job to complete. If the job has not completed by then, it will cancel the job and continue shutting down. The job step proxy account lets you specify a non-administrative proxy account for jobs to run under. Starting with SQL Server 2008, SQL Agent supports multiple proxy accounts, so you only need to worry about this option when running SQL Server 2005 or earlier. You most likely can accept the defaults for both of these settings.

The Connection page allows you to specify an alias to be used when SQL Agent connects to the local host server, that is, the SQL Server database engine that SQL Agent is running against. This is useful if you cannot use the default connection options for some reason, or need to specify a Windows account or SQL login to use. This is another section where you most likely can just accept the defaults, which is no alias or credentials.

The last page, History, is where you tell SQL Server how much job history to keep. The outcome of each job, whether it succeeded or failed, is logged to the msdb database, along with such information as the job name, when it ran, who started the job, etcetera. The option to limit the size of the job history log is fairly self-explanatory: you can select a maximum number of rows, in this case 1000, and a maximum number of job history rows per job. The last checkbox, "Remove agent history", is a little misleading. If you check this and select to remove all job history older than a certain amount of time, in days or weeks (in this case, four weeks), that history is deleted only once, when you click OK and close this window. If you want to regularly purge this job history, so that you regularly remove entries older than, in this case, four weeks, you'll need to create your own recurring job that does this. I offer some code that does this in the last section of this course.

Note that this is actually an important maintenance step. For example, if you're taking transaction log backups on a regular basis, that's done by a scheduled job. If you're doing backups every 15 minutes, you're adding 150-some rows to the job history log every 24 hours (technically a little more than that, because there are a couple of other history tables involved). Needless to say, you're adding many records to the msdb database every 24 hours, and without some sort of purging process, your msdb database will grow to unmanageable proportions. I've seen systems where the msdb database had grown to be gigabytes in size because log backups had been running every five minutes for years without the job history being maintained.
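Until we get to that code in the last section, here's a minimal sketch of the kind of recurring purge step I'm describing, using msdb's built-in purge procedure; the four-week cutoff matches the example above:

    -- Delete job history older than four weeks; schedule this as its own recurring job.
    DECLARE @cutoff DATETIME = DATEADD(WEEK, -4, GETDATE());
    EXEC msdb.dbo.sp_purge_jobhistory @oldest_date = @cutoff;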
The first subfolder under SQL Server Agent is the Jobs folder. If you open this in Management Studio, you get a list of all the jobs defined on the system. In this case there's only one, this syspolicy_purge_history job, which is a default job that SQL Server creates. You can add new jobs by right-clicking and choosing New Job, or modify an existing job by right-clicking the job and choosing Properties, and I'll show you more of that in the last section of this course.

The next entry under SQL Server Agent is the Job Activity Monitor. This lets you see which jobs are currently running on the server. Double-clicking it brings up a snapshot showing all the jobs on the server; in this case there's only one right now. If it were running, there would be a little green play arrow over its icon, and the status would be Running. This is a static view of the jobs, taken at the point in time you opened the window. If you want it to update periodically, you can click on View Refresh Settings and have it refresh every X number of seconds. Note the warning: don't refresh more often than every 30 seconds, or you may degrade performance.

The next subfolder is the Alerts folder. SQL Server allows you to set up alerts so that when things go wrong, SQL Agent will send out a notification to whoever you define. You can set up alerts for specific databases or just for the server in general. You can also set up alerts for specific error numbers, or for any errors that have a certain severity. SQL Server grades errors with different severity levels; some are critical, meaning the server is going to crash, and some are more or less just warning messages. In general, I recommend setting up alerts for errors with severity 17 through 25, inclusive, and also for error number 825. In the last section of this course I'll give you a script that will set those up for you, and you'll see how that's done. Errors with a severity of 17 through 25 represent problems with your hardware or software; these errors, if not corrected, could cause your SQL Server to crash or perform horrendously.

Error 825 is more of a warning than a true error, but it's something that requires attention. When SQL Server writes to disk, it will try to write the data up to four times before it gives up and reports an error. If it tries a couple of times, say twice, and then succeeds on the third try, normally SQL Server is not going to report a high-severity error; it will just keep on chugging along. However, something's not right with your disk drive: SQL Server tried to write data to the disk twice and failed before finally succeeding on the third try. This could indicate a possible impending disk drive or disk controller failure, and that is what error 825 signifies. So if you put in a special alert for that particular error, it will give you a clue that your disk drive is possibly about to fail.

If we were to set up a new alert, we'd right-click the Alerts folder and choose New Alert. Here you can give it a name and a type; in this case we're defining a SQL Server event alert. "All databases" would apply it to all the databases on the server. And here are the severity levels I was talking about: you want to set up alerts for 17 through 25, and then in particular for error number 825, which is the one I just told you about. The Response page is how you want SQL Server to respond when it gets this error. You can have it execute a job; so if you have a job that logs some diagnostic information when this error occurs, you can have it do that. You can also have it notify some operators, and we'll talk about operators in a little bit, but basically they are accounts that SQL Server can send an email to, or a pager, if you still use pagers, and that sort of thing.
The Options page is where you tell SQL Server what information to include in the alert it sends out. So if you're getting an alert for, say, error 825, the alert can say that SQL Server encountered error 825 and include the server name and the date and time of the occurrence; here's where you tell it to include that information in the alert. You can include it only in email, or include it in both the email and the pager. There's also net send, which pops up a message on somebody's terminal; I believe net send is also going to go the way of the pager notification, as it has been deprecated as well. You can also put a delay between responses in here. Sometimes you'll get errors, particularly out-of-space errors, where if nobody fixes the problem right away, the error will keep recurring, and on a busy system you could be generating these errors every second or more frequently. So here you can tell SQL Server: I know you're getting these very frequently, but don't send out an alert more than once every X number of minutes or seconds.

The next node under SQL Server Agent is Operators. An operator is a person who receives the alerts SQL Server Agent sends out. If you want to set up a new operator, right-click and choose New Operator. You provide a name, which is how you refer to it within SQL Server, and you provide an email address, a net send address, or a pager address (again, as mentioned, the pager options are deprecated and will be going away). You can also set up schedules for the different operators; so if you have an on-call rotation where certain people respond to alerts during certain hours of certain days, you can define that here.
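As a preview of the script I'll give you in the last section, here's a minimal sketch of creating an operator and a couple of these alerts in T-SQL; the names and email address are made up for illustration:

    -- Create an operator to receive notifications.
    EXEC msdb.dbo.sp_add_operator @name = N'DBA Team',
        @email_address = N'dba-team@example.com';
    -- Create an alert for severity 17 (repeat for severities 18 through 25).
    EXEC msdb.dbo.sp_add_alert @name = N'Severity 17 Error',
        @severity = 17, @delay_between_responses = 60;
    -- Create an alert for error number 825, the read-retry warning.
    EXEC msdb.dbo.sp_add_alert @name = N'Error 825',
        @message_id = 825, @severity = 0, @delay_between_responses = 60;
    -- Tie the alerts to the operator via email (notification method 1 = email).
    EXEC msdb.dbo.sp_add_notification @alert_name = N'Severity 17 Error',
        @operator_name = N'DBA Team', @notification_method = 1;
    EXEC msdb.dbo.sp_add_notification @alert_name = N'Error 825',
        @operator_name = N'DBA Team', @notification_method = 1;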
The Proxies node allows you to provide credentials for various subsystems to use. For example, if you have a job that runs ActiveX scripts, you can specify an account that has the permissions needed to do whatever that ActiveX script does. This is another one of those items that most involuntary DBAs can just leave as-is. The last node is the SQL Server Agent error logs, and I'll talk about those in more detail in the next section.

14. Management Features In SSMS:

SQL Server contains some features to make managing your SQL Server instance a little bit easier. If you expand the Management node, you'll see several of these. The first few are somewhat sophisticated options, and they're outside the scope of this course, but I'll briefly describe them so you know what they do and how they might affect the performance of your SQL Server.

The Policy Management option is similar to Windows Active Directory group policies. It allows you to define and enforce certain policies for your SQL Servers. Furthermore, you can create what is called a central management server that will monitor all the SQL Servers in your environment and make sure they meet your configuration criteria, such as a standard build configuration. For example, suppose you want to make sure that all the databases on all your servers are set to use a fixed value for database file growth and not a percentage value. You can define a rule for this, and the central management server will monitor all the SQL Server instances in your environment and alert you if it finds any databases that are not configured this way. You can also use a central management server to force the SQL Server to change to become compliant with your rules. This is a very powerful feature that was introduced in SQL Server 2008.

SQL Server Data Collector is a part of SQL Server that helps you define collections of data to gather in order to perform troubleshooting, diagnostics, and historical comparisons; for example, if you want to see how the server is performing now compared to how it was performing two months ago. It's also beyond the scope of this course. Notice that the icon has a little red down arrow on it; this indicates that it's not running.

The next item is Resource Governor. This feature lets you control how SQL Server behaves. You can configure what are called resource pools, which are basically groups of similar resources, such as processors, that you can manage. For example, suppose you have one database that is absolutely critical to your company, and it always needs to be able to execute queries quickly. You can set up a resource pool of processors and assign it to that database, so that database will always have some computing power available to it, no matter how busy the other databases on the system are. Again, this is outside the scope of the course, but note the red arrow indicating that Resource Governor is not active. If it is active on your system and you have users complaining about performance, this will be one of the first places to check. In that situation, I would recommend hiring a consultant to come in and investigate, because an improperly configured Resource Governor can lead to very poor performance.

Extended Events is a new feature introduced in SQL Server 2008. It's a robust event monitoring system that can be used to help troubleshoot performance issues at the server level, the database level, or the individual query level. Many things in SQL Server fire off events as they happen, and this feature can monitor those. For example, when a database file auto-grows, SQL Server fires off an event, and that can be monitored from within SSMS. If you're familiar with the old SQL Server trace tool called SQL Profiler, Extended Events provides enhanced functionality over Profiler, and in fact, in the future, Profiler will be retired and replaced by Extended Events.

And finally, we come to an item we will actually dig into, although we won't be going into it until our final section: maintenance plans. A maintenance plan is a group of steps that performs maintenance actions on the server. These tasks include creating backups, defragmenting indexes, updating statistics, etcetera. Maintenance plans can contain conditional branching logic, and they can also be scheduled to run at specific times. In our final section, I'll walk you through using a maintenance plan to create some jobs to perform backups of your databases.

The next folder is SQL Server Logs, and if you expand this, you'll see entries for the current log and the previous six logs. By default, SQL Server retains the last six logs, but this can be changed by right-clicking the folder and selecting Configure; here, you check the box and specify how many you want to keep. A new log file is created when SQL Server restarts, when the system reboots, or when SQL Server is told specifically to cycle the logs, which can be done as part of a maintenance plan, and again, I'll show you that in the last section.
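For reference, cycling the logs manually is a one-liner; a quick sketch:

    -- Start a new SQL Server error log without restarting the service.
    EXEC sp_cycle_errorlog;
    -- The equivalent for the SQL Server Agent log.
    EXEC msdb.dbo.sp_cycle_agent_errorlog;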
You can view a log in SQL Server Management Studio by double-clicking on it. You can also filter the log if you need to, and you can search the results. If, for some reason, SQL Server is not running, you can also view these logs on the server directly; they're simply text files stored in the Log directory under the SQL Server install folder, which for SQL Server 2012 is usually C:\Program Files\Microsoft SQL Server\MSSQL11.<instance name>\MSSQL\Log. The current file is named ERRORLOG, with no extension, and the previous six are named ERRORLOG.1 through ERRORLOG.6; the lower the number, the more recent. Many of the items that are written to the SQL Server error log are also written to the Windows Application event log, so that's another place you can look.

Here I'll bring up Windows Explorer, and you can see the actual error logs on the disk. In my case they're under C:\Program Files\Microsoft SQL Server, in the folder for my named instance (MSSQL11.Demo2012), then MSSQL\Log, and the current log is just a text file. If you open it with Notepad, you'll see it contains the exact same information you would see in the log file viewer, except the viewer orders it by date backwards, whereas in the text file the oldest entries are at the top of the list.

In addition to the SQL Server logs, as we mentioned in the last section, SQL Agent also has its own set of logs. These look very similar to the SQL Server logs; you can open them up, filter them, and search them the same way, but these are only the log entries created by SQL Server Agent. Again, those log files are stored in the same place; they're just text files, and in this case the current one is called SQLAGENT.OUT and the older ones are SQLAGENT.1 through SQLAGENT.9. As with the error logs, the lower the number, the more recent. And as with the SQL Server logs, every time SQL Agent is restarted it creates a new log file, or you can manually tell it to cycle the log files and it will create a new one.

Database Mail is the SQL Server component that handles sending out email. These emails can be alerts, but they can also be regular emails; a stored procedure might send out information via email, for example. Database Mail uses accounts and profiles to send mail, and it must be enabled in order to be used. To enable it, you simply right-click it and choose Configure Database Mail. If it's not currently enabled (in my case it is, but if it's not on your system), you'll see a little pop-up window asking if you want to enable it before proceeding, and you just say yes. That's all there is to it.

Now let's set up Database Mail. It's already set up on my system, so I'll show you some screenshots. The first step is to create what is called a profile. A profile is a listing of email accounts that can be used to send email; in most cases, that's only going to be one account. When setting up Database Mail for the first time, you can create an account as part of the profile creation process. You give the profile a name, and a description if you want. Then, as part of the creation process, your next step is to create an account: you click Add, then Next, and you'll see this screen. When you create a mail account, you specify a name for the account, and you supply an email address, a display name, and the reply-to email address.
You also supply the server name of the mail server and what port it accepts connections on. If your email server requires credentials to connect, you can supply them here, entering them using Windows authentication or basic authentication. If you do not need to supply credentials, you can select anonymous authentication. To properly configure this screen, you may need to work with your Exchange administrator or email administrator to obtain the correct settings.

Once the account and profile are configured, you need to configure profile security. You can restrict a mail profile to only certain logins, or allow any login to use it. In other words, you can allow only certain logins to send email, or you can allow anyone to send email. You can also specify a default profile, which can be used by everybody to send email. Remember, a profile really links to an account, so you can set up a general account for public use within SQL Server, or, if you want a special account that maybe only your SQL Server Agent can use, you can set it up so only SQL Agent can use that account.

The last step in the setup process is some system parameters you can specify. In most cases the defaults are acceptable; I personally have never had to change these, so you can just accept them, click Next, and everything is set up. Once it's set up, if you want to test it, you can send a test email. In this case I don't have everything completely set up here, but normally you'd choose Send Test E-Mail, a little window would pop up asking you to select a profile to use, you'd type in an email address to send the test to, click Send, and you'd get an email message.
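The same setup can be scripted; here's a minimal sketch using the Database Mail stored procedures, with made-up account, profile, and server names (Database Mail must already be enabled):

    -- Create a mail account (SMTP server details are illustrative).
    EXEC msdb.dbo.sysmail_add_account_sp @account_name = N'DBA Mail Account',
        @email_address = N'sqlserver@example.com', @display_name = N'SQL Server',
        @mailserver_name = N'smtp.example.com', @port = 25;
    -- Create a profile and add the account to it.
    EXEC msdb.dbo.sysmail_add_profile_sp @profile_name = N'DBA Mail Profile';
    EXEC msdb.dbo.sysmail_add_profileaccount_sp @profile_name = N'DBA Mail Profile',
        @account_name = N'DBA Mail Account', @sequence_number = 1;
    -- Send a test message.
    EXEC msdb.dbo.sp_send_dbmail @profile_name = N'DBA Mail Profile',
        @recipients = N'dba-team@example.com', @subject = N'Database Mail test',
        @body = N'If you can read this, Database Mail is working.';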
The next node in the Management folder is the Distributed Transaction Coordinator, also referred to as DTC. The Distributed Transaction Coordinator coordinates transactions that involve more than one SQL Server. Examples of this would be queries that use a linked server, or replication, which is a high availability option we'll discuss briefly in a later section. As a brief aside, as I've kind of talked about before when we were discussing transaction logs, a transaction is a unit of work in SQL Server where each step must succeed or all the steps fail. This is one of the ACID properties we talked about at the beginning of the course. The DTC is responsible for managing transactions across all the servers involved, and it makes sure they all succeed or they are all rolled back, or undone. There are no options to select here for the DTC in SQL Server Management Studio; really, its only use is that you can see whether it's running or not. In this case, it's not running; it's got the red stop icon.

The final node is Legacy, and it contains a subfolder called Database Maintenance Plans. These are different from the maintenance plans we mentioned earlier. These database maintenance plans were Data Transformation Services packages, or DTS packages, and they were created in older versions of SQL Server. DTS packages were introduced in SQL Server 7, and they were expanded a little bit with SQL Server 2000. When SQL Server 2005 was released, DTS packages were replaced with SQL Server Integration Services, or SSIS. However, if your server has been upgraded from an older version of SQL Server, you may still have some DTS packages listed in here.

15. The Dedicated Administrator Connection:

There's one more very important management feature that a DBA should know about, and this is actually one you do not use SSMS for. It's called the Dedicated Administrator Connection, or DAC. This is a special connection to SQL Server that allows an administrator to connect to SQL Server for troubleshooting even if SQL Server is unresponsive to connection requests from other clients, including SSMS. SQL Server does this by reserving one scheduler at all times to process one single thread from the DAC. No other processes can use this thread, so you've got a tiny bit of processing power that's solely reserved for you, the DBA, in case your SQL Server becomes unresponsive.

In order to use the DAC, you must use SQL Server's command line utility, SQLCMD. Additionally, by default, this only works from the server itself, so if you ever need to use it, you'll need to be physically in front of the server or connected via remote desktop. You can change a configuration option to allow the DAC to accept connections from remote machines. The DAC can only accept a single login at a time. It's also always single-threaded, so you can't run any parallelized queries within it. But then, this connection is only meant to be used for troubleshooting purposes anyway, so you shouldn't be running any big, massive queries that need to be parallelized in the first place. However, one consequence of this is that some commands are multi-threaded by nature, and so you can't run them from the DAC. Two of the most important such commands are backup and restore, so it's important to remember you cannot run a backup or restore command using the Dedicated Administrator Connection. Using this connection, however, you can issue SQL commands to the server to look at some of the management views or perform other troubleshooting on the system.

To connect to SQL Server using the DAC, you must use the command line utility, and I'll show you that now. Open up a DOS window, go to the Binn folder of your SQL Server installation, and enter this command: you have SQLCMD, which is the program, then dash capital S, then "admin:" followed by your server name (plus a backslash and the instance name, if you have one). The "admin:" tells SQLCMD to connect using the DAC, and then dash capital U specifies the user name; in this case we're connecting as sa, which is the system administrator account. You're then prompted for a password, and you connect. At this point you're in SQLCMD, and you can issue any SQL commands you want: you type a line, then type GO, and it gets executed. If you want to run a particular query, you can type that query in here. One command that's probably very useful is sp_who; I'll show you what this does. The formatting is, of course, really ugly in a DOS window, but it will basically show you all the SPIDs (server process IDs), the status of each process, the logins, the database names; basically, it shows you all the connections to SQL Server at this point in time and what they're doing in the databases. You can see all these background processes; these are standard things you'll usually see on every single SQL Server. We've also got one here called SQLAgent - Demo2012;
this is my SQL Agent service that's connected to the SQL Server. The ones labeled sa, those are me, logged in, and I still have my Management Studio window open in the background, so I can see some connections from there as well. This is one way to get in and look at some databases if you can't get into SQL Server by any other means. And then to quit, you type exit and you're out.

One thing to note here: because the DAC only allows a single connection, you cannot connect to it using SQL Server Management Studio. You cannot go in here, type "admin:" in front of the server name, and try to connect. You can't do this because, behind the scenes, SQL Server Management Studio makes multiple connections to the server by default; that's the way it works, and you can't change that. So you cannot use SSMS to connect to the DAC; you have to use the command line interface. It's kind of ugly to use, but if you can't get into SQL Server any other way, it's kind of your secret back door to get in there.
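Putting that together, a DAC session looks something like this sketch; the path, server, and instance names are examples, and the query output is omitted:

    C:\Program Files\Microsoft SQL Server\110\Tools\Binn> sqlcmd -S admin:MYSERVER\DEMO2012 -U sa
    Password:
    1> sp_who
    2> GO
    1> exit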
16. Database Objects: Tables:

So now we're going to talk a little bit in this section about the types of objects a database can contain. The most common ones are, of course, tables. There are also views, indexes, and stored procedures. For this section we'll be working with the AdventureWorks2012 database. This is a sample database you can download from Microsoft; just go to your favorite search engine and enter "AdventureWorks2012", all one word, and you'll get the link to download it. Notice that Microsoft provides several versions of this database, in several different formats. I recommend getting the backup version, as that allows you to load the database onto your server simply by doing a database restore, just like we did in Section 7.

The other thing to note is that every object in the database has an owner; by default, this is the owner of the schema the object belongs to. This has some security implications, and we'll be covering those in Section 12.

Once we've got the database loaded, we can open it up and see the objects in the database, broken down into Tables, Views, and Programmability, which further breaks down into stored procedures and other objects. Right now, we're going to look at tables. If we expand Tables, we can see all the tables in the database, named with the format SchemaName.TableName. In this case, dbo.DatabaseLog: dbo is the schema of the table, and DatabaseLog is the name of the table. Here we have HumanResources.Department: HumanResources is the schema, and Department is the name of the table. If you want to look at the contents of a table, you can right-click it and choose Select Top 1000 Rows. You can also find out some properties of a table by right-clicking it and choosing Properties. This gives you some basic general properties, like the schema it belongs to, which you can tell from the list here; Permissions, if you've granted any permissions on the table, such as certain logins being allowed to update it or read it; Storage; and Change Tracking, which is an option SQL Server has to track changes to the data (this tells you whether it's enabled or not). And if you want to see the objects associated with the table, you can click the little plus next to the table name in SSMS and expand it. Here you can see Columns (expand this and it shows you all the columns in the table), Keys (you can see the primary key, and any foreign keys you've defined would be listed here), constraints, triggers, and indexes, and we'll talk about these in a little bit.

An important concept in database design is the... you know, before I get there, let me take one quick aside. I told you that if you want to see what's in a table, you can right-click and choose Select Top 1000 Rows. Let's go ahead and do that. In this case, you can see on the right side of the screen that the top portion gives you the SQL statement used to return the data, and the bottom portion shows the results of that statement. In this case, there are only 16 rows in the table, so I only got 16 rows back, not the top 1000. This "top 1000" is a default set by SQL Server Management Studio, and it's there as a safeguard: if you had a table with millions and millions of records, you wouldn't want to bring all those back, because it would take a lot of memory and a lot of time, and chances are you're not going to need all that data in SQL Server Management Studio anyway. So this is kind of a performance option. You can actually change this number in the SSMS options if you want.

And just really quickly, some things I might not have touched on before. "16 rows" down here in the corner of SSMS gives you a count of how many rows were returned by your command. This object here, master, shows you which database you're actually in, which is up here. Notice that even though I'm looking at the AdventureWorks database, I'm actually executing this command in the context of the master database. However, when I specify the table to pull this data from, I'm giving a fully qualified name, which includes the database name, the schema name, and the table name. That's why, if I took the database name out and tried to run it, I'd get an error: there is no table named this in the master database. I can switch to AdventureWorks and then run it, and now it works. And notice down here: this shows the database the command executed in, this is the user you're logged in as, the number in parentheses is your SPID (your process ID within SQL Server), this is the name of the server you're connected to, and this time is how long the command took to run; in this case, less than a second.

OK, so getting back to the objects in the table. An important concept in database design is the concept of a primary key. A primary key is a field, or a combination of fields, that can uniquely identify each row in the table. An example would be an integer column with the identity option set; this simply means that the field contains an automatically incrementing value. If you know this value, you can uniquely identify a single row in the table. So, for example, if I say "give me the row that has DepartmentID 8", there's only one of those in this entire table, so I'm always going to get that one row back. SQL Server does not require that you create a primary key on each table, although in most cases it's a best practice to have one. However, if you do not define one, SQL Server will, behind the scenes, add a special column to the table that stores what is known as a uniquifier, and that will allow SQL Server to uniquely identify each row in the table. You'll never see this column when querying the table: if you look at the design, or just do a SELECT *, you'll never see it. But behind the scenes it's there, because SQL Server needs some way to uniquely identify each row if you don't give it one.
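As a quick illustration of that identity-style primary key pattern, here's a minimal sketch; the table name is made up so it doesn't collide with the real HumanResources.Department:

    -- DepartmentID increments automatically and uniquely identifies each row.
    CREATE TABLE HumanResources.Department2
    (
        DepartmentID INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
        Name         NVARCHAR(50)      NOT NULL
    );
    -- Fetch exactly one row by its primary key value.
    SELECT * FROM HumanResources.Department2 WHERE DepartmentID = 8;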
17. Database Objects: Views:

Views are very similar to tables; in fact, in basic SELECT statements, they can be used exactly like a table. You can think of a view as a virtual table whose columns and rows are defined by a SQL statement. That SQL statement can include joins between tables, so you can use a single view to present information contained in multiple tables. Views are used for three main purposes.

First, they can be used to simplify the database schema for the user. If you have a highly normalized database, you can present the user with a single table instead of several tables. For example, if we look at the HumanResources.vEmployee view, it looks like we're looking at data in one table. But if we actually look at the definition of this view, which we can do by right-clicking the view and scripting its CREATE statement to a new query window, we can see that the view is actually created by joining nine different tables. From the user's standpoint, all these joins are completely transparent, and it looks like they're just pulling from a single table.

Another use for a view is as a security mechanism. You can grant a user permission to a view without granting them permission to the underlying table or tables themselves. And because a view is really just a SELECT statement, you can prevent users from seeing data they aren't supposed to see. For example, if you have a table containing employee information, it might contain a column with Social Security numbers in it. Those might be needed for payroll processing; however, that information should not be viewable by everyone. So you can create a view that omits that column, and the users using that view won't see the Social Security numbers. For those who do need to see them, such as people who work in the HR or payroll department, you'd create a different view that includes that column, or just give those users access to the underlying table itself.

The third most common use for views is to provide backward compatibility for applications. Sometimes new versions of software change the database schema by adding or modifying tables, or by adding or removing columns from existing tables. This has the potential to break SQL statements that expect the old database schema. You can provide backward compatibility by creating a view that has the same name and columns as the old table, but instead pulls data from the new table schema.

So far, we've only been concerned with how views work with SELECT statements. However, UPDATE and DELETE statements can also be issued against a view, with some important caveats. Any modifications must reference columns from only one base table. The columns being modified cannot contain calculated or aggregated values, such as those calculated using SUM, AVG, or similar operators. And the columns being modified cannot be used in any GROUP BY, HAVING, or DISTINCT clauses in the view definition. The bottom line is that in order to modify data through a view, SQL Server must be able to unambiguously trace the modification back to a single table.
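Here's a minimal sketch of the security use just described; the table, view, and user names are made up for illustration:

    -- The base table holds sensitive data.
    CREATE TABLE dbo.Employee
    (
        EmployeeID INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
        Name       NVARCHAR(100) NOT NULL,
        Address    NVARCHAR(200) NOT NULL,
        SSN        CHAR(11)      NOT NULL
    );
    GO
    -- The view omits the Social Security number column.
    CREATE VIEW dbo.vEmployeePublic AS
        SELECT EmployeeID, Name, Address FROM dbo.Employee;
    GO
    -- Grant access to the view, not the underlying table.
    GRANT SELECT ON dbo.vEmployeePublic TO SomeUser;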
INSERT statements can also be performed against a view, provided any columns in the table that are not included in the view have default values defined in the underlying table. That way, when SQL Server goes to insert the row, it has a value to put in those columns that aren't in the view but are in the table.

It's important to note that views do not create a second copy of your data. The data is still only stored in the base tables that make up the view's definition. This is what we mean when we say a view is like a virtual table. There is an exception to this: beginning with SQL Server 2005, SQL Server supports indexes on views. Just like a table, a view can have only one clustered index and multiple nonclustered indexes. The data for the indexes on a view is stored on disk, so your data will be stored in two places: in the base table and in the clustered index on the view. I will point out that there are a whole bunch of limitations you have to meet before you can create any kind of index on a view, and a discussion of those is outside the scope of this course. Furthermore, we're getting a little ahead of ourselves, because we haven't even talked about indexes yet. Let's look at those in the next lecture.

18. Database Objects: Indexes:

Section 10, Module 2 is about indexes and views, mainly indexes. Indexes are very important objects: they allow SQL Server to quickly locate specific rows, or a single specific row, in a table. Think of an index on a table as being analogous to the index in the back of a book; it helps you look things up faster. There are two different types of indexes you can have: a clustered index and a nonclustered index. You can only have one clustered index per table. The clustered index defines how the data in the table is ordered on the disk, and the reason you can only have one is because you can only store the data on the disk in one particular order. A clustered index is not required for a table, but it's highly recommended, and if you don't have a clustered index on a table, that table is referred to as a heap.

The column or columns that make up the clustered index are called the clustered index keys. Clustered index keys should be narrow, unique, static, and ever-increasing. The key needs to be narrow, meaning a small data type, because the clustered index key is duplicated in every nonclustered index you have on the table, so you want to keep it small to save space. The clustered index key ought to be unique so it can uniquely identify a row; that will prevent SQL Server from having to add a uniquifier to each row, which I talked about in the last module. Every row in a clustered index must be unique so SQL Server can identify a particular row in the table. The third property, static, means the clustered index key should not be changing very much. Because the clustered index is how data is stored on the disk, if you're updating the clustered index key very often, that means SQL Server has to move that data around on the disk very often, and that's going to result in poor performance. So you want something that does not change much; that is, something static. And for the last one, you'd like your clustered index keys to be ever-increasing, so the value always goes up. That way, SQL Server will always add a new row of data to the end of the data page it stores the data on, which prevents SQL Server from having to move other rows around to make room to insert a row in the middle of a page. These are our best practices.
Okay, primary keys that we just talked about are often chosen as the clustering index, but they don't have to be goods or globally unique I ds. And that is the unique identify our data type in Sequel server make absolutely horrible. Clustered Index keys number one They're big. They're 16 bytes wide. The energy field only takes four whites and supports up to two billion numbers or four billion if you include negative numbers. So if you're gonna have more than two billion rows of data, you can actually go to the big data type, which takes eight bites that can handle to to the 63rd power Possible values. And that's still only half is big as a Google data type. So you're still saving tons of room. You combine this with sequel servers entered your property, and you can get an ever increasing narrow number, which is perfect to use as a clustering index. Let's go into this a little bit more detail, and this is something that, probably as a DB a might not be something you have a lot of control over. Usually it's developers that like to use fluids as custard keys, but this will at least give you some ammunition to go back to them and say, Hey, don't do this. It's gonna cause horrible for Former Chases John Sequel Server Um, And so I'm gonna go through and explain this to you. So it's looking the space saving this weekend obtained by using an integer value instead of a good let's assume we have a table that has one million records, one clustered index and to non clustered indexes. If we use a four fight imager as a cluster, key will use four megabytes of disk space, which is four megabytes or four bites for the energy field times. One million rose. If we were to use a do it, that's 16 bytes. Tens of 1,000,000 rose that 16 megabytes of data so using the energy data types A's US 12 megabytes in the clustered index. But but wait, that's not all. But also keep in mind, the Clustered index key is included in every non clustered index on the table. We have two additional non cluster indexes on the table. That'll save us an additional 24 megabytes in the non clustered indexes for a grand total of 36 megabytes. That's just for one single table. If you use a good instead of an imager for your clustering index key, that's additional 36 megabytes of pure overhead as sequel server has to manage back up defrag store and memory, etcetera. Now the goods might be useful as a primary key, especially if you're aggregating data from multiple databases or servers. But they don't have to be part of the clustered index Now. Over time, as data is updated inserted, deleted indexes become fragmented. This is similar to how files on a hard disk American fragment and just like that's bad for performance on your hard disk. Fragmented indexes are bad for performance on your sequel server, especially for queries that returned multiple rows of data and that perform scans of entire table or an index. This is because the index pages are not in order and the destroyed head must jump around to different sectors on the desk when reading the table quarries that returned a single row aren't so much affected by fragmentation because the discreet had has only one road to go to anyway. There are two types of fragmentation. Sequel, server, internal or page fragmentation and external or extent fragmentation. As I mentioned in passing the previous section sequel Server stores data, whether it's table data earned next day, that in an eight K page called an Extent a page or an extent is made up of individual rows of data. 
Internal fragmentation is the ratio of how much space is actually used on a page compared to how much space is available, so you generally want this number to be high. The terminology here is somewhat confusing: if someone told you your internal fragmentation was 100%, that sounds really bad, right? "I'm 100% fragmented! Oh no!" But actually, that's good in most cases (there are some exceptions, and I'll talk about those). For this reason, SQL Server Management Studio, when we go look at this value, refers to it as page fullness rather than internal fragmentation, which sounds more logical; it's more descriptive of what it really is. External fragmentation is when those 8 KB data pages are out of order, meaning that on the disk, page two does not immediately follow page one. This number you want to be low. You can also have a combination of both internal and external fragmentation.

So here's an example of a non-fragmented index. You can see all the pages are filled to capacity, so there's no internal fragmentation; or rather, your internal fragmentation, your page fullness, is 100%. The pages are also in order: pages 1, 2, 3, and 4, so there's no external fragmentation. Now here's an example of internal fragmentation: the data pages are still in order, but each page has some empty space on it. Here we can see external, or extent, fragmentation: the individual pages are full, but the pages themselves are out of order; you have page 1, then page 3, then page 2, then page 4. And finally, here we've got a combination of both: the data pages themselves are not completely full, and the pages are out of order.

So the question is, how can you tell if your indexes are fragmented? In SQL Server Management Studio, if you right-click on an index and choose Properties, you can go to the Fragmentation page. On really large tables, this page may take a while to populate; this table is pretty small, so it comes up right away. The page fullness value here is 70%; this is our internal, or page, fragmentation. The total fragmentation is actually our external, or extent, fragmentation, and here we see 27%. So the pages in this table are, on average, 70% full, and 27% of the data pages are out of order on the disk. There's also a system view you can use to see this, and I'll give you the query in the supplementary file for this module (a minimal version also appears at the end of this discussion). This query looks at all the tables in the database, and just taking the first one, the Address table: this index has 0% external fragmentation and 19% internal fragmentation, meaning its pages are only about 20% full. The one above it, you can see, is at 98% internal fragmentation, so those pages are very full.

Now, in some cases, internal fragmentation is not at all bad. Recall that this is a percentage telling you how full your pages are. If your table has many inserts happening, and your clustered index key is not a simple increasing value, you may want to leave some room on each page to accommodate those newly inserted rows. This prevents SQL Server from having to relocate some rows to a new page in order to fit a new record onto an existing page. That process is known as a page split, and it's generally an expensive operation for SQL Server because it involves moving lots of data around.
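As mentioned above, here's a minimal sketch of checking fragmentation with that system view; this version checks every index in the current database:

    -- Page fullness (internal) and logical fragmentation (external) per index.
    SELECT OBJECT_NAME(ips.object_id) AS table_name, i.name AS index_name,
           ips.avg_page_space_used_in_percent AS page_fullness,
           ips.avg_fragmentation_in_percent   AS external_fragmentation
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
    JOIN sys.indexes AS i
        ON i.object_id = ips.object_id AND i.index_id = ips.index_id;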
So if you set your internal fragmentation target to something like 80%, that will leave you some room on those pages. On the other hand, if your tables are in a reporting database, or you have a history table that's rarely if ever updated or added to, you may want your internal fragmentation to be close to 100%, so there's no wasted space at all in the table.

As I mentioned, the internal fragmentation value is also referred to as the page fullness or fill factor. You can set the target value for this at the server level and at the individual index level; this is the goal SQL Server tries to maintain for the data pages in the index. By default, SQL Server uses a default index fill factor of zero, which is the same as 100%. This means that, by default, SQL Server will always attempt to fill the data pages to 100% capacity. You can change this server-wide by right-clicking on the server name, choosing Properties, and going to the Database Settings page; here you set the default index fill factor. Some people like to set this to a value of 80, which means data pages will be 80% full, leaving 20% room for new records to be inserted. If you want to leave the server-wide setting at, say, 100%, but there's one particular index you want to change, you can do this by going to that particular index, choosing Properties, and going to the Options page, where you'll see Fill Factor; you can set it there. Note that if you change the value in either one of these places, it will not cause the index to be automatically updated to use that value. In order to get it to use the new value, you have to either reorganize or rebuild the index, which is a process I talk about in section 14.

One last note regarding fill factor: in addition to how the data is stored on disk, fill factor also affects how the data is stored in memory. SQL Server caches data in memory in units of pages. So if your table's index or data pages are only 20% full, then when those pages are cached into memory, the copy of each page in memory is also only 20% full; 80% of that RAM is not being used. That's obviously not a good use of all the expensive RAM you've got in your SQL Server. So choosing an appropriate fill factor is something of a balancing act between good disk performance and good memory utilization.

Because indexes help SQL Server access data faster, you may think it would be helpful to add lots and lots of indexes to tables. This really isn't a good thing to do. Although SQL Server 2008 and later let you create up to 999 nonclustered indexes on a table, that is almost never a good idea. Every time data is inserted, updated, or deleted in a table, SQL Server must update all the indexes on that table. If you have a ton of indexes on a table, that's going to slow down all your data modifications. Too few indexes slow down data access, and too many slow down data modification, so determining the correct number and types of indexes on a table is almost something of an art, and it requires careful analysis, not just of a single query but of the majority of the queries hitting that table. Don't just add indexes willy-nilly to tables, and never add an index for each and every column in a table.
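Two hedged sketches that go with the last two points, using hypothetical index and table names. Changing an index's fill factor only takes effect at the next rebuild, and the index usage DMV can help you spot indexes that cost more to maintain than they return (note the usage counters reset whenever SQL Server restarts):

-- Apply a new fill factor by rebuilding the index.
ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD WITH (FILLFACTOR = 80);

-- Find nonclustered indexes that are written to far more than they are read.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name AS index_name,
       s.user_seeks + s.user_scans + s.user_lookups AS reads,
       s.user_updates AS writes
FROM sys.dm_db_index_usage_stats AS s
INNER JOIN sys.indexes AS i
    ON i.object_id = s.object_id
   AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
  AND i.type_desc = 'NONCLUSTERED'
ORDER BY s.user_updates DESC;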
The next type of database object I want to talk about is a view. A view is simply a SELECT statement that presents one or more tables as a single table. This allows for simplification of queries, because a query can reference what appears to be a single table even though it may actually be a complex join of several tables. It's also a way to provide security: you can deny logins access to the actual tables but still grant them rights to the view itself. So suppose you have an employee table that contains employee names, addresses, and salaries. You probably don't want people to be able to view all employees' salaries, so you don't want to give them permission to look at the employee table directly. Instead, you can create a view that includes the name and address fields of the employee table, but not the salary column, and then you give users access to that view instead. That way they can see the employee names and addresses, but they don't see salary information.

To show you what I'm talking about with joining tables in a view, let's look at one of the views here in the AdventureWorks database: HumanResources.vEmployee. If we right-click the view and select the top 1000 rows, the data looks like it just came from a single table. In fact, here's our SELECT statement: it looks like we're pulling from a single table, and we get all these rows back. From the query's point of view, it looks like it's just coming from one table. However, this view is actually composed of several tables joined together, and you can see this in SQL Server Management Studio by asking it to script out the definition of the view to a new window: Script View As, CREATE To, New Query Editor Window. Here we can see the SQL command that creates this view. It takes all these fields and joins all these different tables together; it's actually joining nine tables. So whenever a query references this one view, HumanResources.vEmployee, under the covers SQL Server executes this big SELECT statement, joins these nine tables together, and presents the results to the client as if just a single table were being queried.

Something to note is that for views, the data is not actually stored in the database twice. SQL Server doesn't store one copy in the base tables and another copy in the view; once the view is created, the view is simply a SQL statement that SQL Server runs against the base tables whenever it's called. There is one exception to this: with SQL Server 2005, Microsoft gave us the ability to create indexes on views. There's a whole slew of restrictions and caveats that must be met in order to do that, but if those conditions are met, you actually can create a clustered index and nonclustered indexes on a view. In order to create a nonclustered index on a view, you must first create a clustered index on the view.
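As a hedged sketch with hypothetical table and column names (the real restriction list follows below), creating an indexed view looks like this. It also doubles as the security pattern from earlier, since the salary column is simply left out:

CREATE VIEW dbo.vEmployeeNoSalary
WITH SCHEMABINDING  -- required for an indexed view
AS
SELECT EmployeeID, FirstName, LastName  -- Salary column deliberately omitted
FROM dbo.Employee;  -- base tables must be schema-qualified
GO
-- The first index on a view must be a unique clustered index.
CREATE UNIQUE CLUSTERED INDEX IX_vEmployeeNoSalary
    ON dbo.vEmployeeNoSalary (EmployeeID);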
Some, but not all, of the restrictions on creating a clustered index on a view are: the view must only reference base tables in the same database as the view, so no cross-database SELECT statements are allowed; the view cannot reference another view; and the view cannot contain fields of the text, ntext, or image data types. Fields of the float data type may be included in the view, but they can't be part of the clustered index key. There are a whole bunch of other restrictions, and to find those you can look in SQL Server Books Online under creating indexed views.

When a clustered index is created on a view, SQL Server does make a second copy of the data and stores it as the clustered index on the view. This means SQL Server writes the clustered view index to disk, so using a clustered index on a view will incur some overhead for the maintenance of that index whenever data in the underlying base tables is modified. There's also no guarantee SQL Server will use the clustered view index when compiling queries that reference the view; the query optimizer will determine whether it is more efficient to access the view or the base tables directly when creating an execution plan for the query. In short, as with indexes on tables, it's best to perform some analysis to make sure the index will actually help your queries before you add it.

19. Database Objects: Stored Procedures And Other Objects:

The next type of database object I want to talk about is stored procedures, and those are found under the Programmability folder in the database tree. Stored procedures are similar to subroutines in other programming languages: they are pieces of SQL code that SQL Server has precompiled for faster execution. Stored procedures can take input parameters and can return rows and/or output parameters. Like views, they can be used to provide a level of security: to prevent users from accessing tables directly, you can deny users access to a table, then grant EXECUTE access on a stored procedure that accesses that table, and the users will still be able to run the stored procedure.

As a brief example of a stored procedure, let's look at the uspGetEmployeeManagers stored procedure in the AdventureWorks2012 database. We can see the definition of the stored procedure by right-clicking the stored procedure name and choosing Script Stored Procedure As, CREATE To, New Query Editor Window. You can see here that the stored procedure takes a single integer value as an input parameter, in this case @BusinessEntityID; this is the business entity ID we want more information about. After that, you can see the SELECT statement the stored procedure executes, and in the WHERE clause here you can see where our input parameter is used to filter the output of the selection.

To execute the procedure, you can simply type the name of the procedure, and if there are any input parameters, you give those as well; if there's more than one, you separate them with commas (in this case there's only one). If you execute this, either by hitting F5 or pressing the Execute button, it will run that SELECT statement and return information like you see here. One little quirk of SQL Server is that when you're executing stored procedures, if the stored procedure name is the first statement in a batch of statements, you can just type the stored procedure name and run it. If there is a statement before it, you cannot do this; you'll get an error, and you need to type the keyword EXEC before the stored procedure name. So, for example, if we had a USE statement in here first and we did not use EXEC, we'd get an error when we tried to execute, because now the stored procedure is the second statement of this batch, so we have to type EXEC to get it to run.
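Assuming the AdventureWorks2012 sample database, the quirk looks like this:

USE AdventureWorks2012;
GO
-- First statement in the batch: the bare name works.
dbo.uspGetEmployeeManagers @BusinessEntityID = 1;
GO
SET NOCOUNT ON;
-- No longer the first statement in the batch, so EXEC is required;
-- omitting it here would raise a syntax error.
EXEC dbo.uspGetEmployeeManagers @BusinessEntityID = 1;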
Now, you can use the GO keyword to separate your statements into batches. So now this becomes one batch of statements, and this is a second batch of statements, and here we can omit the EXEC keyword and it will still work correctly, because now the stored procedure is the first statement in a new batch. Usually, though, you want to get in the habit of just using EXEC whenever you run a stored procedure, because that will always work, and you don't have to worry about errors depending on whether it's the first statement or not. It's just good programming practice.

The other folders here I'll only talk about briefly, because this is not really a programming course; I'll just tell you what these are. Functions are so-called user-defined functions, to distinguish them from SQL Server's built-in functions. A user-defined function is a T-SQL routine that can accept parameters, do some work, and return the result as either a single value or a table. User-defined functions can also be created using a common language runtime (CLR) routine instead of T-SQL, so if you have programmers who are more comfortable writing in C# or .NET, they can create a function there and load it into SQL Server as a CLR routine.

Triggers are commands that fire off in response to some sort of action. Triggers are defined on a table or view, and they can be defined to fire when data is updated, deleted, or inserted. A trigger can be defined to execute after the original update, delete, or insert command completes, or it can run instead of the original update, delete, or insert command that fired it. The user that issued the command that caused the trigger to fire does not receive any notification that a trigger executed, unless, of course, the trigger was written to do so. Triggers are useful for logging changes to data. They're also somewhat insidious, simply because there's usually no indication that they fired, and this is especially true for INSTEAD OF triggers, which can cause data to be modified, or not modified, in ways the original statement did not specify.

For example, you might have an insert trigger that fires after a new row is inserted and logs that data into a separate logging table. So a user comes along and inserts a row of data, it goes into the main table, and that row of data is also written into a logging table; the user that issued the original insert command will never see that and will not know about it. Triggers are often used for logging like this. If you're deleting data, there might be a delete trigger on the table that logs the deleted data into a different table before it's actually deleted from the main table, sort of as a history table, or because you have audit requirements where you have to keep track of this information.

The other thing that is very frustrating at times is the use of INSTEAD OF triggers. You can have an INSTEAD OF trigger defined on a table for an update that says: if someone issues an update to this table, instead of doing the update, do something else. So what happens is someone comes in and issues an update, but instead of doing the update, SQL Server does something else; then they do a select from the table again, see their update didn't get put in, and they get really confused. These types of things cause lots of headaches when you're trying to troubleshoot. So if you see behavior like this, where it looks like SQL Server is not following the commands you're giving it, one place to look is triggers.
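For illustration only, here's a minimal sketch, on a hypothetical table, of the kind of INSTEAD OF trigger that produces this behavior:

CREATE TRIGGER dbo.trEmployee_SwallowUpdates
ON dbo.Employee
INSTEAD OF UPDATE
AS
BEGIN
    -- Nothing here writes to dbo.Employee, so the original UPDATE never
    -- happens, and the caller gets no warning that it was discarded.
    RETURN;
END;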
A database trigger is a trigger that fires off in response to an action that happens at the database level, such as dropping a table, creating a table, altering table definitions, or connecting to a database. These are also useful for logging purposes. In general, the fewer triggers that exist on tables or at the database level, the better the system performance.

I have seen people write triggers to send out an email when data is changed: you've got a salary table, and every time somebody changes somebody's salary, you want an email to go to the HR manager. This is a really, really bad idea. Sending email from SQL Server seems fairly quick to a user, but from a computer standpoint it takes a fair amount of time. So if you've got a bunch of people updating this table and every update sends an email, system performance is really going to suffer. Additionally, if there's something wrong with Database Mail, if it's not configured correctly or your mail server is down, a trigger like this can cause a problem: SQL Server can't get the mail out, the trigger is going to fail, and the whole update will probably roll back because it couldn't complete. If you're in a situation like this, where you want to send out an email when somebody changes something in a table, the best way to do it is to create a separate logging table, have the trigger write the update (possibly including the old and new values) into that logging table, and then have a job that SQL Server Agent runs, maybe every five minutes, every ten minutes, or every day, whatever your business needs are, to look at that logging table and, if there are any new records in there, send out the email. That's usually the way you want to go, just to keep SQL Server itself running as quickly as possible.
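A hedged sketch of that pattern, with hypothetical table and profile names (msdb.dbo.sp_send_dbmail is the real Database Mail procedure, but the mail profile must already be configured):

CREATE TABLE dbo.SalaryChangeLog (
    EmployeeID int NOT NULL,
    OldSalary  money NOT NULL,
    NewSalary  money NOT NULL,
    ChangedAt  datetime NOT NULL DEFAULT GETDATE()
);
GO
-- The trigger only writes to the log table; no email is sent here.
CREATE TRIGGER dbo.trSalary_LogChanges
ON dbo.Salary
AFTER UPDATE
AS
BEGIN
    INSERT INTO dbo.SalaryChangeLog (EmployeeID, OldSalary, NewSalary)
    SELECT d.EmployeeID, d.Salary, i.Salary
    FROM deleted AS d
    INNER JOIN inserted AS i ON i.EmployeeID = d.EmployeeID;
END;
GO
-- A SQL Server Agent job step, run every few minutes, sends the email instead:
IF EXISTS (SELECT 1 FROM dbo.SalaryChangeLog
           WHERE ChangedAt > DATEADD(MINUTE, -10, GETDATE()))
    EXEC msdb.dbo.sp_send_dbmail
         @profile_name = N'DBA Mail',
         @recipients   = N'hr.manager@example.com',
         @subject      = N'Salary changes detected';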
The next group of objects is assemblies. Beginning with SQL Server 2005, SQL Server provides support for assemblies, which are compiled, managed-code pieces written outside of SQL Server; an example would be a DLL written in .NET, Visual C#, or some other programming language. By registering the DLL as an assembly in SQL Server, T-SQL commands can access the functionality of the DLL. When you register the assembly in SQL Server, you need to tell SQL Server whether it needs access to resources outside of SQL Server, such as files or registry entries, etcetera. You can also mark an assembly as safe, which you should do if you don't know the source of the assembly; SQL Server in that case will give it fewer permissions, which makes things a little safer. While assemblies are useful in certain situations, they can cause poor performance and even possible memory issues if they're not written correctly, so use assemblies with caution.

Although SQL Server ships with many predefined data types, users also have the ability to create their own data types, and these can be found in the Types folder of the database.

Rules specify what acceptable values are permitted in a column of a table. Rules are deprecated and will be removed in a future version of SQL Server, so do not use them for new development. Rules have been replaced by check constraints on tables: now, when you define a table, you define a check constraint that tells it what type of data each column can have. Defaults are another deprecated feature that will be removed in a future version of SQL Server. These used to be used to specify default values for columns of tables; instead, defaults are now specified using the DEFAULT keyword with the CREATE TABLE statement. So now the actual table definition is where you put the default values, rather than in a separate item.

Plan guides let you change the execution plan of queries without changing the text of the underlying query or stored procedure. This is something of an advanced performance tuning option; as an involuntary DBA or junior DBA, you're probably not going to get into this.

Sequences are a new feature in SQL Server 2012 that provide a way to generate a sequence of numeric values according to some rule, such as adding one to the previous number, etcetera. This is useful when you're generating output that will be paged in some way, such as on a web page or report. So if you've got a web page that shows the first 10 rows of a query, with a Next button to show the next 10, sequences are a great way to implement that.
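A minimal sketch of the syntax, with a hypothetical name:

CREATE SEQUENCE dbo.RowNumbers
    AS int
    START WITH 1
    INCREMENT BY 1;
GO
SELECT NEXT VALUE FOR dbo.RowNumbers;  -- returns 1, then 2 on the next call, and so on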
There are a couple of other object types in databases that I'm not going to get into beyond describing them briefly. These object types are users, Service Broker endpoints, and the other various items shown here; these are more advanced features, so I'm just going to point out what they are rather than go into them in detail. Service Broker is a messaging system that is built into SQL Server, and it allows developers to write programs that can communicate between databases without having to create a messaging system of their own. Service Broker objects include endpoints, which are the transmitting and receiving objects for messages; the messages themselves; and message queues, which are lists of messages waiting to be delivered.

The Storage subfolder shows some additional storage options and objects used by SQL Server, such as full-text catalogs, which we mentioned briefly in a previous module, and partition functions and partition schemes, which are used to help manage very large tables. If you have a table that has millions of rows in it, you can tell SQL Server that all the rows with this column between value X and value Y should be stored in this data file on this drive, and everything else should be stored in a different data file on a different drive. This is usually done when you're looking to improve performance in some way. It also helps with backup and recovery time: you can do a partial recovery, which is an Enterprise-only feature, to bring parts of the database back online before everything else, to speed up recovery and get people into the database faster. But again, those are somewhat more advanced topics that we're not going to get into. The last folder is Security, which I'm going to talk about in the next section.

20. Security: Built-In Security Roles and Schemas:

SQL Server's security model lets us decide which users and processes can see what data and perform what tasks within SQL Server. As we first saw in section 4, when we connected to SQL Server, we can connect in one of two ways: either with Windows credentials or with a SQL Server login and password. SQL Server can be configured to accept only Windows credentials, or both Windows credentials and SQL Server logins; this last mode is known as mixed mode authentication. You can see which mode your server is operating in by looking at the Security page of the server properties configuration screen. If you right-click your database server, go to Properties, and look at the Security page, you can see I am operating here in SQL Server and Windows authentication mode.

Under the Login Auditing section, you tell SQL Server whether you want it to log no logins at all, only failed logins, only successful logins, or both. Whichever one you select, the logs will appear in both the SQL Server log and the Windows application event log, except, of course, if you choose no auditing, in which case nothing shows up.

Enable Server Proxy Account is a place where we can specify an account to use when the xp_cmdshell extended stored procedure is executed. This stored procedure allows the calling login to spawn a Windows command shell and issue commands directly to the operating system. This can be a security risk, and SQL Server ships with the xp_cmdshell procedure disabled by default. If you need to enable it, you can enter an account here with restricted privileges, and that account will be used when the Windows command shell is spawned. If the procedure (xp_cmdshell, that is) is enabled and no proxy account is entered, the Windows shell that is spawned runs under the context of whatever account the SQL Server engine is using: when you set up your SQL Server service, you specified an account to use, and that is the account that will be used when the Windows shell is spawned.

As for these last three options: the Common Criteria Compliance option enables certain elements within SQL Server to work with the Common Criteria security protocols, in order to meet the ISO 15408 security standards. This option is only available on Datacenter, Enterprise, and Developer editions of SQL Server. If for some reason you do need to implement this, know that there's also a script you must run to configure SQL Server to comply fully with the EAL4+ level of the specifications; you can find the script on Microsoft's SQL Server Common Criteria website. Enabling C2 Audit Tracing will log all attempts to access objects on the server; it writes this log to the default data directory of SQL Server. Note that this feature has been deprecated and will be removed in a future version of SQL Server; you should use the Common Criteria Compliance option instead.

The Cross Database Ownership Chaining option allows ownership chains to cross database boundaries. We haven't really talked about ownership chains, so what is an ownership chain? Every object in SQL Server has an owner; by default, this is the owner of the schema that the object belongs to. If you think back to the tables and views we saw in section 10, one of the views was called HumanResources.vEmployee. In this case the schema is HumanResources, and the owner of the schema is dbo. To show that to you here: if we go into AdventureWorks and look at the views, we can see the view HumanResources.vEmployee; HumanResources is the schema.
And then if we look at the properties of that schema, we can see the owner is dbo; dbo stands for database owner. So in this case, the owner of the HumanResources.vEmployee view is dbo. Now, if we have a stored procedure that references this view, and that stored procedure has the same owner, then when SQL Server checks permissions to see if a user has access to both objects, it will only check the first object. An ownership chain exists between the two objects: they have the same schema and owner, so SQL Server knows, hey, there's a chain here, I only have to check one of these.

Normally, an ownership chain can only exist between objects contained within the same database. However, if you enable this option, cross database ownership chaining, SQL Server will allow ownership chains to be created across databases. Turning the option on here enables it at the server level, so it applies to all databases on the server. You can also enable this at the database level on individual databases, which can be handy if you only want to allow chaining between two specific databases on the server and not the others. Note that the system databases master, msdb, and tempdb require cross database ownership chaining to be enabled, and it's set on for them by default.

As we've hinted at before, there are many security options that can be set both at the server level and at the database level. A good example is logins. In order for a user to connect to a database, that user must first have a login created at the server level. That login then must be mapped to a user at the database level in order to have access to that database. This allows you to let users connect to the server and to some databases but not others. You can also get more granular and assign security permissions at the database object level, such as on tables or views, but managing those tends to become something of an administrative nightmare, so that's not recommended if you can avoid it. If you're using Windows or mixed mode authentication, you can also create server-level and database-level logins using a Windows Active Directory group; any member of that group can connect to the server and will have the rights assigned to that group.

Logins can be disabled at the server level. Disabling a login will leave all the defined permissions for that login intact, but the user will not be able to connect to the server. This is handy for cases where, for example, you may have a SQL login that's being used in production, but you don't have a password for it. If you delete that login and later find out something was still using it, you would not be able to recreate it, because you don't know the password. By simply disabling the login, you can prevent that account from accessing the server, but if you need to re-enable it, you can do so without knowing the password. A disabled login is identified by a red downward-facing arrow in SQL Server. So if we look at Security at the server level, we can see I've only got my one login; these others are the NT SERVICE ones, which are default logins. Here's an account: if I go to Properties and go to Status, I can disable this login, and if I do a refresh, you see there's now a downward-facing arrow there. That means the login has been disabled, but all the permissions that were assigned to it, in all the databases and on the server, still exist; it just can't log in. If we go and enable it again and do a refresh, we'll see it becomes enabled again.
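The same disable and re-enable can be done in T-SQL, using a hypothetical login name:

ALTER LOGIN [OldAppLogin] DISABLE;  -- permissions stay intact; connections are refused
ALTER LOGIN [OldAppLogin] ENABLE;   -- no password needed to turn it back on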
I should note I'm logged in with my sa account, but I shouldn't be. Many best practice recommendations suggest disabling the sa account, because it's a well-known account that exists on every Microsoft SQL Server installation. Before you disable this account, be sure to create another account that has full sysadmin rights on the server, because if you don't, you're going to lock yourself out.

SQL Server comes with several predefined server roles, and I'm going to go through them here in order from the most powerful to the least. It may look like I'm jumping around as we move through this list; that's just because they're listed alphabetically, not in order of permissions.

Sysadmin is the highest level of security you can have in SQL Server. Members of this role can do anything on the server: they can drop databases, create logins, even shut down the server. Basically, it's the keys to the kingdom; they can do whatever they want. Serveradmin allows members to modify server-wide configuration settings as well as shut down the server. Securityadmin is a role where anyone in the role can manage logins and permissions for all users, both at the server level and at the database level, provided that person has access to the database. Note that members of this role can also assign logins to the sysadmin role, so you should consider this role as being equivalent to sysadmin.

Members of the processadmin role can end processes that are running on an instance of SQL Server. This means members of this role have the ability to kill queries running in SQL Server. This can sometimes be needed if someone accidentally or purposely issues a command that will take hours to execute and/or is preventing other queries from running, a situation called blocking. This is a role that is rarely used in practice; note that because members of this role can disconnect users from the server, it could be used maliciously in a denial-of-service attack.

Members of the setupadmin role can add and remove linked servers. A linked server is a connection to a secondary SQL Server, where you can issue commands against that SQL Server from the current SQL Server you're connected to. This is another role that's rarely used in practice; it's usually a DBA, who already has sysadmin rights, who sets up any linked servers.

Bulkadmin members can run the BULK INSERT command to load large amounts of data into a database. Microsoft created a separate role for this because it involves accessing data stored in files outside of SQL Server on the file system, which may require special privileges in the operating system. Note that in order to load data into a table using BULK INSERT, the user also needs INSERT rights on that table, or they have to be a member of the db_datawriter role, which we'll talk about shortly.

The diskadmin role allows logins to manage disk files, such as backup files. Members of the dbcreator role can create new databases on the server; as the creator of a database, they become the owner of that database, and they have full rights within it.

The public role is a role that every SQL Server login belongs to by default, and logins cannot be removed from this role. When a login tries to access a database or object that it has not been specifically denied or granted access to, it's the public role's permissions that are used for validation.
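For reference, adding a login to a fixed server role in T-SQL looks like this on SQL Server 2012 and later, with a hypothetical login name (older versions use sp_addsrvrolemember instead):

ALTER SERVER ROLE [sysadmin] ADD MEMBER [CONTOSO\SqlAdmins];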
Moving down to the database level, we have another set of predefined roles, and again I'll go through these from the most powerful to the least. db_owner is the most powerful role in a database. Members of this role can do anything within the database, including dropping, or deleting, the database itself; they can change database configurations and delete objects and data. This is the database equivalent of sysadmin; it's like being sysadmin within your own little kingdom.

db_securityadmin is a role that allows members to manage permissions at the database level, including adding users to other roles, similar to the server-level securityadmin role. This role should be considered equivalent to db_owner, because members of this role can add users to the db_owner role. Usually the DBA is the one who handles access to databases, so it's pretty rare that this role is used.

Members of the db_accessadmin role can determine who has access to the database; they can add and remove users. It's very similar to the previous db_securityadmin role, except that rather than being able to add users to other roles, the db_accessadmin role allows members to grant or deny access to the database. Again, this is something the DBA usually does, so you'll rarely see this role in use.

Members of the db_backupoperator role can create database backups. I should point out that this only applies to native SQL Server backups; most third-party backup programs require sysadmin-level permissions to perform backups, and backups are usually handled by a scheduled job anyway, so this is another role you'll rarely see used.

db_ddladmin members can execute any DDL, or data definition language, command within the database. This is a fancy way of saying that members of this role can create, drop, alter, truncate, or rename any object in the database: they can drop tables, create views, rename stored procedures, etcetera.

db_datawriter members can write data to user tables in the database: they can insert, update, and delete data. Similarly, db_datareader members can issue SELECT statements against user tables in the database. The last two roles, db_denydatareader and db_denydatawriter, do just the opposite: members of these roles are prevented from inserting, updating, or deleting data in user tables, in the case of db_denydatawriter, or prevented from reading data from user tables, in the case of db_denydatareader. And of course, the public role also exists in each database, and all users of the database belong to that role by default.

Database roles are used to grant permissions at the database level, and they apply to all objects in the database. You can also set permissions at the individual object level; for example, you can deny read access to a single table for a particular user. But as I mentioned before, that starts becoming an administrative nightmare if you go down that road, so try to avoid it if you can. In SQL Server, negative permissions always take precedence over positive ones. So if you have someone who belongs to the db_datareader role, which gives them permission to read from all user tables in the database, but that person has been assigned a DENY SELECT permission on a particular table, that person will not be able to see what's in that table: despite being in the db_datareader role, they've got a deny permission on that particular table, and the deny takes precedence.
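A minimal sketch of that deny-wins rule, with hypothetical names (ALTER ROLE ... ADD MEMBER is the SQL Server 2012 syntax; earlier versions use sp_addrolemember):

ALTER ROLE db_datareader ADD MEMBER [George];  -- George can read every user table...
DENY SELECT ON dbo.Salary TO [George];         -- ...except this one; the DENY wins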
Application roles are user-defined roles that can be used by applications that connect to the database. They allow you to assign permissions to applications, in a manner similar to how you assign permissions to user logins. In practical terms, what this does is move the authentication of the user from SQL Server to the application: it's up to the application to determine whether the user is allowed to connect to the database. The application role functionality is disabled by default in SQL Server. In addition, in order for application roles to work, assuming they've been enabled, the connecting application must provide the login and password to SQL Server. This poses a security risk, because it means the password must be stored on the client machines somewhere in order for the application to get it. In practice, it's pretty rare to see application roles in use.

The next section of the database Security folder is Schemas, and we've talked about these a little bit before. In SQL Server, schemas can be a somewhat difficult concept to grasp, especially if you're used to working with SQL Server 2000 or earlier, when there were no schemas. When an object such as a table was created, it used to be owned by the user that created it. For example, in SQL Server 2000, if the user George created a table called MyTable, it was owned by the George user. This also meant that if the George user account needed to be deleted, such as if George left the company, it could not be; excuse me, the login and user George could not be deleted by other users. That's because MyTable was owned by George, and it could only be deleted by a sysadmin or somebody with db_owner rights.

In SQL Server 2005, Microsoft gave us schemas, which simply separate users from the objects they create. Now, when an object is created, the owner of that object is the schema it belongs to. For example, if George owns a schema called GeorgeSchema and creates a table MyTable in it, the owner of MyTable will be GeorgeSchema. Now, if the user George has to be deleted, he can be deleted without affecting that table or any other tables the user George might have created. A schema can only be owned by one user, but one user can own multiple schemas.

Because there's now a separation between users and objects, we need a way to be sure every user in the database is mapped to a schema, so that any objects that user creates belong to a schema. This is done by creating a default schema for the user. Schemas also mean we have to have a more specific way to identify database objects: instead of just supplying the name, we can supply a schema and the name. This is done by referring to the object as schema.objectname. Note that because the schema is part of the name, we can potentially have what appear to be multiple objects in the database with the same name; in reality, they will have different schemas. For example, in the AdventureWorks database, we can see that the first few tables are owned by the dbo schema, the next couple are owned by HumanResources, a few more are owned by the Person schema, etcetera.
21. Security: Schema Demo:

Okay, let's go through a little demo of schemas and how you can have tables that are named the same but have different schemas. The first thing we're going to do is create a new login; I'm just going to call it "test," with SQL Server authentication, and we'll give it a password (we'll uncheck the password policy just for the time being). Here, in the database mappings, we're going to assign the AdventureWorks database, and we'll give it db_datareader, db_datawriter, and db_ddladmin; this allows us to read and write tables in the database and also to create new tables, which I'm going to do. For Default Schema, I could type something in here, but I'm going to leave it blank and show you what happens. If we leave it blank, and then come back and look at it, we can see SQL Server filled in a default schema of dbo. Now, if I go look in AdventureWorks under Security, Users, I can see SQL Server automatically created the user for me at the database level, because at the server level I told it this login has access to the AdventureWorks database; that's pretty nice.

So now I'm going to connect to the database using the login I just created, test, and I'm now connected; we can see over here that we're connected as the test account. We'll go to the AdventureWorks database, and if you look in it, you can see we have a table in the HumanResources schema called Department. I'm going to make a new table, also called Department, and I'm not going to specify a schema. Because I'm logged in as the user test, and the default schema for this login is dbo, you'll see when I create this table and do a refresh that there's now a new table here called dbo.Department; that's the table I just made. We still have HumanResources.Department, so we have two tables that are named the same, but because they're in different schemas, they can both exist in this database.

Now I'll insert a row of test data into the table I just made, and here I just want to show you how, if you do not specify a schema name, you'll get the table from the schema your login belongs to. If I run SELECT * FROM just Department, it returns a single row, and this is the data I just inserted, so it is pulling from my new table. If I actually specify the schema name with the table, it's now returning all the data that's in the other table, and that's because here I specified the schema. So if you do not specify a schema and just give the table name, SQL Server is going to use the table that belongs to your schema, or your default schema, which is usually dbo.
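For reference, here's roughly what that demo does, expressed in T-SQL instead of the UI (the password is a placeholder, and ALTER ROLE ... ADD MEMBER assumes SQL Server 2012 or later):

CREATE LOGIN [test] WITH PASSWORD = N'ChangeMe123!';  -- placeholder password
GO
USE AdventureWorks;
GO
CREATE USER [test] FOR LOGIN [test];  -- default schema defaults to dbo when left blank
ALTER ROLE db_datareader ADD MEMBER [test];
ALTER ROLE db_datawriter ADD MEMBER [test];
ALTER ROLE db_ddladmin   ADD MEMBER [test];
GO
-- Connected as test, an unqualified name resolves through the default schema:
CREATE TABLE Department (Name nvarchar(50) NOT NULL);  -- becomes dbo.Department
INSERT INTO Department (Name) VALUES (N'Test Row');
SELECT * FROM Department;                 -- reads dbo.Department (one row)
SELECT * FROM HumanResources.Department;  -- reads the original table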
22. Reducing Database Size With Data Compression:

Section 12: data compression. Data compression is a really awesome feature that was introduced in SQL Server 2008. This is not to be confused with backup compression: backup compression refers to the compression of the backup files SQL Server creates when you're backing up a database, while data compression refers to compressing the data stored within the database itself. This is, however, an Enterprise-only feature.

Data compression consists of two different compression types: row compression and page compression. Remember that SQL Server stores data in 8 KB pages. With row compression, SQL Server will attempt to compress each row of data before it writes it to disk; it will only look at a single row when determining how best to compress the data. Page compression, on the other hand, looks at all the data on the page when determining how to compress it. Page compression includes row compression, so row compression can actually be thought of as a subset of page compression.

While reducing the size of the database on disk is nice, data compression also provides one really neat feature: SQL Server caches compressed data in memory in compressed form. This means SQL Server can store more data in the same amount of memory. It also means less time spent reading and writing data to and from disk, because there's less data being read or written. Data compression works on both tables and indexes.

As with any good thing, there are some trade-offs with compression. You're trading disk space and memory space for computing power: whenever SQL Server has to work with compressed data, it must first decompress it, and that takes some work. However, most SQL Servers have plenty of CPU cycles available, and the bottleneck is usually at the disk I/O level, so enabling compression usually results in a performance increase. Also, because this is an Enterprise-only feature, backups of databases that use data compression can only be restored to another SQL Server running Enterprise Edition. This is important to keep in mind when you're planning your disaster recovery scenarios: if you need to bring up a database that has data compression enabled, you can only restore it to another SQL Server that's Enterprise Edition.

Not all data compresses well. Tables that store LOBs, or large binary objects, typically don't compress well. This is because internally, SQL Server stores the binary data not on the table's data pages but often in its own storage area, and as a result the compression algorithm does not examine it. Only user tables can be compressed; system tables cannot be compressed. Heaps, which are tables without clustered indexes, require special handling: if heaps are page compressed, new data pages that are added as more data goes into the table are not compressed until the heap is rebuilt; data bulk loaded into heaps is only page compressed under certain situations; and compressing a heap online is a single-threaded process. These are just a few of the reasons you want to avoid heaps if possible.

Lastly, compression is not inherited. This is probably the biggest complaint people have about data compression in SQL Server, and hopefully it will be changed in the future. What does this mean? Well, it means a couple of things. First, you set up data compression at the object level, not the database level, so you have to tell SQL Server to compress a particular table or index; you can't simply tell SQL Server that you want all tables within a database compressed. Second, indexes do not inherit the compression setting of the table they belong to. So if you compress the clustered index on a table, any nonclustered indexes on the table are not automatically compressed; you must compress those manually as well. And if any new tables are added to the database after you've completed the initial data compression process, they will not be compressed; if you have a database where tables are being added on a regular basis, you must occasionally scan for those new tables and manually compress them.
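A hedged sketch of the commands involved, with a hypothetical table name (both require Enterprise Edition):

-- Estimate the savings before committing to anything.
EXEC sp_estimate_data_compression_savings
     @schema_name      = N'dbo',
     @object_name      = N'BigTable',
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = N'PAGE';
GO
-- Compress the table (its clustered index or heap)...
ALTER TABLE dbo.BigTable REBUILD WITH (DATA_COMPRESSION = PAGE);
-- ...and remember the nonclustered indexes do not inherit the setting.
ALTER INDEX ALL ON dbo.BigTable REBUILD WITH (DATA_COMPRESSION = PAGE);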
Now, despite all those caveats, and a handful of others I haven't mentioned, data compression is really a great feature. I've used it on several large databases, and I've seen cases of up to a 75% reduction in database size and a corresponding increase in performance. Implementing data compression is a process that can take a while on large databases, and it generally consists of several steps. First, you compress the database. Then, because SQL Server does not automatically return the emptied database space back to the operating system, you'll usually want to shrink the database to reclaim some of that space. But as we've seen previously, shrinking the database causes fragmentation, so then you need to defragment the database. This whole process can take several days, depending on the size of your database and the speed of your disk drives. In section 14 I'll give you a routine that you can use to compress your databases; it runs for a set number of hours overnight, and you can run it over multiple days to get everything compressed.

23. High Availability Solutions: Clustering:

SQL Server provides a couple of different options for creating highly available databases. These solutions help keep your databases accessible in the event of a hardware or software failure. In this section I'll go over the different methods and the advantages and drawbacks of each type. Each of these high availability options is a complex topic in its own right, and each has its own distinctive vocabulary to refer to the various parts of the system. This section is meant simply to provide you with an idea of the capabilities and basic architecture of each method; it's not meant to be a comprehensive guide to configuring, troubleshooting, or managing these options. The five high availability options SQL Server offers are clustering, replication, log shipping, database mirroring, and AlwaysOn.

Clustering, also called failover clustering, is perhaps the oldest high availability solution, and the one most network administrators are probably familiar with. This setup comes in two flavors: active/passive and active/active. In an active/passive setup, there are typically two or more servers, called nodes, that share a set of network drives. Only one node is active at a time, meaning only one server is handling requests for data; the other server is a hot standby that will take over automatically if the first server dies for some reason. The switching over is handled by the Microsoft Cluster Service, which groups all the nodes into one cluster and is responsible for directing traffic to the currently active node. Applications access the cluster by using a virtual machine name rather than the name of a specific machine. When a failover occurs, the applications using the database do not notice anything has changed; they still use the virtual machine name in their connection requests, and the Microsoft Cluster Service routes those requests to the newly active node. In an active/active setup, there are still two or more nodes, but each node is running its own separate instance of SQL Server. If one node fails, that node's instance is switched over to run on the other active node, resulting in that node suddenly having to handle the load of two SQL Server instances. Because there's a possibility that two instances may end up running on one machine, it's important that the two SQL Server instances be configured to use their own sets of drive letters.
The benefits of an active/passive configuration are: it provides better performance than an active/active configuration, because if one node fails, you never end up with two instances of SQL Server running on a single machine; you have automatic failover in the event of a hardware failure; and updates can be done on the non-active node while users are still using the active node. The disadvantages of an active/passive configuration are: it's a complex setup, and the servers must be configured identically and have identical hardware; there's a high cost, because you have to pay for a second server that is rarely used, just sitting there idle as a standby; and finally, maintenance can be much more complex.

Now, the benefit of an active/active configuration is that there's no waste of money: all of the servers in the cluster are being used. The drawbacks are: if a failover occurs, you're more than likely going to suffer some performance degradation, because suddenly one server has to handle the load of two servers; one of the instances of SQL Server must be a named instance, and I have seen some applications that do not support named instances, so this can be a problem; and finally, this is not load balancing. A common misconception is that an active/active cluster balances the database load across multiple servers. This is incorrect; an active/active cluster simply means there are multiple distinct SQL Server instances running.

And finally, there are some disadvantages common to both types of clustering. Because clustering uses Microsoft Cluster Services, in order to be supported by Microsoft, the hardware must be on Microsoft's hardware compatibility list for clustered servers. Clustering is mainly available in Enterprise and Developer editions of SQL Server; it is available in Standard Edition, but with some restrictions. And finally, clustering works at the server level, not the database level.

24. High Availability Solutions: Replication:

Another high availability solution is called replication. In this configuration, a publisher and subscriber model is used: the primary server is referred to as the publisher, and the backup servers, of which there can be one or more, are referred to as subscribers. There are three types of replication: snapshot, merge, and transactional.

Snapshot replication is very similar to restoring a backup: basically, you take a snapshot of the complete database and send it out to the subscribers. The advantages are that it's very simple to configure and straightforward to maintain. The disadvantages are that for large databases, it can take a long time to push the snapshot out to the backup servers; network latency can be an issue; and data can only be changed at the publisher.

Merge replication allows data changes to be made at the subscribers as well as at the publisher. All the changes are merged into a single consistent copy of the database at all locations. The advantage of this is that work can be performed at any of the servers involved in replication. The disadvantages are: conflicts might occur when the same data has been changed in multiple places, and you'll need to have a procedure in place for determining how to resolve those conflicts; changes are tracked using triggers, which may reduce performance; and each row that has changed must be uniquely identifiable. This means every table that is published must have a column with the uniqueidentifier data type and the ROWGUIDCOL property.
If a suitable column is not present in a table, SQL Server will add that column to the table. There's also some latency as changes get sent out to the various servers, and each server must initially be configured with a database snapshot, which can take a while for large databases.

The last type is transactional replication. This provides the lowest latency of the three replication methods and is the method most people who use replication as a high availability solution choose. Instead of triggers, transactional replication tracks data changes by reading the transaction log. The log records are distributed to the subscriber servers either continuously, which provides the lowest latency, or on a periodic basis. The advantages of this method are that performance of the publisher is typically not affected: changes are tracked via the transaction log, so there's no trigger overhead. The disadvantages are: it's more complicated to set up (transactional replication uses publishers, subscribers, and a distributor, though the distributor can be the same machine as the publisher); it requires an initial snapshot to be set up; changes can only be made at the publisher; and transaction logs cannot be cleared until the changes have been propagated to all subscribers, so your transaction logs can grow out of control if replication fails for some reason.

All types of replication are available in all editions of SQL Server except Express. Replication is configured at the individual table level, and you can apply filters so only certain data is replicated. The subscriber copies of the databases are online and can be used for reporting or other functions, and with merge replication, of course, the databases can be updated at the subscribers as well. As for disadvantages: in my experience, replication tends to get out of sync fairly often, and re-syncing requires a new snapshot to be generated and pushed out to the subscribers, which can be time-consuming. And if the publisher fails, there's no automatic failover; you must manually reconfigure applications to point to the new backup server.

25. High Availability Solutions: Log Shipping:

Log shipping is another very commonly used high availability solution. Unlike replication, log shipping operates at the database level. Log shipping works by maintaining a standby database on another server. The standby database is created initially by performing a backup and restore while leaving the database in a recovering state. Then transaction log backups from the primary database are repeatedly applied to the standby database at a regular interval, which can be determined by the DBA based on the business's RPO. Transaction logs on the primary database are backed up, copied to the standby server, and restored on the standby server by scheduled jobs that run on both servers. A single database can be log shipped to multiple servers.

Among the advantages of log shipping: it's fairly easy to configure and pretty straightforward to manage. It is possible to protect against inadvertent data deletions or changes, because you control how long it takes before any changes are sent out to the standby database; if data was accidentally deleted, you can stop log shipping, bring the standby database online, and retrieve the unchanged version of the data. Log shipping is supported in all editions of SQL Server higher than Express. Some disadvantages: again, there's no automatic failover.
If the primary server crashes, the secondary server must be manually brought online by restoring the last unrestored log backups, and the clients must be redirected to point to the new server. Also, the secondary copy of the database is not available for use while log shipping is running.

26. High Availability Solutions: Mirroring:

Database mirroring is a newer technology that was introduced in SQL Server 2005. It is implemented at the database level. With database mirroring, there are two SQL Servers involved, and for disaster recovery reasons they're usually located in physically separate locations. Discussion of the servers involved in mirroring can become confusing, because the roles of the servers can switch. The SQL Server instance that accepts requests from clients and users and returns data is referred to as the principal server; the server that maintains a warm standby copy of the database is referred to as the mirror server. When a failover occurs, the old principal server becomes the new mirror server, and the old mirror server becomes the new principal server. In database mirroring, there can only be one principal server and one mirror server. In order to use mirroring, the mirrored databases must be using the full recovery model.

Mirroring can operate in two different modes: high-safety mode or high-performance mode. In high-safety mode, the two copies of the database are synchronized, and no transactions are reported back to the client application as committed or finalized until they have been committed on both servers. This means the data is guaranteed to have made it to the mirror database, but this may come at the cost of increased delay at the client. This is the safest mode, because the two copies of the database are always in sync, hence the name high-safety mode. It is also referred to as synchronous mode, again because the two database copies are always in sync. In high-performance mode, transactions are sent to the mirror database and immediately committed on the principal database; there is no waiting for confirmation from the mirror database that the transaction has committed there. This provides a faster response to the client, but at the cost of increased latency at the mirror database. If the mirror server is under a heavy load, or the principal server is sending transactions faster than the mirror can commit them, it is possible for the mirror copy of the database to fall slightly out of sync with the principal copy, because the principal database returns a committed confirmation to the client before the mirror catches up. This is called high-performance mode, and it is also referred to as asynchronous mode, because the two database copies may not always be in sync.

Similar to replication, mirroring works by sending its transactions to the mirror server. But where in replication the actual SQL commands are sent to the backup server and executed by that server against the backup copy of the database, with mirroring the actual transaction log records are sent and restored against the backup database copy. Think of the difference this way: if I run a command that updates one million records, with replication just the command is sent to the backup server, where that command is executed by that server; with mirroring, the transaction log records of the one million updates are sent to the backup server, where they're replayed against the backup copy of the database. Starting with SQL Server 2008, the log stream SQL Server uses for mirroring is compressed for faster transmission to the mirror server.
However, operations that modify large amounts of data, such as index maintenance commands, may still take a while to propagate to the mirror. Failing over from the principal to the mirror can be done manually or automatically; both of these options require mirroring to be operating in high safety mode. There is another failover mode, called forced failover, which can be done with mirroring running in high performance mode. However, a forced failover may introduce data loss, because, recall, the two databases may not always be in sync. For automatic failover in high safety mode, a third server is required. This server is referred to as the monitoring, or witness, server. The monitoring server does not store any data; rather, its sole job is to monitor both the principal and the mirror server to ensure they're still running. If the monitoring server detects that the principal server has gone down, it will automatically fail operations over to the old mirror server, which now becomes the new principal server. Database mirroring is supported by both Standard and Enterprise editions of SQL Server, with a few caveats. The servers involved in mirroring must be running the same edition of SQL Server. The high performance (asynchronous) mode is only available in Enterprise Edition. Witness servers, if they're used, can be any edition of SQL Server, including Express. So, the advantages of database mirroring: it's fairly easy to set up; there's a wizard that will walk you through everything, and SSMS provides some nice monitoring tools. Failing over can be automatic, or can be done manually with a single click of a mouse. There's no need to work with clusters. The mirror copy of the database is available for read-only operations. And if database corruption is detected at the principal database, the data can automatically be recovered from the mirror database. Some disadvantages: for applications to correctly get redirected to the mirror server after a failover, they need to use the SQL Native Client or SQL Native Client 10.0 ODBC driver, as these support defining a mirror server to connect to if a failover occurs. Logins must be kept in sync between the two servers. Databases cannot be failed over in groups, so if you have an application that uses multiple databases at once, you will need to devise a way to fail over all the databases if one fails over. And mirroring is available only in Enterprise and Standard editions, though the monitoring server can be any edition. That said, my experience with database mirroring has been very positive, and if you can live with these disadvantages, it's worth looking into. Unfortunately, database mirroring has been deprecated and will be removed in a future version of SQL Server. As a replacement you should use AlwaysOn, which is what I talk about in the next module.

27. High Availability Solutions: AlwaysOn

AlwaysOn is the latest high availability option from Microsoft and was introduced with SQL Server 2012. A simple way to think of it is clustering combined with database mirroring, all jacked up on steroids. AlwaysOn consists of two parts: AlwaysOn Failover Cluster Instances and AlwaysOn Availability Groups. An AlwaysOn failover cluster instance is very similar to the active/passive SQL cluster that we talked about earlier, and an AlwaysOn availability group is a set of one or more databases that can be failed over as a group from one node to another. There can be one primary copy of the database and up to four secondary copies.
However, each secondary copy must reside on a different node of the AlwaysOn failover cluster. The availability modes support both synchronous and asynchronous operation, just like the high safety and high performance modes of mirroring, and AlwaysOn supports both automatic and manual failover. Some advantages: all secondary replicas are available for read-only operations; groups of databases can be failed over together, unlike with database mirroring; and again, if data corruption is detected in the primary replica, the data can be automatically recovered from a secondary. The disadvantage is complexity. Like all things, the more options you have and the more powerful the tools get, the more complex things become. That said, there are wizards that will guide you through the setup process.

28. Suggested Maintenance Jobs: Backups

Just like you occasionally have to put gas in your car and change the oil, there are some things you need to do to your SQL Server in order to keep it running, and to keep it running smoothly. I'll go over some of these, give you some code to use to perform these tasks, and also walk you through using the database maintenance wizard to set some of them up. Before we get too deep into these, I'd like to make a suggestion: it's very useful to create a database for your own use on each of your servers. This gives you a place to store any custom tables or stored procedures you come up with. It's generally not a good idea to store user objects in system databases such as msdb or master, although some people do. If you have your own little workspace database on each server, you have a spot to store all your maintenance objects so they're out of everyone's way. Personally, I have a database on all my servers called DBAUtilities, and you can see I've made one here. The most important thing any DBA must do is make sure that database backups are being made, but a good DBA will go one step further and also make sure those backups are good. The old saying is that you never have a good backup until you have restored it. The best test is to restore a backup file to a test server. If you have a few databases, you can probably get by with restoring each backup to a test server on a regular basis as part of the background tasks you do. But this method won't scale; when you start to have a larger number of databases and a larger number of servers, it becomes impossible to do. In fact, this is what I wrote a chapter about in the book Tribal SQL: how to use a statistical sample to determine the minimum number of database backups you need to restore in order to say, with a certain confidence level, that all your backups are valid. But I'm getting ahead of myself. As I mentioned before, there are three types of backups: full, differential, and transaction log backups. Which you use, and the frequency of each, depends on your company's RPO, or recovery point objective (how much data you're willing to lose), and your company's RTO, or recovery time objective (how long you're willing to spend bringing the system back online). I'll walk you through using the database maintenance wizard to create these backups. Go into Management, then Maintenance Plans, right-click on that, and choose the Maintenance Plan Wizard. The first thing we'll do is have the wizard make a plan that does full backups. It asks for a name; we'll call this Full Backup. It's going to run under the SQL Server Agent account, because that's the account that will actually be doing the backups.
We're only going to have one task in this plan, and that's going to be backing up the databases. There's a single schedule for the entire plan. The name of the schedule defaults to the name of the plan, Full Backup; I like to change it to reflect when it runs, and I'm going to have these run Sunday at 1 a.m. So it's going to be a recurring schedule, running every Sunday at 1 a.m. Keep all the other defaults, then just click Next. Now it asks what we're going to do: we want to do a full backup of the databases. Next is the order we want the tasks to execute in; in this case there's only one task, so there's really no ordering to do here. And finally, we have to tell the task which databases we want to back up. You can back up only the system databases; all the user databases, which is everything excluding master, model, msdb, and tempdb; or only specific databases. In this case, because it's full backups, I'm going to select all databases. I'm also going to check this box to ignore databases where the state is not online. You can set a database to offline, and I often do this when we've moved a database to another server, or when we're trying to shut down a database, or shut down an application that we think uses a database but we're not sure if anything else uses it or not. We put the database in offline mode, and if stuff starts failing, rather than having to restore a copy of that database, all we have to do is one or two mouse clicks and the database is back online. So I run into situations fairly frequently where there might be databases on a server that are offline. If you do not check this check box, then when this job runs and hits a database that's offline, the job is going to fail and say the database is offline, I can't back it up. If you do check it, it'll just skip that database. We're going to back up to disk and create a backup file for every database. And this option is handy: create a subdirectory for each database. What you'll see when this job runs is this default path where the backups go, and then for the master database it will create a subdirectory \master\, and all the backup files for master will be in there; for msdb it'll be Backup\msdb, and all those files will be stored in there. You also always want to check the box to verify backup integrity. What that tells SQL Server to do is, after it's done writing out the backup file, to read it all back and make sure no errors crept in when it was writing that file to disk. That's just part of the process of making sure your backups are valid. And backup compression: if you're using Enterprise Edition, you can set backup compression; in this case we'll use the default server setting. I usually turn on backup compression at the server level, so in this case everything would be backed up compressed. Click Next. You can have it write a report to a text file, which is always handy, because sometimes when the job fails you might not know why, and the log gives you some place to go to check and find out why it failed. And with that, we're finished; it goes through and creates the plan. Now, if we want to look at what it actually did, you can see under Maintenance Plans it made a new entry here, Full Backup. If we double-click this, we can go in and actually see what the wizard did.
We can see it made a full backup of all databases. We can right-click, choose Edit, and that brings up the same screens we saw when we were setting this up; if we made a mistake in the wizard, we can actually change it here. You'll notice one of the things that's a little bit annoying about the wizard: it just calls this Subplan 1, which basically means this command here is a subplan. You're going to see, when we go into SQL Agent and look at our list of jobs, here's the Full Backup plan it made, and we can see it's named Full Backup.Subplan_1. I personally don't like that, but if you can live with it, it's no big deal. OK, looking at the steps, we can see the name of the first step, again Subplan 1. If we click Edit and look at it, we can see what it actually does: a maintenance plan is really an SSIS package, and the step runs that SSIS package. You can see the type of this step is SQL Server Integration Services Package, and this is the package name. We could look at other information on these tabs if we had to, but there's really nothing there you need to change. If you look at the schedule, we can see the name I put in for the schedule, and opening it up we see Sunday at 1 a.m., the schedule we defined. If we want to, and I recommend doing this, we can have the job write to the Windows Application event log when it fails. You can also set it up to send an email when the job fails. I don't have any operators set up to send email to, but if you did have those set up, they would appear here, and you can have that person get an email when the job fails. And then, if you actually want to see the job run, open the Job Activity Monitor, right-click the job, and choose Start Job; what this is doing is actually going through and backing up all our databases. We're running it manually here; if you want, just leave it on the schedule and it will run Sundays at 1 a.m. So this job creates the full backups. If you wanted to create differential backups or transaction log backups, you do basically the same thing, except when you're stepping through the wizard, on that second step where we specified full backup, you instead specify differential backup or transaction log backup. One thing you want to keep in mind is to have a sensible schedule. We've set this plan up to take full backups Sunday at 1 a.m. every week. What you might want to do then is have your differential backups taken Monday through Saturday at 1 a.m., and your transaction log backups taken every hour, or more or less often, depending on your needs. OK, so the backups have been done. If you want to verify that, bring up Windows Explorer, and in the backup directory we can see it's made one folder for each database. In the master folder we can see the backup; the backup is named with the database name, then _backup_, then the year, month, day, and time. Backups have been made for all of our databases. If you want to find the log file that the backup job wrote, in the log directory we can see the Full Backup Subplan log file, and we can see it logged all the commands: here is the actual T-SQL command that does the backup, and then this is the command that does the verify after the backup has been made. And that was for the master database; here are the ones for model, msdb, AdventureWorks, and our utility database.
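If you'd rather script it yourself, the commands the plan generates look roughly like the following. This is a minimal sketch: the database name, path, and file name are placeholders, and the COMPRESSION option only applies if your edition and settings support it.

    -- Full backup to a per-database subdirectory
    BACKUP DATABASE [AdventureWorks]
    TO DISK = N'D:\Backup\AdventureWorks\AdventureWorks_backup_20170101010000.bak'
    WITH CHECKSUM, COMPRESSION;

    -- The verify step reads the file back to confirm it is intact and restorable
    RESTORE VERIFYONLY
    FROM DISK = N'D:\Backup\AdventureWorks\AdventureWorks_backup_20170101010000.bak'
    WITH CHECKSUM;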
29. Suggested Maintenance Jobs: Advanced Backup Methods

So we've made a backup job that will take a full backup of all our databases. There's one thing to note: in order to make a differential or transaction log backup, you have to first make a full backup. If one has not been made, the differential or log backup will fail, and you'll get an error saying no current database backup exists. If you recall, this is because a differential backup is based on the previous full backup, and the same goes for transaction logs. So in order to take a differential or transaction log backup, you need to have already made a full backup to base those on. Normally this isn't a problem in day-to-day use. But if you have a user or users who create new databases on the fly, perhaps on test or development systems, your backups may start failing because they've created databases that you don't know about. Maybe they created a database on Monday, and your full backups only happen on Sunday; Monday night your differential backup is going to fail, because it will hit the new database and find there's no current full backup for it. Unfortunately, the Microsoft maintenance plan wizard does not provide a way to say, hey, I need a differential of this database but there's no full backup, so take a full backup instead. You can't do that. But there is a solution out there, written by a man named Ola Hallengren, who is a SQL Server DBA in Sweden. He has written several scripts that are really fantastic; his main ones are scripts for backups, integrity checks, and index maintenance. If you look at his backup script, one of the things it allows you to do is say: if I ask for a differential backup and there's no full backup, go ahead and make a full backup instead. I'm just going to point you to his site; the URL is here, and I will include a link to this website in the supplementary files for this module. But I'll just let you know: if you use this, you won't be using the database maintenance wizard anymore for your backups, you'll be using his script, and it will take care of all those types of little issues that can become real headaches once you start getting things all set up.
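To give you a feel for it, a call to his DatabaseBackup procedure looks something like this. Treat it as a sketch based on the documentation on his site, and check the current parameter list there before using it. The @ChangeBackupType parameter is the piece that solves the problem above: it converts a differential backup into a full one when no full backup exists yet.

    -- Differential backups of all user databases via Ola Hallengren's DatabaseBackup
    EXECUTE dbo.DatabaseBackup
        @Databases = 'USER_DATABASES',
        @Directory = N'D:\Backup',       -- destination root (placeholder path)
        @BackupType = 'DIFF',
        @ChangeBackupType = 'Y',         -- take a FULL instead if none exists yet
        @Verify = 'Y',
        @Compress = 'Y';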
30. Suggested Maintenance Jobs: Backup History Maintenance

The next maintenance job I want to show you how to create is maintenance for our backup history log. If you remember, in Section 8 I told you SQL Server creates a log record each time a database backup or restore is made, and unless you set up a purge process, these logs accumulate and increase the size of the msdb database. In Section 8 I showed you a way to perform a one-time deletion of that history; if you recall, that was done by right-clicking SQL Agent, choosing Properties, going to History, and selecting the option to remove agent history older than a certain amount of time. That does it one time, though; it's just a one-time deletion. If you want to set up something to do this on a regular basis, you can make a maintenance plan that will do it. So again we'll go through the maintenance plan wizard, and here we'll say Backup History Maintenance. Again, we're going to do this on a single schedule. I run mine weekly, and again we'll do this Sunday, at 4 a.m. (We're not going to delete everything Sundays at 4 a.m., but we'll delete some stuff.) The option here is called Clean Up History; you can see down here it tells you this task deletes historical data about backups and restores. We'll click Next, and again we only have one task, so we're not going to do any ordering here. Here we see what kinds of information we want it to delete. In addition to SQL Server logging all the backups and restores that are done, it also keeps logs about job history and maintenance plans, so all the new things we're making now are also going to create log entries. The default here says to delete anything in these three categories that is older than four weeks. If you want to keep more or less, you can adjust that as appropriate; you can choose days, hours, weeks, months, years, whatever you want. And again, write a report to a log file, and finish. It's made the plan: we can see the maintenance plan has appeared up here, there's a new Backup History job that has been made, and we see it's configured to run every Sunday at 4 a.m. Now, that creates the maintenance plan, but there is a way to do this on your own if you don't want to use a maintenance plan, and I'll show you how. Again, this code will be included in the supplementary material for this module. This is all the script is; it's very simple. We say USE msdb, because that's the database where the logs are stored, and here we set a variable for the date. As written in the supplementary script it deletes data older than two months; our maintenance plan deleted things older than four weeks, so here we set the oldest date to be four weeks prior to the current date. Then it just executes the stored procedure, passing in that date as the parameter. That's all it does.
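Here is essentially the whole script, using the four-week window from the maintenance plan above; msdb's sp_delete_backuphistory stored procedure does the actual work:

    -- Purge backup and restore history older than four weeks from msdb
    USE msdb;
    GO
    DECLARE @OldestDate datetime;
    SET @OldestDate = DATEADD(wk, -4, GETDATE());
    EXEC dbo.sp_delete_backuphistory @oldest_date = @OldestDate;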
If you want to set this up on your own manually as a job, it's very simple to do. Just right-click, choose New Job, and in this case we'll say History Maintenance; if you want to put in a description, you can. The first step is simply a Transact-SQL command run in the msdb database (we don't really need to set the database there, because we're setting it within the script itself, but it's nice to set it anyway), and the job will just run this command. If you want to make a log file when you're setting up a job manually, on the Advanced tab you can put in a path and name for where you want the log file stored. If you do not check the append box, it will overwrite that log file every time the job runs; if you do check it, it will keep appending to the log file, so the log file will grow. If you do that, be careful, because you can run out of disk space if you don't purge that log file periodically. The other thing you probably want to include is 'Include step output in history.' What that means: when I ran this command here, it included the output 'Command completed successfully.' That's what most of these maintenance jobs return; some of them will return more data, returned rows, or other information, anything that you would normally see in the Messages or Results pane in SQL Server Management Studio. If you check 'Include step output in history,' that will also get included in the log file, so you'll most likely want to check that as well. In this case I'm not actually specifying a log file. And then, again, the schedule: you can set up a new schedule, or, since we already have our Sunday 4 a.m. schedule set up, you can just pick that. Multiple jobs can share the same schedule. What will happen is that at 4 a.m. both of these jobs will kick off, which in this case is not a good thing, because both jobs do the same thing, but it shows how this works. And again, you can set up the notifications. So now, under jobs, this is the job our maintenance plan made, and this is the one we made by hand; they do exactly the same thing. OK, I'll delete the one we made. Again, this code will be in the supplementary material for this chapter.

31. Suggested Maintenance Jobs: Index And Statistics Maintenance

If you recall from some of our previous sections, SQL Server uses indexes, and over time those indexes can become fragmented, which will lead to poor performance. So you should really have some sort of index maintenance routine that defragments those indexes on a regular basis. When it comes to indexes, there are two options for defragmenting: you can reorganize the index or rebuild the index. Reorganizing an index is the less expensive of the two operations; it only defragments the leaf-level pages of the index, and because of this it's also the faster of the two options. Rebuilding an index actually drops the index and recreates it, so all pages of the index are rebuilt. As you might expect, this is more processor-intensive, and on large tables it can take a fair amount of time. And finally, SQL Server maintains statistics, not only on indexes but also on columns of table data. SQL Server automatically updates these statistics if you've configured it that way with the auto update statistics (and update statistics asynchronously) options that we talked about in the database options module. But in order for that automatic update process to be triggered, roughly 20% of the data has to have changed. If you have a large table with millions of rows, that 20% threshold might be too much change to reach before you start to notice performance degradation, so you may want to manually update the statistics. So let's look at how we can use a maintenance plan to take care of these tasks. We'll do the reorg first, so we'll call this one Index Reorg, and we'll run it nightly at 3 a.m. Now, one thing to consider when you're setting up these jobs: keep in mind when your backup jobs run. They can run at the same time, and things will just slow down, but you may want your index defrag to occur before the database is backed up, or you may want it to occur after. This gets tricky, because you can't really tell SQL Server to run a job for only a certain amount of time, and you don't really have a way to determine in advance how long your index rebuild or reorg is going to take. So if you want it to finish before the backup, it's kind of a trial-and-error process: run the job a couple of times, see that it normally takes, say, 30 minutes, and then set your backup job to run maybe an hour after this one, just to give it some leeway. OK, here I'll set that for 3 a.m. And actually, I'm not going to do this nightly; I'm going to do it Monday through Saturday for the reorg. In this case we choose Reorganize Index; remember, this is the quicker, less thorough option.
Now, when you go through here, again we have the option to select all databases, just the user databases, or specific ones. Note that when you choose specific databases, if I were to pick these two and leave the maintenance plan as is, then if someone adds a new database to the server, it's not going to get picked up here, because I specifically said to do only these two databases. If, however, you select all user databases, or all databases, then any new databases that get added will automatically be picked up by this job. I like to use all databases. Master and model typically aren't going to have any indexes that need to be rebuilt, because model is empty and the data in master doesn't change very much. However, your msdb database probably will have some items that can be updated, because, remember, that's where all the SQL Server logging goes, so data is written to it, usually on a daily basis; you probably want to include that as well. And again, there's the check box we talked about previously to ignore offline databases. If you did pick just one specific database, you can actually make a maintenance plan that defragments only one particular table. So if you have one table that you know is really heavily used and gets fragmented fairly often, you might want to defragment it more frequently than the other tables, and you can do that using the wizard as well. Again, 'compact large objects': you probably want to uncheck this, because what it does is try to reorganize your data, move all the empty space to the end of the file, and shrink things. That creates a lot of extra work on top of the reorg process, and you probably don't need to be doing it. And again, we can write a report to a file. Now we see we've made the new Index Reorg plan, and the job has been created. Next we can go through the same thing again, and this time we'll do Index Rebuild. Recall the rebuild is the more time-consuming, more thorough method of defragmenting an index. In this case we'll have it run Sunday at 5 a.m., because our reorg job runs Monday through Saturday, so the rebuild will run on Sunday. Then we choose Rebuild Index here, and we have the same options, plus a few more: the free space options. Remember, in the module where I talked about indexes, I said you can leave some free space on your index pages to accommodate new data that gets inserted or data that gets updated. On a rebuild, you can actually change the amount left free on each page. If you use the default, remember, we can set the default fill factor at the server level and at the database level. Or you can set a specific percentage, for example for that one table that you know needs more free space. If you have Enterprise Edition, you can choose to keep the index online while rebuilding. Normally, in Standard Edition and lower, when you're rebuilding an index it happens in offline mode: SQL Server drops the index and rebuilds it, which means that if you run this while people are using the system, that index will not be available to them, because it's offline, dropped, and being rebuilt. So any queries they might be running that would normally use that index will not be able to use it; it's simply not available.
Enterprise Edition, however, has a feature where it can rebuild indexes online. Set that way, if you're running this during normal business hours, the index can still be used while it's being rebuilt. Now, there are some exceptions to that, and that's what these next two radio buttons are for. Some indexes can't be rebuilt online; depending on the data types in them, it just can't be done, and they are offline-only operations. So this lets you tell SQL Server: for those indexes that can't be rebuilt online, either do nothing with them, or do them in offline mode. 'Sort results in tempdb' is a workspace setting: do you want SQL Server to use tempdb as a scratch pad, or the actual database it's working in? A lot of times you can just keep this in tempdb, and if you have tempdb on a separate set of drives, that can improve performance. Although, on a really busy system with a lot of tempdb contention, where you've got a lot of things using tempdb, this could actually slow it down. So again, it's a setting that depends on your environment. Click Next and we see this has made another maintenance plan, Index Rebuild, and there's the job and the rebuild maintenance plan. The last item we talked about here is statistics, so let's do that one now. This is something a lot of people don't know, or forget: when you rebuild indexes on a table, all the statistics for those indexes automatically get updated. I've seen some maintenance plans that rebuild an index and then go through and update the statistics on that index; that's duplicate work, and there's no need for it. So: our rebuild job runs Sundays at five, and we'll run the Update Statistics job Monday through Saturday, also at five. The reorg does not update statistics; only the rebuild does. So here we choose Update Statistics, and again we choose all databases. Now, there's an option here when you update statistics: you can specify whether to recalculate the statistics by looking at every single piece of data in the table, which is what a full scan is, or to look at only a sample, say 50% or 5% of the rows. Sampling is faster, but you're not going to get statistics that are as accurate. If you were doing this not through the maintenance wizard but through the T-SQL command, and you don't specify anything, by default SQL Server looks at only about 30% of the data. So here's something you need to keep in mind: when you rebuild an index, all the statistics on that index are updated using a full scan, because the rebuild looks at every single piece of data in that index. If you then have a job that goes through and updates statistics using only a 30% sampling rate, you're actually going to make your statistics worse: the index rebuild updated the statistics by looking at every single piece of data, but now you're coming along and updating them by looking at only about a third of the data. That's something to keep in mind, and I see this mistake made fairly often. So again, we're doing this Monday through Saturday with the sampled update, and then on Sundays the full rebuild gives us full-scan statistics. Again, write a report to a file, and it's done.
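For reference, the core T-SQL behind these three plans looks something like this for each table; the table name and fill factor are placeholders:

    -- Reorganize: the lighter option; compacts the leaf-level pages in place
    ALTER INDEX ALL ON Production.WorkOrder REORGANIZE;

    -- Rebuild: the thorough option; recreates every page of each index
    -- (ONLINE = ON requires Enterprise Edition)
    ALTER INDEX ALL ON Production.WorkOrder
    REBUILD WITH (FILLFACTOR = 90, ONLINE = ON);

    -- Manual statistics update; FULLSCAN reads every row, just like a rebuild does
    UPDATE STATISTICS Production.WorkOrder WITH FULLSCAN;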
Now, these index maintenance jobs are not the most optimal. They're better than nothing, which is why I'm showing them to you, but they make some assumptions, and really, they're not very intelligent. For example, Microsoft recommends that you reorganize indexes when the fragmentation level is between 5% and 30%, and rebuild the index when fragmentation is greater than 30%. When we were going through making these maintenance plans, did you notice we never put in any thresholds? We never said do this if it's 5% fragmented, or do that if it's 30% fragmented. The default maintenance plan just goes through and does everything: whether the table needs to be reindexed or not, it's going to do it, and it's not going to care. You can have an index that's 90% fragmented, but if it's Thursday, and that's a day the reorg job runs, it's only going to get reorganized, not rebuilt. Likewise, if you've got a bunch of indexes that are not fragmented at all, these maintenance plans don't care; they're going to rebuild or reorganize them anyway. This is a big waste of processing power, it takes a lot of extra time, and it can generate tons of transaction log entries, which can have a negative impact on disk usage, log shipping, mirroring, and all those other things that use the transaction log. So again, I'm showing you these maintenance plans because they're better than nothing, and if you have to get something set up right away, this is one way to do it. A more elegant way is, again, Ola Hallengren's scripts. His index and statistics maintenance script is very, very nice. You can schedule it to run every day, or whenever you want, and you can actually set thresholds: when to rebuild, when to reorg, whether to do it online or offline; you can tell it all of these things. Here are the different fragmentation levels you can set. It will handle not only rebuilding and reorganizing your indexes but updating statistics as well, and it's smart enough to know that if it has rebuilt an index, it doesn't need to update the statistics on it. It's very nice, and you can set all the different threshold levels. Again, the link is here, and I will include a link to this in the supplementary files for this section. But at the very least, use the maintenance plan wizard just to get some kind of maintenance going.
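As a sketch of what a call to his procedure looks like, with Microsoft's 5% and 30% thresholds plugged in (the parameter names come from his documentation; verify them against the current version before relying on this):

    -- Threshold-based index and statistics maintenance via Ola Hallengren's IndexOptimize
    EXECUTE dbo.IndexOptimize
        @Databases = 'USER_DATABASES',
        @FragmentationLevel1 = 5,        -- 5% to 30% fragmented: reorganize
        @FragmentationLevel2 = 30,       -- above 30% fragmented: rebuild
        @FragmentationMedium = 'INDEX_REORGANIZE',
        @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
        @UpdateStatistics = 'ALL',
        @OnlyModifiedStatistics = 'Y';   -- skip statistics that haven't changed

The fragmentation percentages themselves come from the sys.dm_db_index_physical_stats view, which you can also query directly if you want to see how fragmented your indexes are.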
32. Suggested Maintenance Jobs: Cycle Error Logs

One of the other types of maintenance you want to do involves cycling the error logs for both SQL Server and SQL Server Agent. If you recall, a couple of sections ago I showed you that both SQL Server and SQL Agent maintain error logs. Normally these logs are cycled when the server is restarted or rebooted, so if you have servers that very rarely get rebooted, you might want to have this run on a regular basis. Unfortunately, the maintenance plan wizard does not have an option to do this; it doesn't provide that functionality. So I've got some code (sketched below, after the Database Mail section) that I'll show you, and we'll just make a regular job out of it. The first command will cycle the main SQL Server log, and the second one will cycle the Agent log. Before running these, let's take a look: under Management, SQL Server Logs, you can see this is the SQL Server log, and it was created at 1:46 p.m. on the 27th. If you look at the SQL Agent log, you see it started on the 27th at 10:29 a.m. I'm going to execute both of these now, and the logs cycle. If I refresh, we can see the SQL Server log has been cycled, it now starts at the current time, and similarly, if we look at the SQL Agent log, it was also cycled at the same time. So this is useful. I like to schedule jobs to do this on the first of every month on servers that very rarely get rebooted. It just helps manage the size of the logs, and if something goes wrong, there's less data to scroll through to find what you're looking for. Again, I'll walk you through setting this up as a job. We'll call it Cycle Error Logs. The first step I called Step One, and again it's a T-SQL script pasted in; we don't really need to set the database drop-down, because the database is set within the script itself. This is a very simple command, so there's really no need for a log file either; it doesn't produce any output. My schedule is going to be monthly, on the first of every month, at, say, 3 a.m. And again I can set up my notifications, and there's my job. A very simple one.

33. Suggested Maintenance Jobs: Database Mail Maintenance

If you have enabled Database Mail on your servers, you should realize that SQL Server stores a copy of the email it sends, and it also keeps a log of every attempt it makes to send email. You'll want to purge these periodically, again to prevent your msdb database from growing. Now, you might have some sort of regulatory requirement to maintain copies of email that has been sent, so you might actually not want to do this; it depends on your situation. Again, this is one of those tasks the maintenance plan wizard does not have an option for, so I'll give you some code, and you can see it here. As set up, it will delete log records and mail items that are older than 60 days; if you want to use a different number of days, change that here. Basically it just runs two stored procedures: the first one, delete mail items, is the command that actually deletes the stored copies of the email that was sent out, and the last one, delete log, is the one that deletes the mail log. When we run this, nothing gets deleted, since I don't have mail set up on this machine. But again, I'll walk you through the job: let's call this DB Mail Maintenance. Again, the first step is the script, and there's the job. This is another one where there's really no output, so there's no need for a log file. We'll schedule it at the same time we do our cycle error logs, monthly on the 1st at 3 a.m., set up notifications, and now our job is set.
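Both of these little jobs boil down to a handful of stored procedure calls. Here is a sketch of each; the 60-day cutoff matches the script described above:

    -- Cycle the SQL Server error log and the SQL Server Agent error log
    EXEC master.dbo.sp_cycle_errorlog;
    EXEC msdb.dbo.sp_cycle_agent_errorlog;

    -- Purge stored Database Mail items and the mail log older than 60 days
    DECLARE @CutoffDate datetime;
    SET @CutoffDate = DATEADD(dd, -60, GETDATE());
    EXEC msdb.dbo.sysmail_delete_mailitems_sp @sent_before = @CutoffDate;
    EXEC msdb.dbo.sysmail_delete_log_sp @logged_before = @CutoffDate;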
34. Suggested Maintenance Jobs: DBCC CHECKDB

If you spend any amount of time as a DBA at all, you're going to come across a database that has corruption; it's just a fact of life. It's not a question of if, but of when, corruption happens. SQL Server provides a tool to check the validity of a database and see if there is any corruption in it, and that command is DBCC. There are a whole bunch of different parameters you can specify, but the command that will cause everything in the database to be checked is DBCC CHECKDB. There's also another clause you put on the end of that, WITH NO_INFOMSGS, so that only errors are reported. If you don't include that clause, all sorts of other informational messages are returned by the command, which is interesting, but if you're looking for errors it's just noise, and it makes it very difficult to find actual errors. Now, these commands can take a long time to run, and they can be processor-intensive. Since SQL Server 2005, DBCC commands have not caused blocking, meaning other activity can go on in the database while the DBCC command is running, but those commands may run slower, simply because DBCC is processor-intensive. So you may want to perform your DBCC checks after hours. An even better option is to combine them with your test backup restores: restore a copy of the database to a test server and then run DBCC against the restored copy on the test server. If any corruption is found, you know you need to address it on the production copy. You can also try your repairs on the test copy, to see if the repairs work, or whether the repairs damage any data when you make them. It's a good way to do it, because you've basically offloaded your DBCC testing from the production server onto another server. So let's go through the maintenance plan wizard and set up the DBCC check. We'll call it DBCC, and we'll run it, say, weekly, Sundays at 1:30 a.m. Check Database Integrity is the option here, and we'll do it on all databases. For this one you definitely want to include your system databases, because if the system databases get corrupted, you can be in a world of hurt, so if there's any corruption in there, you want to identify it as soon as possible. Also, this task can check either tables only, or tables and indexes; we want to include indexes, because we're going to check everything in the database. Then write a report to a text file; this, again, is one of those jobs where you do want a log file, because it provides useful information. And that's our job. Now I'll show you the difference between running without the NO_INFOMSGS clause and with it. If we just run the DBCC CHECKDB command straight, it performs all the checks and returns a whole bunch of information, as you can see here. We know right away there are no errors, because we've got a green check mark down here and it says it completed successfully; if there were errors, it would say the command returned errors. If we scroll down to the end, we can also see the final line: CHECKDB found 0 allocation errors and 0 consistency errors. So it ran fine. But look at all this other information it returned: there are 72,591 rows in the Production.WorkOrder table, in its 528 pages. Interesting, but if I'm looking for errors, none of this matters to me; I don't want to have to scan through all of it to see whether there was an error and where it occurred. If we include the WITH NO_INFOMSGS clause, then when we run this it still does all the same checks, but it only outputs any errors it might have found. In this case there were no errors, so it just says the command completed successfully. That's a nice little clause to put on your command; it will keep your log files small and make it easier to locate errors when you actually have some.
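So the command you'll want in your own scripts is simply this (the database name is whichever one you want to check):

    -- Check all allocation and consistency structures, reporting only errors
    DBCC CHECKDB (N'AdventureWorks') WITH NO_INFOMSGS;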
35. Suggested Maintenance Jobs: Data Compression Maintenance

I'm going to briefly mention one more job that you might want to implement. This job is related to data compression, so it may not apply to you: if you don't use data compression, or it's not available to you because you're not running Enterprise Edition, you probably won't need this job. If you have a database that you use compression on, and it gets new tables added to it on a regular basis, then, if you recall, I mentioned those tables will not get compressed by default, so you'll need some sort of process that runs and compresses those new tables. If that's the case, you can use this job and run it on a regular basis. Also, if you are implementing data compression for the first time in a database, you can run this job and it will do the initial compression of the database for you. I'm not going to go into detail about it here, but I did write about it in detail on my blog in two posts: one of them is on enabling data compression in SQL Server, and the other is an updated version. The script on the updated-version page is actually the one included with this course in the supplementary files. It's really nice; I've been running it with no problems. You can say: only run this job for a certain number of hours every night, and it will go through and compress your database for you, and look for any new tables that might need to be compressed.

36. Troubleshooting: Finding The SQL Server Version Number

In Section 15 I'm going to give you some basic troubleshooting tools. At some point, users are going to come to you with a problem, and you'll need to troubleshoot it. So I'm going to give you some very basic steps here to help get you started. If none of these options help you, or if you need additional assistance, I suggest checking online; the SQL Server community is very helpful. If you can't find a blog post through Google or your search engine about whatever issue you have, you can try sending a message out on Twitter using the #sqlhelp hashtag. You'll probably get a response within a few minutes; the SQL Twitter community is just awesome, and they will get back to you very, very quickly in most cases. So, the first thing: a lot of times you might not know what version of SQL Server you're running. One way to check is in SSMS, where you can see, next to the SQL Server name, the version number, here 11.0.3339. But what does that mean? Most times you're going to want to know the version and service pack level. In that case, a good way to find that out is to use the system variable @@VERSION, with the results set to come back as text. When we run this, it tells me I'm on SQL Server 2012 Service Pack 1, and there's the version number that we saw.
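Here are both checks in script form; SERVERPROPERTY gives you the individual pieces:

    -- The full version string, including service pack level
    SELECT @@VERSION;

    -- Or pull the individual properties
    SELECT SERVERPROPERTY('ProductVersion') AS build,
           SERVERPROPERTY('ProductLevel') AS service_pack,
           SERVERPROPERTY('Edition') AS edition;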
Now, one thing to note here: Microsoft puts out cumulative updates and security patches and all kinds of things, so there can actually be several different versions of Service Pack 1. So how do you find out exactly what you've got? Well, you can take this version number, and there's a great website (I will include the link in the supplementary files for this course): SQLSecurity.com maintains a list of all the version builds. We can see my version here, 11.0.3339: go to the 2012 section and look up 11.0.3339. So I am at SQL Server 2012 SP1 plus the patch that was released with this Knowledge Base article, which is a patch that fixes an install problem where some programs kept running. So, if you have a specific version number, this is how you can tell exactly what level your SQL Server is at.

37. Troubleshooting: Why Can't I Log In?

Another common request you might get is a user who comes to you and says: I can't log in, and I don't know what's wrong. When SQL Server rejects a login request, the user is given just a generic failure message. This is by design, to minimize the ability to hack into the server. However, sometimes a user might genuinely not know why their login was rejected. As the DBA, you can look into the SQL Server log files to determine exactly why that login failed. Let's demo this, and I'll show you exactly what I mean. I'll try to connect, and I will intentionally mistype the password. It comes up and says login failed; that's what I see as the user, and it's pretty generic. If I try some totally random login that doesn't even exist on the server, it still just says login failed. So now the user comes to us and says: hey, I can't log in, what's going on? We can find out exactly why that login was rejected by looking in the SQL Server log. Going into Management and opening the logs, we can see 'Login failed for user sa,' and the reason was the password did not match; that was the first login attempt I tried. My second attempt, where I just typed garbage for the name, says it could not find a login matching the name provided. So this lets you know exactly why that login didn't happen. Now, in SQL Server 2008 and later this is really nice: it actually comes out and tells you the reason. In prior versions of SQL Server they weren't so kind; they simply listed a failed login, so you'd see an error message like: error 18456, severity 14, state 8 (or state 5). If you're running on a SQL Server 2005 server, where you don't have this nice text presented to you, you only have this error message. How do you translate that? How do you know what it actually means? Again, there's a great website I'll give you the link for, on SQLServerCentral.com, and the article is called 'Getting the low down on 18456 errors.' The key to figuring out what the problem is, is the state number. Here we see state 8 for the password failure and state 5 for the non-matching login. Looking at the website, we can see state 8 is a password mismatch, and state 5 is an invalid user ID. So if you're on 2005, this is where you can go to find out what those mean. Note that prior to SQL Server 2005, a failed login would always give you a state of 1, which makes troubleshooting very difficult; you basically have no information at all. That's just something to keep in mind if you're on a version of SQL Server prior to 2005, and it's just another reason to upgrade; you want to get off that older version anyway.
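Rather than scrolling through the log viewer, you can also search the error log from a query window. The sp_readerrorlog procedure is undocumented but widely used, so treat this as a convenience sketch: the first parameter is the log file number (0 is the current log), the second is the log type (1 for SQL Server, 2 for Agent), and the third is a search string.

    -- Find failed login entries in the current SQL Server error log
    EXEC master.dbo.sp_readerrorlog 0, 1, N'Login failed';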
38. Troubleshooting: Why Is My Query Taking So Long?

One of the most common questions a DBA gets asked is: why is my query taking so long? Or: man, it seems like the database is running really slow today. This is one of those questions that's really frustrating to answer, because it lacks specific detail. I mean, how long is too long? As a DBA, you're given very little information to work with. There are, however, a couple of common issues that you can check on quickly, and I'll go over those now. The first issue is blocking. As we learned several sections ago, SQL Server is a relational database, and part of what that means is it will prevent two operations from modifying the same piece of data at the same time. SQL Server does this by using locks. A lock is an internal mechanism SQL Server uses to mark a record that is in the process of being changed. If a record has a lock on it, only the process that created the lock can change the data. Normally, locks are taken and released very quickly and there aren't any issues. Problems arise, however, either when SQL Server has to modify a large amount of data and it takes a long time to do, or when a developer has manually placed a lock on a table and has forgotten to remove it. We'll use this last situation to demonstrate how to troubleshoot blocking. I'm going to run some code here, and this code is included in the supplementary files for this section. I'm going to use the AdventureWorks database. There's already a table in this database called ErrorLog, and I'm going to insert some data into it. Now I'm going to create a blocking situation, by explicitly creating a lock on the table inside a transaction. I run this transaction and update the table, but notice I don't have a COMMIT TRAN statement at the end, which is what would tell SQL Server to finalize the change and release the lock. So in this case the lock is still being held. Next, I go to another window, which is another connection to SQL Server. My original connection here is SPID number 54; the other one is 55; two different connections. Now I try to run an update statement in the second window, and it just sits here, not doing anything, because the lock from the first transaction is still there. It will just sit here, blocked, because that transaction has never been committed. Now, there are a couple of ways you can see this. If you think you have blocking in place, you can go into Activity Monitor and look at Processes, and you'll see this column called Head Blocker. If there's a SPID that is causing blocking, there will be a 1 in that column, and we can see this is what's causing blocking: my SPID, 54, which we know is this one down here. And we can see my second query, SPID 55, which is still executing, is being blocked by it; this is the Blocked By column, and you can see 54 is the one blocking it. That's one way to see it. Another way: we can run this query, which looks at a system management view, and it will show us any blocking. You can see here, KEY: this is a key lock, because it's trying to update the primary key on the table. The blocked session is 55, the blocking session is 54, and that tells us which one is causing the blocking.
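The demo script isn't reproduced on screen here, but a minimal way to recreate it yourself looks like this. The table is AdventureWorks' dbo.ErrorLog, as in the demo; the blocking query uses sys.dm_exec_requests, which may not be the exact view the course script queries, but it returns the same blocked/blocker pairing:

    -- Session 1: take an exclusive lock and deliberately never commit
    BEGIN TRANSACTION;
    UPDATE dbo.ErrorLog SET ErrorMessage = 'blocking demo';
    -- no COMMIT here, so the lock stays held

    -- Session 2: this statement now sits and waits on session 1's lock
    UPDATE dbo.ErrorLog SET ErrorMessage = 'blocked';

    -- Any session: who is blocking whom?
    SELECT session_id, blocking_session_id, wait_type, wait_time
    FROM sys.dm_exec_requests
    WHERE blocking_session_id <> 0;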
The wait type, which we'll talk about in a little bit, shows it's basically waiting for an exclusive lock on this table: session 55 is waiting for the exclusive lock, and so far it has been waiting for 110 seconds. So then the question is: OK, I know this thing is blocking; what do I do? If it's actually you doing the blocking, like it is in this case, I could just cancel the command, making sure to cancel in the window holding the uncommitted transaction. What I'm actually going to do is just commit my open transaction; that writes the change, completes the transaction, and then the blocked statement completes too. The other thing you can do, if you don't have access to the window or the SPID that is causing the blocking, is use the KILL command. In that case we would say KILL 54: KILL, followed by the ID of the blocking session. If you do this, it kills that session; you'll see immediately that it stopped and the blocked statement went through, because there's no longer any blocking. And if I run my blocking query again, there's nothing there; everything ran. Now, note that if I go back to the first window and try to delete everything from this table, just to return it to the state it was in, I get a transport error. This happens because, remember, this window was my SPID 54, my connection, and that was the one I killed, so I no longer have that connection, which is why I get the transport error. I can just execute it again: you see it shows as disconnected, because I killed the connection, and when I execute again SQL Server automatically reconnects. In this case I got the same SPID number back; that's just a coincidence. But you can see it connected, and I've now deleted the demo rows from the table.

39. Troubleshooting: What Is SQL Server Waiting On?

You may not have any blocking going on, but users might still be complaining that the system is running slow. To help troubleshoot this situation, SQL Server can provide you with a list of all the time it has spent waiting and what it was waiting for. This is provided via the system view sys.dm_os_wait_stats. Note that the information presented in this view is not persisted: all the counters get reset and start over at zero when SQL Server is restarted, or when the command to manually reset them is issued. Now, the query I'm going to show you and use here was written by a gentleman named Glenn Berry. He's a SQL Server consultant and a SQL Server expert, and he writes a blog about SQL Server performance, which is where he posts these queries. I highly recommend the site; his blog can be found at sqlserverperformance.wordpress.com, and here's the link specifically for his diagnostic queries, which is what we're going to use. There is also a link in the supplementary files for this module. Looking at this query, we can see there are a bunch of wait types he filters out, because these are all system wait stats that reflect normal operation: SQL Server just spends time waiting on stuff; that's how it works. These are all things that rarely, if ever, are issues, so we can ignore them. So we'll run this; it can take a little while, and it comes back with our top wait stats. I don't have a whole lot going on in this system. These first few are actually some new wait stats introduced in SQL Server 2012: the HADR ones are related to the AlwaysOn high availability functionality, and DIRTY_PAGE_POLL relates to flushing dirty pages to disk. Here we can see, if you recall, the lock wait type we encountered in the blocking demo in the last module: we waited on it for 222 seconds, which represents 0.34% of all the wait time on the system. All the other waits you see are, again, normal system waits, so they account for a large percentage. Normally, the only other wait of interest we have here is the CXPACKET wait. I'll go over what this is in the next little section, but basically it's involved with parallelizing queries.
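A stripped-down version of that style of query looks like this. Glenn Berry's real script filters out a much longer list of benign waits, so the exclusion list below is illustrative, not complete:

    -- Top waits since the counters were last cleared, as a share of total wait time
    SELECT TOP (10)
           wait_type,
           wait_time_ms / 1000.0 AS wait_time_seconds,
           100.0 * wait_time_ms / SUM(wait_time_ms) OVER () AS pct_of_total
    FROM sys.dm_os_wait_stats
    WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                            N'REQUEST_FOR_DEADLOCK_SEARCH', N'SQLTRACE_BUFFER_FLUSH')
    ORDER BY wait_time_ms DESC;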
Dirty poll is flushing pages the disc. So here we can see. This is if you recall this was the lock type that we were encountering in the demo. In the last model, we can see, we waited for 222 seconds. This represents 0.34% of all the wait time on the system. So all the other weights you can see these are again. These are normal system weights, so they see a large percentage. That's normally only other wait time of interest. We have here is a six pack it way. This is. I'll go over what this is in the next little section here, but basically is involved with paralyzing queries. So some of the common weight types that you'll see in Sequel Server we have a sink network i o. This occurs when sequel server is right waiting to write to the network, either sitting dated to a client. Usually it's when you're sending data to a client, and a lot of times you see this win. The client has requested a whole bunch of data and storing it. Process it as sequel server sends it back, and it can't process it fast enough. So sequel servers waiting to send more data to plan the next. We have three lakh types thes occur when secret servers waiting to retain the lock on an object. The I U I X and asked s represent different types of locks a sink io completion and I o completion. You see, when sequel server is waiting for a disk io operation to complete Paige i o latch s age and page i o lat latch e x. I mean secret servers waiting to use a buffer that is currently performing in IO operation . If you see large values for either of these weight types, this may indicate a problem with your dis up system. Right log. You'll see this way with sequel Server is waiting for the transaction log to be written to disk backup. Bio is the weight type you'll see when a backup processes waiting for something, either for more data to back up or for buffer to write the data into six. Pack it. Wait is away type. You'll occur when sequel server has split a query into different pieces and has passed those pieces out to different CP usedto Work on this is a process known as parallelism, and what happens is usually some CPUs will finish their part of the task before the others have finished. And so sequel server is waiting. Then, for all the individual pieces to be done before it can join all those pieces back together . And so this weight type C X packet tells you how long Sequel server has been waiting for all those CP used to finish 40. Troubleshooting: Creating A Performance Baseline: one of the best things that D be a can do is set up some routines that periodically gather performance information and log it to a table for historical reference. This allows you to see how the performance of your sequel server changes over time. That way, if you have a user coming you and say, you know the secret server seems really slow today, you can look at some current statistics and compare those within the same statistics taking in the past and see if the server really is running slower. I'm not gonna go into the details of setting up a robust monitoring system, but I will point out there are two types of data you could be gathering. And if you set up your own little DB a utility database on each server like I talked about in Section 14 you'll have a good place to store this data, the first type of data to gather on the weight type stats that we looked at in the previous module. Now these statistics are cumulative since the last time sequel server was restarted or since the counters were manually reset. 
This way you can see how the counters increased over time for each one-week period. There's an excellent article about this, with details on how to set it up, over at SQLServerCentral.com; this is the article, here is the link, and I've also included it in the supplementary files for this module. The other type of data that you might want to collect is Windows Performance Monitor counters. When SQL Server is installed on a server, it adds a bunch of PerfMon counters to the system that can be used to monitor SQL Server performance. These counters include such things as the number of user connections, processor queue length, memory used, et cetera, and a whole host of other items. If you bring up Performance Monitor on your server, you'll see there are many SQL Server counters. If you have a named instance, the counters will be listed under MSSQL$InstanceName, and then there's this whole list of them. You can find a good one here, under Buffer Node: Database pages. So there are all these counters; you can add Page life expectancy. These are counters for the SQL Server engine itself. If you go down further, you'll see SQL Agent also has its own counters, so if you want to monitor SQL Agent specifically, you can use those. PerfMon is really handy for looking at how the system is performing at this moment, but you can also set up a routine to gather PerfMon data on a regular basis so you can track it over time. You can write this information out to a table and then use that table to generate some nice graphs. I have actually written a couple of blog posts about how to do this on my website, and again I'll include the links: there's a part one, a part two, and then a special part about performance counters on virtual machines, if you're running virtualized SQL Servers.

41. Conclusion

I'd like to thank you for purchasing this course. You've learned the basics of Microsoft SQL Server and some steps you can take to keep your servers running at peak performance. You've also learned basic troubleshooting steps to take if you encounter a problem. I'd like to mention that the SQL Server online community is very active and very helpful, so if you encounter any issues not addressed in this course, you can likely get help easily. The quickest method is to use Twitter: send a tweet using the #sqlhelp hashtag, and you'll likely get a response within minutes. Of course, Twitter doesn't lend itself to in-depth discussions; if you need more than 140 characters, I suggest posting your issue to the appropriate forum on SQLServerCentral.com. Other fantastic sources of information are the SQLskills.com website, BrentOzar.com, and, of course, my own blog over at shaunjstuart.com. Again, thank you for listening, and please remember to rate my course. Thanks.