This section introduces the Enterprise Manager as a means of defining a database and its constituent tables. The following procedures are covered in detail:
· Create a new database and set database parameters.
· Create a Table and constituent fields.
· Define additional properties on a field.
· Add default and check constraints to a field.
· Define a primary key.
· Define a foreign key and set up referential integrity constraints between tables.
· Create a user defined data type.
· Generate a SQL Script for the database objects.
The Enterprise Manager can be used to manage a number of SQL Servers throughout the enterprise. Most options are accessed by navigating the tree structure of the Enterprise Manager and right-clicking on the desired item.
Each installation of SQL Server has several system databases:
· MASTER contains many configuration details of the server including details of the schema for each database. It is important that this database is backed up whenever changes are made to the structure of any of the databases as it is very difficult to repair a system if the MASTER database is missing.
· The MODEL is used as the template to create a new database.
· MSDB is used by the SQL Agent for holding details of scheduled jobs created to do housekeeping tasks such as backing up databases.
· TEMPDB is used for temporary storage during everyday use of the database. This includes temporary tables created in stored procedures and other working tables.
If you are working with large amounts of data you will want to increase the size of the TEMPDB database.
SQL Server is usually installed on an NT Server machine which runs the SQL Server programs as a network service. The client workstation makes a request to the SQL Server service running on the NT Server, which performs the database retrieval before returning the required data to the workstation.
The database files are usually stored on the server machine and a database needs to be set up to hold the physical files for each database application. This task is often performed by the database administrator so that a new empty database is provided for the programmer to configure.
Each database has two physical files. One contains the data and the other contains the log. Data is written to the database and a record of each transaction is made in the log file. If the database file becomes corrupted, it can be restored from a backup and the transaction log rolled forward to restore the database.
It is recommended that the Database and the Log are stored on separate physical devices. This allows the database to be recreated from a backup and the transaction log if the physical disk containing the database files is corrupted.
Create a New Database
Provide an appropriate initial size for the database and preferably define a maximum size. Allow the database to grow by a reasonable amount each time so the system is not constantly resizing the database. Place the transaction log on a separate physical disk drive if possible. The log is usually 10-15% of the database size.
Create a new database
The CREATE DATABASE command can also be used to create a new database.
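A sketch of the command is shown below; the database name, file paths, sizes and growth increments are illustrative assumptions only:

```sql
-- Create a database with an explicit data file and log file.
-- Paths and sizes here are examples, not recommendations.
CREATE DATABASE Sales
ON PRIMARY
( NAME = Sales_dat,
  FILENAME = 'C:\MSSQL\DATA\Sales.mdf',
  SIZE = 50MB,
  MAXSIZE = 200MB,
  FILEGROWTH = 10MB )
LOG ON
( NAME = Sales_log,
  FILENAME = 'D:\MSSQL\LOG\Sales.ldf',  -- separate physical drive if possible
  SIZE = 10MB,
  MAXSIZE = 50MB,
  FILEGROWTH = 5MB )
```

Setting MAXSIZE and FILEGROWTH follows the sizing advice above: the database can grow in reasonable steps without growing unchecked.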
It is advisable to backup the MASTER database each time changes are made to any database on the server.
Tables are created by expanding the tree view for the database and right clicking on the TABLE tree to select the NEW TABLE option.
Create a Table with SQL Enterprise Manager
SQL Server has predefined data types, one of which must be used for each field:
· Integer: int, smallint, tinyint, bigint.
· Exact numeric: decimal, numeric.
· Approximate numeric: float, real.
· Money: money, smallmoney.
· Character data: char(n), varchar(n).
· Binary data: binary(n), varbinary(n).
· Date and time: datetime, smalldatetime.
· Text and image: text, image.
· Other: bit, timestamp, sql_variant, user defined.
· Unicode: nchar(n), nvarchar(n), ntext.
· Identifier: uniqueidentifier.
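A hypothetical table definition showing a selection of these data types in use (the table and field names are illustrative only):

```sql
-- Sketch of a table using several of the predefined data types.
CREATE TABLE product
(
    id           int          NOT NULL,  -- integer
    name         varchar(50)  NOT NULL,  -- variable-length character
    code         char(8)      NOT NULL,  -- fixed-length character
    unit_price   money        NOT NULL,  -- currency
    weight_kg    decimal(8,3) NULL,      -- exact numeric
    discontinued bit          NOT NULL,  -- flag
    modified     datetime     NOT NULL   -- date and time
)
```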
User defined data types created in the MODEL database are copied into each new database when it is created, adding to the predefined data types supplied by SQL Server.
Selection of the correct data type is very important and requirements will vary according to the business requirements of the application:
· A market research database may have very large amounts of data and numeric field types should be selected with a view to the storage requirements. A SMALLINT integer field will need less space than a FLOAT field for example. Be careful when using TINYINT however as the maximum allowable value is 255.
· Numeric accuracy is important in financial applications and allowance should be given for the requirement to store fractions as decimals particularly for stock market applications. Some fractions may require many decimal places to be stored accurately. DECIMAL is often the most precise data type for numerical values.
· The MONEY data type is normally the best choice for storing currency values.
· The VARCHAR data type allows character data of variable length to be stored at the cost of an extra byte for each value, used to record the length of the data. VARCHAR is not appropriate for very short fields or for fields where the width is relatively constant throughout the table.
· TEXT and IMAGE fields store data in 2KB chunks by default and can hold up to 2GB per value. It is recommended that null values are permitted if there are a large number of empty values in the table. In some applications it may be more efficient to store the data as external files and use network protocols to access them.
· Unicode data types use two bytes for each character and allow a variety of international characters to be stored.
Properties can be defined against each field. It is recommended that the ALLOW NULLS option is deactivated for each field to prevent problems when inserting records. A default value for each field should be defined in the properties for the field as shown below.
Other options allow the precise format of numeric fields to be set and unique identifier functions to be set on candidate primary keys for the table.
FORMULA is used to set up a calculated field and COLLATION is used to set a specific sort order on the field. These options are rarely used.
Tables are referred to with a four part qualifier:
SELECT * FROM servername.database.owner.table
The qualifiers can normally be omitted depending on the context. The following code sets the context to the PUBS database so that the database need not be qualified:
USE pubs
SELECT * FROM authors
Selecting information from a different database requires that at least the database is qualified:
SELECT * FROM pubs..authors
Tables can be created by various database users and ownership is recorded for the table. This could lead to a situation where there are two tables with the same name but with different owners. The owner qualifier would need to be specified each time the table is referenced and this could lead to considerable confusion.
A system stored procedure can be used to change the ownership of an object as follows:
EXECUTE sp_changeobjectowner 'authors', 'dbo'
It is recommended that all objects in a database are owned by the database owner: dbo.
New fields are added by entering the new field definition at the bottom of the window. The name of existing fields can be changed and their widths altered by changing the appropriate values. Be careful when shrinking the field size as data may be lost.
Defaults can be specified for a field by entering a constant in the column for defaults against the field. These are constant values specified for the individual field only.
Pressing the Save Table button saves any changes to the table. Definitions of keys and other table settings can be displayed with the Advanced Features option later in this chapter.
Relational database theory differentiates between a null value for a field and a zero value. SQL Server can determine whether a value has never been entered against a numeric value (a null) or whether a zero has been entered by the user.
Null values are important in relational theory but can cause problems in application development if not used correctly. Arithmetic and Boolean operation on Null values can yield unexpected results.
Specifying that nulls are not allowed will cause an insert to fail if it attempts to add a record without a value for those fields. This is useful for forcing entry of numeric values, foreign key or lookup fields, and status flags.
Null fields often cause confusion and it is a good idea to specify fields not to allow nulls and to specify a default value.
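A minimal sketch of the effect of NOT NULL fields (the table and field names are hypothetical):

```sql
-- The status field does not allow nulls.
CREATE TABLE task
(
    id     int         NOT NULL,
    title  varchar(50) NOT NULL,
    status char(1)     NOT NULL
)

-- Fails: no value supplied for status.
INSERT INTO task (id, title) VALUES (1, 'Write report')

-- Succeeds: every NOT NULL field has a value.
INSERT INTO task (id, title, status) VALUES (2, 'Write report', 'N')
```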
Defaults may be defined against a field to automatically enter a value when a new record is inserted into the table if the application has not entered a value.
Defaults can be useful with fields that are defined as NOT NULL as SQL Server will supply a value for the field if not specified by the application.
Defaults may be set by typing a value in against the DEFAULT field property when defining a field with the DESIGN TABLE window. The default must be specified as a constant value.
Specifying a Default Value
Defining a default value with the DESIGN TABLE window will automatically create a default constraint on the table. A field default constraint can also be added to (or dropped from) a table with a SQL data definition command as follows:
ALTER TABLE [authors] WITH NOCHECK ADD
CONSTRAINT [DF_authors_state] DEFAULT ('CA') FOR [state]
Check Constraints allow the definition of a simple piece of logic to check the values to be entered in a field. The AUTHORS table could be altered to prevent further entry of any authors from Texas by setting a constraint which prevented the state field from being set to TX.
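The Texas example might be written in SQL as follows (the constraint name is an illustrative assumption):

```sql
-- Reject any new or modified row where state is 'TX'.
ALTER TABLE authors
ADD CONSTRAINT CK_authors_state CHECK (state <> 'TX')
```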
Check constraints may be entered with the SQL Enterprise Manager by selecting the appropriate Table object and right-clicking to edit the table. Pressing the ADVANCED FEATURES push button will bring up a page which allows various constraints to be set. The Check Constraints page allows for simple checks to be made on the data.
Specifying a Check Constraint for a Table
SQL Server will not allow a value to be entered or modified that conflicts with the check constraint and will generate an appropriate error message if an attempt is made to violate the constraint:
Error Generated by SQL Server when the Constraint is violated
Constraints are new with SQL Server 6.0 and replace the previous notion of defining Defaults that are bound to fields. They are compatible with the latest SQL ALTER TABLE syntax.
ALTER TABLE authors
ADD CONSTRAINT CK_authors_city CHECK( city<>'gotham')
Primary keys can be created easily from the Table Design window by selecting the required field and pressing the SET PRIMARY KEY button, represented by the key shape at the top of the Table Design window. Multiple fields may be selected to create a composite primary key.
Set the Primary Key in the Table Design Window
SQL Server automatically defines the required primary key index. The penultimate button on the right of the Table Designer toolbar allows the user to MANAGE INDEXES/KEYS.. and can be used to refine the primary key definition.
Fields that allow Nulls may not be specified as Primary Keys.
Field properties can be used to define Integer fields as an Identity column which is automatically incremented each time a new record is added to create a unique value suitable as a primary key on a table. An initial seed value and an increment can also be set.
Specifying Identity Column Properties
Identity columns are typically used as primary keys and have the advantage of being small in size and therefore fast for SQL Server to use when joining tables.
Identity columns also provide a degree of data independence and can be a better design option than alternate candidate keys. An employee table, for example, has both the Staff Identifier and the National Insurance Number of the person as candidate keys. However the person cannot be added into the table until a Staff Identifier has been allocated causing problems in entering data before the person actually starts work. Similarly a National Insurance Number may not be immediately available or may change, forcing key values to be cascaded down any dependent tables. An independent identity column is a better primary key than either of the obvious candidate keys.
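The employee example above might be sketched as follows (the field names are illustrative):

```sql
-- id is seeded at 1 and incremented by 1 for each new record,
-- independent of either candidate key.
CREATE TABLE person
(
    id        int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    surname   varchar(30)       NOT NULL,
    staff_id  char(6)           NULL,  -- may not yet be allocated
    ni_number char(9)           NULL   -- may not be known, and may change
)
```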
There are some useful system functions for obtaining information about identity columns.
· The @@IDENTITY system variable will indicate the value of the primary key of the previously inserted record.
· IDENT_CURRENT('person') will return the latest identity column value for the person table. Similarly IDENT_SEED and IDENT_INCR will return the seed and increment for the required table.
· DBCC CHECKIDENT('person') will check that the values in the identity column are correctly defined for the table.
· SET IDENTITY_INSERT person ON is required before inserting records where the value of the identity column is already known; set it OFF again afterwards.
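The first two functions might be used as follows, assuming a person table with an identity column:

```sql
-- Insert a record and retrieve the identity value just generated.
INSERT INTO person (surname) VALUES ('Smith')
SELECT @@IDENTITY               -- identity value of the record just inserted
SELECT IDENT_CURRENT('person')  -- latest identity value for the person table
```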
Only one identity column is allowed for each table and can be specified in a select statement as follows:
SELECT IDENTITYCOL, surname FROM person
There are some problems with identity column keys, particularly if a table is distributed over several servers and needs to be replicated. The UNIQUEIDENTIFIER data type has similar properties but is a 16 byte globally unique identifier that can be generated automatically by setting the default value of the field to NEWID() in the field properties.
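A sketch of a table keyed on a UNIQUEIDENTIFIER (the names are illustrative):

```sql
-- NEWID() generates a globally unique value for each new record.
-- ROWGUIDCOL marks the field so it can be referenced by that keyword.
CREATE TABLE person
(
    id      uniqueidentifier ROWGUIDCOL NOT NULL DEFAULT NEWID() PRIMARY KEY,
    surname varchar(30)      NOT NULL
)
```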
Unique Identifiers can be represented with the field name or the ROWGUIDCOL keyword in a SELECT command:
SELECT ROWGUIDCOL, surname FROM person
UNIQUEIDENTIFIER fields are larger than integer fields (16 bytes rather than 4) and care should be taken when using them on very large tables.
An index and primary key constraint is automatically created when a primary key is defined using the Table Designer. The MANAGE INDEXES/KEYS.. button on the right of the Table Designer allows all indexes, including primary keys, to be defined in more detail.
The Primary Key is comprised of one or more fields that do not allow null values. Composite Keys can be defined by selecting a second column for the Primary Key. Select the Clustered option if the table is to be physically ordered in the sequence of the Primary Key.
Defining a Primary Key with the Manage Indexes/Keys..Window
ALTER TABLE person
ADD CONSTRAINT PK_person
PRIMARY KEY CLUSTERED (id)
Foreign Keys are the other half of a relationship between tables and link a child table to a parent table. The Foreign Key value should match directly to the value of the Primary Key.
Foreign Keys can be defined in SQL Server to automatically maintain the referential integrity of the table. Select the MANAGE RELATIONSHIPS option in the DESIGN TABLE window to define a Foreign Key.
The fields for the Primary Key Table and the Foreign Key Table are entered in the respective columns on the RELATIONSHIPS page of the TABLE window to create the relationship.
Enforcing the relationship for INSERTS and UPDATES will create the referential integrity constraint so that no OWNER of a CAR can be entered without a corresponding record in the PERSON table.
Defining a Foreign Key
A cascading delete will automatically delete linked records in the Foreign table if the record in the Primary table is deleted. Similarly, a cascade update allows a key value to be changed in both the Primary and Foreign tables if the value of the identifier changes.
Triggers are no longer required to perform cascading deletes.
ALTER TABLE car ADD
CONSTRAINT FK_car_person FOREIGN KEY
( owner ) REFERENCES person ( id )
ON DELETE CASCADE
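A cascade update is specified in the same way with ON UPDATE CASCADE. The sketch below recreates the constraint with both actions (the constraint must first be dropped if it already exists):

```sql
-- Drop and recreate the foreign key with both cascade actions.
ALTER TABLE car DROP CONSTRAINT FK_car_person

ALTER TABLE car ADD
CONSTRAINT FK_car_person FOREIGN KEY
( owner ) REFERENCES person ( id )
ON DELETE CASCADE
ON UPDATE CASCADE
```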
SQL Server allows the definition of user defined data types within a database. These can be useful to prevent inconsistencies in large database schemas where similar fields occur many times. A user defined data type might be defined for telephone and fax numbers, for example, to make sure all occurrences are of the same width throughout the database.
The outline view of the SQL Enterprise Manager can be used to define a user defined data type. The datatype is then available for use when defining fields with the Table Designer.
User Defined Data Type Window
Alternatively, system stored procedures can be used to define or drop user defined types from the database:
EXECUTE sp_addtype udtphone, 'varchar(20)'
EXECUTE sp_droptype udtphone
EXECUTE sp_help udtphone
User defined datatypes may be defined in the MODEL database and are then automatically copied into each new database.
These features were in popular use in earlier versions of SQL Server but have now been replaced with the use of constraints that have the advantage of being ANSI compatible and easier to define. Their use is not recommended.
A collection of default values may be defined independently in the database and then bound to individual fields or to a user defined data type. Unfortunately, changing the value of a default requires the default and all of its bindings to be dropped and recreated. The use of defaults has been replaced by ANSI compatible default constraints.
Simple validation rules (that reference only constant values) can also be defined in the database and bound to user defined data types or directly onto a field. These are now replaced with ANSI compatible check constraints.