Tuesday, April 28, 2009

BLOB Fields with NAV & SQL

BLOB = [Binary] Large OBject

Well, several tables in MS Dynamics NAV use such "BLOB" fields to store binary data like pictures. This can be quite convenient for the application, for example for creating Item catalogues that display such pictures, but from a NAV/SQL performance perspective BLOB can also be a real pain – I'd like to explain why, and what I recommend doing to avoid the problems …

 

The basics

A NAV "BLOB" data-type is actually the "image" data-type in SQL Server (SQL also features other LOB types as "text" or "ntext" but they aren't relevant for NAV).

[ Out of the box, NAV flags a "BLOB" field with the property "Compressed" = TRUE. If this is enabled, NAV uses a compression algorithm to store the data more compactly. Keep in mind that SQL Server does not recognize this "compression", so if you're using BLOB to transfer binary data from NAV to other SQL systems – which could be very smart, but that's a different story! – you should make sure to set "Compressed" to FALSE. But anyway … ]

By default, every table in NAV has a "Clustered Index" – and even though this is not a necessity for SQL Server, this is a very good idea! The general rule is: "Every table has a Clustered Index". Period.

An index is an internal balanced-tree (B-tree) like structure that helps to retrieve records quickly. Every SQL Server table can have multiple indexes. The B-trees consist of "index pages" ("root nodes", "index nodes" and "leaf nodes") which store the indexed values. The difference between a "Clustered Index" (CI) and a "Non-Clustered Index" (NCI) is that the "leaf node" level of a CI represents the table itself. Meaning: while all other index nodes only contain the indexed values – e.g. "No." of table "Item" – the "leaf nodes" include all the other fields – e.g. "Description", "Item Category", etc. Further, the CI defines how the data is physically stored and sorted within the table.

Hence, each table can have one CI and multiple NCIs. By default, the CI is generated on the basis of the "Primary Key" in NAV (which is mostly a good idea); all other "Keys" – if enabled and maintained – will be created as NCIs.
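
To see this for yourself, the following query – just a sketch against the CRONUS demo naming – lists the CI and NCIs of the "Item" table:

USE [Navision] -- change db name here
GO
-- index_id 1 = clustered index; 2 and above = non-clustered indexes
SELECT name, index_id, type_desc
FROM sys.indexes
WHERE object_id = OBJECT_ID('dbo.CRONUS$Item')
ORDER BY index_id
GO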

As mentioned, the CI contains all fields of a table. Really all? No, not the "BLOB" fields. Fields of type "image" in SQL can store up to 2 Gigabytes of data (which requires about 256,000 "pages" x 8 KB). So if that LOB data were included in the CI, this could dramatically increase the index size and severely degrade its performance in all aspects! That's why LOB fields are stored differently within the database, not as part of the "leaf nodes" of the CI (of course there are internal references).
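
SQL Server 2005 exposes this separation in its catalog views. The query below – again just a sketch against the hypothetical CRONUS naming – shows that LOB data lives in its own "LOB_DATA" allocation units, separate from the "IN_ROW_DATA" pages of the CI:

USE [Navision] -- change db name here
GO
-- LOB columns are stored in separate LOB_DATA allocation units (SQL 2005+)
SELECT au.type_desc, au.total_pages, au.used_pages
FROM sys.allocation_units au
JOIN sys.partitions p
ON (au.type IN (1, 3) AND au.container_id = p.hobt_id) -- in-row / row-overflow data
OR (au.type = 2 AND au.container_id = p.partition_id) -- LOB data
WHERE p.object_id = OBJECT_ID('dbo.CRONUS$Item')
GO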

 

The problem …

… in NAV is that the "BLOB" fields are always read with any query on the related record. In SQL Profiler one can see those queries, for example when reading from the "Item" table:

SELECT *, DATALENGTH("Picture") FROM "Navision"."dbo"."CRONUS$Item" WHERE …

The "SELECT *" queries all fields from a table, thus, also the BLOB field(s). With "DATALENGTH" additionally the size of that "BLOB" is determined (C/SIDE uses this info to deal with the field). As the BLOB field is not part of the CI "leaf nodes", SQL Server has to perform additional internal operations to gather the LOB data. Due to the specific storage of that "image" fields, the data could be "spread" widely within the database (it cannot be granted that a LOB field is stored within one continuous array of pages), thus SQL Server has to perform a lot more "page reads" then (shown in SQL Profiler as a unusual high number of "Reads", e.g. >= 1000).

And as always: a high number of "Reads" means a high consumption of cache space and CPU and may also result in physical I/O. Too many "Reads" waste precious system resources and degrade performance!
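
You can observe this effect yourself without Profiler – a minimal sketch, again assuming the CRONUS demo database; compare the "lob logical reads" reported for the two statements:

USE [Navision] -- change db name here
GO
SET STATISTICS IO ON
GO
-- what NAV effectively does: all fields plus the BLOB size
SELECT *, DATALENGTH([Picture]) FROM [dbo].[CRONUS$Item]
-- only regular fields: no LOB pages are touched
SELECT [No_], [Description] FROM [dbo].[CRONUS$Item]
GO
SET STATISTICS IO OFF
GO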

So much for the technical side. But the question should also be: why read the BLOB fields every time anyway?

In NAV there are a lot of important and heavily used tables which are "affected" by BLOB fields, such as "Item", "Customer" or "Vendor" (and maybe "Contact" and others). For example, the "Item" table is permanently queried when looking up items, creating an order, posting a shipment, posting an invoice, posting an item journal, etc. Within all these processes and transactions NAV fires a SELECT * on the table, reading the "Picture", too.

But what is the need for a "Picture" when posting an order or processing physical inventory? There is none! Nope. Zip. Nada. Pointless.

But using "BLOB" fields in such tables could be a real problem regarding performance.

 

The solution

Of course, the solution cannot be to abstain from using BLOB fields. Again: using BLOBs could be very smart, not just for storing pictures, but maybe also for XML data or documents.

To avoid performance issues it is just necessary to keep BLOBs away from business processes, and this can be done rather simply: store BLOBs in separate tables!

For example, when it is about the "Picture" in table "Item", just create a new NAV table, e.g. called "Item Picture", consisting of just two fields: "Item No." (Code20) and "Picture" (BLOB). Once this new table exists, the "Picture" can be transferred quite simply with some NAV programming – or, even faster, with some TSQL:

USE [Navision] -- change db name here
GO
INSERT INTO [dbo].[CRONUS$Item Picture] ([Item No_], [Picture])
SELECT [No_], [Picture] FROM [dbo].[CRONUS$Item] WHERE DATALENGTH([Picture]) <> 0
GO

Once the data has been transferred, the "Picture" can be erased from the old "Item" table – either by some C/AL code, or with this TSQL:

USE [Navision] -- change db name here
GO
UPDATE [dbo].[CRONUS$Item] SET [Picture] = NULL WHERE DATALENGTH([Picture]) <> 0
GO

(use the TSQL at your own risk – no warranties or anything)

Now the "Item" table is cleaned up – de-BLOBbed so to speak - so all kinds of business processes should not encounter trouble related to BLOB Reads. Then of course it is necessary to change the application, now looking up the "Picture" from the new table "Item Picture", but actually that's a piece of cake (e.g. ItemPicture.GET(Item."No.");). Have in mind that it isNOT necessary to disable the BLOB field in the table – just not using it is all it takes.

I highly recommend implementing this wherever BLOB is used in "transactional tables"! It's just a minor effort, but could have a remarkable impact on NAV/SQL performance. Further, to keep the BLOB's performance OK it is necessary to periodically maintain such tables with ALTER INDEX … REORGANIZE WITH (LOB_COMPACTION = ON) (SQL 2005/2008) or DBCC INDEXDEFRAG (SQL 2000) – see "Books Online" for details.
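
For SQL 2005/2008 such a maintenance step could look like this – a minimal sketch, assuming the new "Item Picture" table from above:

USE [Navision] -- change db name here
GO
-- reorganize all indexes of the picture table and compact the LOB pages
ALTER INDEX ALL ON [dbo].[CRONUS$Item Picture]
REORGANIZE WITH (LOB_COMPACTION = ON)
GO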

Unit Cost used in Negative Adjustment

If you are using Average Cost, a Negative Adjustment will take the average cost on the day the negative adjustment is valued (based on posting date and valuation date).

For example, when you do a Negative Adjustment in the Item Journal, Navision will default the unit cost to the Unit Cost from the Item Card. It doesn't matter whether you accept the default cost suggested by Navision or change the unit cost to some other cost, because the Adjust Cost - Item Entries batch job or the automatic cost adjustment function will adjust the unit cost back to the average cost based on the valuation date. Therefore, whatever cost you put in the Item Journal during a Negative Adjustment will not affect the cost used in the negative adjustment.

If you are using Standard Cost, the unit cost in the Item Journal will default to the Unit Cost on the Item Card, which is the current standard cost. Later, when the adjust cost function is executed, the unit cost posted by the negative adjustment will be adjusted to the unit cost of the positive entry which the negative adjustment is applied to. Let's consider the following example:

Jan 01, 2008 - Item A is created and standard cost is set to $10.
Jan 02, 2008 - Purchase 10 units of Item A at $10.
Feb 01, 2008 - Change standard cost of Item A to $12.
Feb 05, 2008 - Purchase 10 units of Item A at $12.
Feb 07, 2008 - Negative Adjustment of 2 units of Item A.

So, on Feb 07 2008, when 2 units of Item A are entered into the Item Journal, Navision will default the unit cost to $12. Accept the cost and post the Negative Adjustment. The Negative Adjustment will be posted at $12. You can verify this by checking the Item Ledger Entries and Value Entries. Now, run the Adjust Cost - Item Entries batch job. After the Adjust Cost batch job has completed successfully, check the Item Ledger Entries and Value Entries again. You should see an adjustment entry of -$2 per unit (-$4 for the 2 units) added to the Value Entries table to adjust the unit cost to $10.
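
If you prefer to check this on the SQL side, a query along these lines shows the original entry and the adjustment – just a sketch, assuming the standard CRONUS table and field naming:

USE [Navision] -- change db name here
GO
-- the cost adjustment shows up as an extra row with [Adjustment] = 1
SELECT [Entry No_], [Item No_], [Valued Quantity], [Cost per Unit],
[Cost Amount (Actual)], [Adjustment]
FROM [dbo].[CRONUS$Value Entry]
WHERE [Item No_] = 'A' -- hypothetical item no. from the example above
ORDER BY [Entry No_]
GO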

The Negative Adjustment will take $10 instead of $12 as unit cost because the negative adjustment entry has been applied to the purchase entry created on Jan 02 2008, which was posted at $10.

Top 30 'Top X List' Websites

What really gets on my nerves, even more so than Joe Pasquale's stupid song, are the all-too-popular sites that keep on producing their "Top X Lists". What happened to good old original "Content is King"? So if you can't beat them, join them. Here is my Top 30 list of websites that quite often produce Top X lists of crap...

  1. Smashing Magazine
  2. Oddee
  3. Web Designers Pot
  4. Six Revisions
  5. 1st Web Designer
  6. Noupe
  7. Mashable
  8. Specky Boy
  9. Naldz Graphics
  10. Net Tuts+
  11. Smash!ng Apps
  12. Outlaw Design Blog
  13. [Re]encoded
  14. Hong Kiat
  15. PXLshots.com Blog
  16. Design Reviver
  17. WebUpon
  18. Instant Shift
  19. Web Urbanist
  20. Photoshop Roadmap Blog
  21. Digital Labz
  22. Freelance Folder
  23. abduzeedo
  24. Web Design Ledger
  25. Toxel
  26. Just Creative Design
  27. Spyre Studios
  28. Listphobia
  29. Digital Photography School
  30. Life Hacker

Default trace or Change Log in SQL Server

We have all been subject to, or know someone who has been in, a situation where an object has been altered/created/deleted without our knowledge, and the application comes to a screeching halt. After fixing the problem, your boss asks you some questions: what happened, why did it happen, and who did it? SQL Server 2005 introduced a new type of trigger called a DDL trigger that can provide all the answers we need; however, you did not get a chance to implement this functionality. So... what do you do?

Some would tell their boss "I do not know, but I can find out" and then search frantically for 3rd party tools to read the transaction log, hoping for instantaneous salvation. What they do not know is that an answer is silently running in the background. SQL Server 2005 has built-in functionality that gives administrators the answers to all these questions.

The answers lie in a background trace called the default trace. The default trace is exactly what the name specifies: a trace. It is always running in the background of your instance, capturing events that administrators can use to troubleshoot problems. It is enabled by default and does not burden the system because it is fairly lightweight. Chances are you had not even noticed this trace running on your instance. To those concerned about overhead: yes, there is overhead, but in my mind the benefits far outweigh it. The default trace is not intended to replace DDL trigger functionality and should be used as a means to monitor a SQL instance, or to quickly obtain detailed information about problematic events.

The default trace does not capture all trace events, but it captures enough information to become a powerful tool in your toolkit. It records key information including auditing events, database events, error events, full text events, object creation, object deletion and object alteration. From my experience and observations on forums, I will be focusing on object-level events, as it seems most people want the "who done it" answer for object DDL events.

The first piece of code checks whether the default trace is enabled.

SELECT * FROM sys.configurations WHERE configuration_id = 1568 

If this feature is not enabled, you will have to configure the advanced option "default trace enabled". Below is the code to enable the trace. Note: you will need the ALTER SETTINGS permission or membership in the sysadmin or serveradmin fixed server role to reconfigure.

sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'default trace enabled', 1;
GO
RECONFIGURE;
GO

The next piece of information we need is the default trace file path, and the function below will return the current trace file. You can grab the initial trace file (log.trc) and roll up every trace file into a single table, but there is a higher overhead associated with bringing more data in. You should use the trace file that best represents the information you are looking for.

Note: the path defaults to the \MSSQL\LOG directory, but we can use the function below to get the path.

--get the current trace rollover file
SELECT * FROM ::fn_trace_getinfo(0)
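
Alternatively, the sys.traces catalog view (SQL 2005+) exposes the same information; the expression below – just a sketch – strips the file name from the current path to point at the initial log.trc in the same folder:

-- the default trace is the row with is_default = 1
SELECT REVERSE(SUBSTRING(REVERSE(path), CHARINDEX('\', REVERSE(path)), 256)) + 'log.trc' AS initial_trace_file
FROM sys.traces
WHERE is_default = 1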

Now that we have all the information we need, we can get into the trace data. Let's start by creating a new database called TraceDB.

USE [master]
GO
CREATE DATABASE TraceDB
GO

Now open the trace file, as shown below. As you can see, we are able to gather some pretty significant information about who created the database and when it was created. I have used a category id of 5 and a trace_event_id of 46 to filter the data correctly. Event ID 46 represents Object:Created and category 5 is Objects. I will provide queries that list all events and categories at the end of this article.

** Make sure to use your trace file path below. Yours may be different from mine.

SELECT
loginname,
loginsid,
spid,
hostname,
applicationname,
servername,
databasename,
objectName,
e.category_id,
cat.name as [CategoryName],
textdata,
starttime,
eventclass,
eventsubclass,--0=begin,1=commit
e.name as EventName
FROM ::fn_trace_gettable('C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\log.trc',0)
INNER JOIN sys.trace_events e
ON eventclass = trace_event_id
INNER JOIN sys.trace_categories AS cat
ON e.category_id = cat.category_id
WHERE databasename = 'TraceDB' AND
objectname IS NULL AND --filter by objectname
e.category_id = 5 AND --category 5 is objects
e.trace_event_id = 46
--trace_event_id: 46=Create Obj,47=Drop Obj,164=Alter Obj
  • You will see more than one entry per object create because these objects have two event subclasses, begin and commit. Each subclass will have an entry.
  • You can remove the databasename filter to get object creation events for all databases.

Results (Trimmed for Simplicity):


Now, we have seen what default trace is capable of. Let's create another object and repeat the query. This time around we are going to create a table called "MyTable". Use the following code to create the table.

USE [TraceDB]
GO
CREATE TABLE [dbo].[MyTable](
[id] [int] IDENTITY(1,1) NOT NULL,
[sometext] [char](3) NULL
) ON [PRIMARY]

Now query the default trace using the same query as above. Note you can use the ObjectName column to filter for the specific object you are looking for; otherwise all created database objects are returned.

WHERE databasename = 'TraceDB' AND
objectname = 'MyTable' AND --filter by objectname
e.category_id = 5 AND --category 5 is objects
e.trace_event_id = 46
--trace_event_id: 46=Create Obj,47=Drop Obj,164=Alter Obj

Results (Trimmed for Simplicity):


Let's take the demo a step further by altering MyTable. Issue an alter table statement and add a new column to MyTable, as shown below.

USE [TraceDB]
GO
ALTER TABLE MyTable
ADD col INT

We can now search the trace information for the alter event on MyTable, using the same query as before with one small modification: change the trace_event_id to 164, because event 164 represents the Object:Altered event.

WHERE databasename = 'TraceDB' AND
objectname = 'MyTable' AND --filter by objectname
e.category_id = 5 AND --category 5 is objects
e.trace_event_id = 164
--trace_event_id: 46=Create Obj,47=Drop Obj,164=Alter Obj

Results (Trimmed for Simplicity):


Now let's drop MyTable and view the trace details, as shown below.

USE [TraceDB]
GO
DROP TABLE MyTable
GO

We can view the trace data by changing the trace_event_id to 47, because event 47 represents the Object:Deleted event.

WHERE databasename = 'TraceDB' AND
objectname = 'MyTable' AND --filter by objectname
e.category_id = 5 AND --category 5 is objects
e.trace_event_id = 47
--trace_event_id: 46=Create Obj,47=Drop Obj,164=Alter Obj

Results (Trimmed for Simplicity):


As you can see, default trace gives an administrator the ability to find the history of any DDL transaction. I want to point out that default trace is not limited to object DDL history. Among other things, default trace captures log growth events, which can be invaluable to troubleshooting disk capacity problems.

For example, say your log file spontaneously grows enormous. It is important to understand why the log grew spontaneously. No one would argue that one of the first places to look may be SQL Jobs. There are many commands within a job that can potentially cause the log to grow enormously, like reindexing, bulk inserts, bulk deletes, etc. By using the trace data you can more easily identify the problem because you can pinpoint the exact time the log file began to grow. This greatly reduces the number of possible suspects, which reduces the amount of time required to find the culprit.

The query below will pull all trace data using the log auto-growth event. Note: you will not see any log growth for TraceDB because we have not done any large inserts to make the log grow. You should apply this query to another database where you want to monitor log growth.


SELECT
loginname,
loginsid,
spid,
hostname,
applicationname,
servername,
databasename,
objectName,
e.category_id,
cat.name,
textdata,
starttime,
endtime,
duration,
eventclass,
eventsubclass,
e.name as EventName
FROM ::fn_trace_gettable('C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\log.trc',0)
INNER JOIN sys.trace_events e
ON eventclass = trace_event_id
INNER JOIN sys.trace_categories AS cat
ON e.category_id = cat.category_id
WHERE databasename = 'TraceDB' AND
e.category_id = 2 AND --category 2 is database
e.trace_event_id = 93 --93=Log File Auto Grow

Summary:

The default trace is a valuable tool for the modern DBA's tool belt. It offers a wealth of information, while minimally impacting the system. The default trace is not a widely publicized feature of SQL Server 2005, but it is slowly gaining fame. It gives administrators the ability to get detailed information about auditing events, database events, error events, full text events, object creation, object deletion and object alteration events. With this much information at their fingertips, administrators are more productive and can more easily identify problems in a production environment. My recommendation is to look through the events and see what information already exists for your instances. The default trace should not only be used reactively but proactively. A proactive mentality will reveal small problems before they escalate into bigger problems.

Event and Category Queries

--list of events
SELECT *
FROM sys.trace_events
--list of categories
SELECT *
FROM sys.trace_categories
--list of subclass values
SELECT *
FROM sys.trace_subclass_values
--Get trace Event Columns
SELECT
t.EventID,
t.ColumnID,
e.name AS Event_Descr,
c.name AS Column_Descr
FROM ::fn_trace_geteventinfo(1) t
INNER JOIN sys.trace_events e
ON t.eventID = e.trace_event_id
INNER JOIN sys.trace_columns c
ON t.columnid = c.trace_column_id

Monday, April 27, 2009

All the Tables from All Databases

EXEC sp_msforeachdb 'SELECT ''?'' AS db, * FROM [?].sys.tables'
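
If you also want an approximate row count per table, a variant along these lines works on SQL 2005 and later (just a sketch; sys.partitions row counts are approximate):

-- index_id 0 (heap) or 1 (clustered index) counts each row exactly once
EXEC sp_msforeachdb 'SELECT ''?'' AS db, t.name AS table_name, SUM(p.rows) AS row_count
FROM [?].sys.tables t
JOIN [?].sys.partitions p ON p.object_id = t.object_id AND p.index_id IN (0, 1)
GROUP BY t.name'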