Wednesday, April 27, 2011

VB and VB.NET - Difference


This is an old topic, but we come across the same question very frequently, particularly when an application (OCX) has to be migrated from VB.

What are some of the basic differences between VB & VB.NET?
Answer:

The following section lists some of the differences I came across when faced with a similar migration requirement. These differences are mostly in reference to VS2005.

1. VB.NET provides managed code execution that runs under the Common Language Runtime (CLR) resulting in robust, secure and stable applications. All features of the .NET framework are readily available in VB.NET.

2. VB.NET is entirely object oriented. This is a major addition that VB6 didn’t have.

3. The .NET framework comes with ADO.NET, which follows the disconnected paradigm, i.e. once the required records are fetched, the connection no longer exists. It also retrieves the records that are expected to be accessed in the immediate future. This greatly enhances the scalability of the application.

4. Error handling has changed in VB.NET. The Try-Catch-Finally block has been introduced to handle errors and exceptions as a unit, allowing appropriate action to be taken at the place the error occurred and discouraging the use of the ON ERROR GOTO statement. This addition greatly improves code maintainability.

5. The Event Log model, OLE automation and DDE are not available in VB.NET. We need to use other kinds of inter-process communication instead of DDE and OLE.

6. The properties, events and attributes of forms and controls (list box, command button, e.g. the mask colour property) have undergone a comprehensive change; it is better to refer to MSDN for further details. For example, FileCopy is replaced by File.Copy and the Kill statement by File.Delete.

7. The property bag approach used in VB to persist data between instances is completely replaced by the serialization concept.

8. VB.NET supports free threading, as opposed to VB6's single-threaded apartment model. In many situations developers need to spawn a new thread to run as a background process and increase the usability of the application. VB.NET allows developers to spawn threads wherever they need to, giving them freedom and better control over the application.

9. Security has become more robust in VB.NET. In addition to the role-based security of VB6, VB.NET comes with a new security model, Code Access Security (CAS). This model controls what the code can access; for example, you can set the security on a component such that it cannot access the database. CASPOL.EXE is the tool with which this can be controlled to the widest extent. This type of security allows building components that can be trusted to varying degrees.

10. The CLR takes care of garbage collection, i.e. it releases the memory held by objects that are no longer in use. This relieves the developer from having to devise ways to manage memory; the CLR does it for them.

11. VB 6.0 supported Null propagation — when Null was used in an expression, the result of the expression would also be Null. Null propagation is no longer supported in VB.NET.
In Visual Basic 6.0, the Null keyword indicated that a variable contained no data and the IsNull function was used to test for Null. In Visual Basic .NET, the Null keyword is a reserved word and has no syntactical value. The IsNull function is no longer supported.
Null is converted to DBNull, and IsNull to IsDBNull. The behavior of DBNull is slightly different from that of Null: Null could be used in functions and assignments; DBNull cannot.

Sunday, April 17, 2011

SEO - Directory Submission

It is only recently that I became interested in writing things down myself. In today's world it is quite presumable that I would do all of this writing online, and being involved in IT-related work, it is very much expected of me as well. In the process I noticed the importance of SEO and one of its techniques, called directory submission.
Here is a short note on my understanding of the topic.

Introduction
The key objective of every web site owner is to increase the amount of traffic the site receives. Traffic is increased by improving the site's rank with search engines such as Google and Yahoo!. The search engines determine the position of a website in a search on the basis of a formula, and one of the key factors in this formula is the number of back links a web site has. Quality incoming links improve the search engine rankings and internet visibility of a website. Directory submission is a proven method for gaining quality one-way incoming back links.

Use of Directory Submission

Directory submission is an SEO technique that helps generate online traffic to a website. It works by gaining quality backlinks and higher positions in the SERPs, and it is one of the most vital parts of a link building campaign. The process works by submitting to a directory by filling out a few forms; thereafter, the directory owner either accepts or rejects the entry. Usually, the entry is accepted. After acceptance, the directory owner provides a quality back link for that web site.

Ways of Directory Submission

Automated Directory Submission:
An easier way of doing this is through automated directory submission, which can be done through tools. For example, an automatic directory submission tool named DirectorySubmissionTools.com provides one-click directory submission to hundreds, even thousands of directories. However, it charges a relatively small fee, such as USD 9.99 per credit to submit to 1,400 directories.
Some of the downfalls of automated directory submission are:
1. Automated submission often puts websites in the wrong categories. This affects search engine ranking badly.
2. Automated submission misses some site submission verification codes, usually those used by the top directories.
3. Automated submission software submits to many directories at one time. This may get the website banned by the search engines for spamming.
4. Most automated submission software cannot support different titles and descriptions.
5. Automated submission usually does not check whether a directory it submits to is still alive, which may result in submissions to dead directories.

Manual Directory Submission:
Manual submission is more laborious; however, it is the better option, as many web directory sites do not allow automated submissions from software. Such web directories ensure that only real human beings post to their directory.

Types of Directories for Submission

· Paid Directory Submission: A paid web directory provides premium service with faster approval times. These sites deliver either high quality traffic or good PageRank. Usually they are less crowded, giving a link a greater chance of landing on a high-PR page. A list of paid directories can be found at:

· Niche Directories: Submitting a website to a niche directory provides a link from a related niche with relevant content. A list of niche directories can be found at: http://www.directorycritic.com/niche-directories.html

· Free Directory Listing: A comprehensive list of free directories can be found at: http://www.best-web-directories.com/free-directories.htm

Benefits of Directory Submission

Some of the benefits of directory submission are:
· Increase in Backlinks: The main purpose of directory submission is link building. The increase in backlinks is the biggest benefit of web directory submissions.
· Search Engine Indexing: Search engines crawl to the website by following the back links on the directory sites. This increases the chance of the website being indexed in search engine databases.
· Frequent Bot Visits: Search engine robots visit sites all over the web to gather information for their archives. An increase in the number of backlinks increases the frequency of the visits of search engine robots, which improves the internet visibility of the website by enhancing its rank.
· Increase in Online Traffic: Many people browse web directories for relevant content with specific keywords or search terms. This takes them to the listed websites with related content, and the listing increases the website's internet visibility.
· Anchor Text: A desired anchor text can be used as a title which is eventually hyperlinked to the website. This provides the website an anchor text backlink from the directories. Directory submissions using a keyword as the anchor text increase link popularity.
· Link Exchange Partners: Web directories are the best place to find link exchange partners, since related sites are categorised there. By visiting related sites one can check whether they have any link exchange programs. However, link exchange is not used by many webmasters these days due to its reduced effectiveness.

Monday, April 11, 2011

Teradata - Month End Date

Sql Server:
Calculating the last day of the month in SQL Server is pretty simple.
Run the following piece of code and it's done:
SELECT DATEADD(s,-1,DATEADD(mm, DATEDIFF(m,0,GETDATE())+1,0))
CurrentMonthEnd_Day
ResultSet:
CurrentMonthEnd_Day
———————–
2010-08-31 23:59:59.000


Teradata:
However, in Teradata SQL, our day-to-day queries in a data warehouse environment very frequently have to play with month-end dates, which are mostly kept in a single field of the date dimension table, i.e. the month-end date field.
You might need to fetch the month-end date from 4 years back. Usually the simplest way is to calculate 48 months back from the current month-end date (in case month-end dates that far back are not stored in the database). This is fair, and the piece of code required is simple.
SELECT (ADD_MONTHS(CAST('2012/08/31' AS DATE),-48))
Result:
2008/08/31
However, this simple approach will not work when a leap year is involved:
i.e. SELECT (ADD_MONTHS(CAST('2012/02/28' AS DATE),-48))
Result:
2008/02/28 -- Wrong.
Solution? Simple.
SELECT (ADD_MONTHS(CAST('2012/02/28' AS DATE) + 1,-48)-1)
Result:
2008/02/29 -- Correct.
This form would definitely work for the other cases too, e.g.
SELECT (ADD_MONTHS(CAST('2012/08/31' AS DATE)+1,-48)-1)
Result:
2008/08/31
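
Going a step further, here is a hedged sketch of a general-purpose expression (assuming the session accepts date literals as in the examples above): step back to the first day of the month, jump one month ahead with ADD_MONTHS, and subtract a day. This yields the month-end for any arbitrary date, leap years included.
-- Month-end for any given date, not just another month-end
SELECT ADD_MONTHS(CAST('2012/02/15' AS DATE)
       - EXTRACT(DAY FROM CAST('2012/02/15' AS DATE)) + 1, 1) - 1;
Result:
2012/02/29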

Saturday, April 9, 2011

Managing Memory


Let us discuss some aspects of performance tuning and optimization of SQL queries. The following terms are quite relevant in this regard, although we do not usually think about these counters unless we are seriously having issues with SQL Server memory or the performance of a heavy query.
Buffer Cache Hit Ratio
The buffer cache is a pool of buffer pages. Data pages occupy the biggest part of this memory, and pages are mostly read from this cache. In OLTP systems this counter value should be as high as possible, ideally 99 percent in most cases. It is quite the reverse in an OLAP or DW environment: a cache requirement still exists, but since most of the work is on bulk data (billions of records) the cache requirement is lower, and the frequency of incoming requests can be much lower than, say, transactions in a bank. A low buffer cache hit ratio means that only a few requests could be served out of the buffer cache, with the rest of the requests being served from disk.
If this truly happens, then either SQL Server is still warming up or the memory requirement of the buffer cache is more than the maximum memory available.
Lazy Writes/Sec
The Lazy writes/sec counter records the number of buffers written each second by the buffer manager's lazy writer process, which removes aged buffers from memory and frees the memory up for other usage. The Lazy writes/sec value should consistently be less than 20 for the average system. A dirty, aged buffer is one that holds changes that still have to be written to disk. High values on this counter possibly indicate I/O issues or even memory problems.
Page Life Expectancy
Page Life Expectancy indicates how long a page will stay in the buffer pool without being referenced. A low number on this counter means that pages are being flushed from the buffer, which lowers the efficiency of the cache and indicates the possibility of memory stress. On reporting systems, as opposed to OLTP systems, this number may remain at a lower value since more data is scanned on reporting systems. A reasonable value to expect here is 250-300 seconds or more.
 Checkpoint Pages/Sec
The Checkpoint Pages/sec counter represents the number of pages that are written to disk by the checkpoint operation. These numbers should be low, e.g. less than 30 per second for most systems. A higher number means more pages are being marked as dirty in the cache: when a page is modified in memory it is treated as a dirty page and is written to disk by the next checkpoint operation. High values on this counter may point to I/O problems, since they indicate a larger number of writes occurring within the system.


You can read more elsewhere on these counters and how to fetch their values; a quick way to do so from within SQL Server itself is sketched below.
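
A minimal sketch using the sys.dm_os_performance_counters DMV (available from SQL Server 2005 onwards). Note that 'Buffer cache hit ratio' has to be divided by its companion base counter to yield a percentage, and the per-second counters are cumulative, so two samples taken over an interval are needed for a true rate:
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Buffer cache hit ratio',
                       'Buffer cache hit ratio base',
                       'Lazy writes/sec',
                       'Page life expectancy',
                       'Checkpoint pages/sec');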

Friday, April 8, 2011

Technology Day @Microsoft

It was quite some time back, almost 2 years ago, that I attended a session through my company. I understand technology is travelling too fast for us to keep pace with. Still, a small note:

Microsoft Innovation Day
(Full Day Session)

Topics:

· Ajax
· Query Tuning
· Live Mesh



The key characteristics of SaaS software, according to IDC, include:
a) Network-based access to, and management of, commercially available software.
b) Activities are managed from central locations rather than at each customer's site, enabling customers to access applications remotely via the Web.
c) Application delivery typically is closer to a one-to-many model (single instance, multi-tenant architecture) than to a one-to-one model, including its architecture, pricing, partnering, and management characteristics.
d) Centralized feature updating, which obviates the need for downloadable patches and upgrades.
e) SaaS is often used in a larger network of communicating software, either as part of a mashup or as a plug-in to a platform as a service. Service-oriented architecture is naturally more complex than traditional models of software deployment.
f) SaaS applications are generally priced on a per-user basis, sometimes with a relatively small minimum number of users and often with additional fees for extra bandwidth and storage. SaaS revenue streams to the vendor are therefore lower initially than traditional software license fees, but are also recurring, and therefore viewed as more predictable, much like maintenance fees for licensed software.

Software as a Service (SaaS, typically pronounced 'sass') is a model of software deployment where an application is hosted as a service provided to customers across the Internet. By eliminating the need to install and run the application on the customer's own computer, SaaS alleviates the customer's burden of software maintenance, ongoing operation, and support. Conversely, customers relinquish control over software versions or changing requirements; moreover, costs to use the service become a continuous expense, rather than a single expense at the time of purchase. Using SaaS can also conceivably reduce the up-front expense of software purchases, through less costly, on-demand pricing.


Azure is an internet-scale cloud services platform hosted in Microsoft datacenters, which provides an operating system and a set of developer services that can be used individually or all together. The Azure platform is a similar kind of technology provided by Microsoft using cloud computing; some examples are Windows Live, Office Live, CRM Online, etc.

Cloud Computing is a paradigm in which information is permanently stored in servers on the Internet and cached temporarily on clients, which include desktops, entertainment centers, tablet computers, notebooks, wall computers, handhelds, sensors, monitors, etc.



WebSlice is an integral part of AJAX here. It enables publishers to mark up, and users to subscribe to, a part of a web page, allowing you to monitor your favourite dynamic pieces of the web. Thus, on an AJAX-developed page you need not press F5 to refresh the whole page every time you need an update.
The other key features are:
· Script Combination
· History Support
· REST services
· MVC & Dynamic data support
· jQuery – Independent Library
· Flexible DOM Manipulation


a) Coding Issues
· Excessive scanning
· Poor / missing indexes
· Out-of-date table statistics
· Non-selective triggers
· WHERE clause not limiting the record set
· Excessive recompiles
· Long blocking durations
b) WHERE Clause (see the sketch after these lists)
· Limit the number of columns
· Avoid NOT, OR
· Avoid redundant criteria
· Avoid calculated columns
· Avoid functions
· Avoid pattern search

c) JOINS
· Avoid join hints
· Join on indexed columns only
· Avoid bookmark lookups
· Avoid linked servers
· Select criteria from the table with the fewest records
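
As a small hedged illustration of the WHERE-clause advice (the table dbo.Orders and its index on OrderDate are hypothetical): wrapping an indexed column in a function forces a scan, while the equivalent range predicate can use an index seek.
-- Non-SARGable: the function on the column defeats the index on OrderDate
SELECT OrderID FROM dbo.Orders WHERE YEAR(OrderDate) = 2010

-- SARGable rewrite: the range predicate can seek on the same index
SELECT OrderID FROM dbo.Orders
WHERE OrderDate >= '20100101' AND OrderDate < '20110101'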



Managed code in the .NET environment, as well as unmanaged code interacting with the operating system and the machine (hardware), can be debugged well using these tools. ILAsm and ILDasm are very powerful for debugging .dll files, and SoS can help in detecting memory leaks very efficiently.

· NGEN
· SoS
· ILAsm
· ILDasm
· Fusion Viewer
· PEVerify
· FxCop
· JUnit





Live Mesh is an implementation of cloud computing technology. You can keep all your computing devices synchronized using this mesh: whether you are on your system or on your mobile, you can keep all your devices in sync. Log on to http://www.mesh.com/ to know more about it.
Create a Windows Live profile, in case you do not have one already, and log in to see the magic.


The key features are:
· IE7 Emulation mode
· Conditional Commenting
Open any HTML page, right-click and view the source. In case it is compatible with IE 7.0 and you are unable to view the page on IE 8.0, add the following tag:

     <head>
     <meta http-equiv="X-UA-Compatible" content="IE=7" />
     </head>

· Option model for IE 7.0
· Developer tools and toolbar
· Switching between IE 7.0 and IE 8.0
· Accelerators (advanced & customized) on a simple right-click on any text.

***************************     Thank You   ***************************************

Wednesday, April 6, 2011

Automating stored procedure execution

The sp_procoption system stored procedure can be used to mark a user stored procedure for automatic execution when SQL Server starts. Only objects in the master database owned by dbo can have the startup setting changed, and this option is restricted to procedures that have no parameters.
USE master
EXEC sp_procoption 'indRebuild', 'startup', 'true'
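
To verify the setting afterwards, one hedged option is the OBJECTPROPERTY function with the ExecIsStartup property:
USE master
SELECT OBJECTPROPERTY(OBJECT_ID('indRebuild'), 'ExecIsStartup') AS IsStartupProc
-- 1 = marked for automatic execution at startup, 0 = not marked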

Select count(*)

SELECT COUNT(*) from a table usually does a full table scan to return the total row count, and the time taken can be considerable for large tables. In most cases it is advisable to use another approach instead: the sysindexes system table. The ROWS column in the sysindexes table contains the total row count for each table in your database, so you can use the following SELECT statement instead of SELECT COUNT(*):

SELECT rows FROM sysindexes
WHERE id = OBJECT_ID('table_name')
     AND indid < 2
The speed can be improved manifold. 
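
On SQL Server 2005 and later, the same shortcut is available through the sys.partitions catalog view, the suggested replacement for sysindexes. A quick sketch:
SELECT SUM(rows) AS row_count
FROM sys.partitions
WHERE object_id = OBJECT_ID('table_name')
      AND index_id IN (0, 1)   -- 0 = heap, 1 = clustered index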

Tuesday, April 5, 2011

Data Mocking & Data Masking

I am currently working on an article as an introduction to data mocking (what we call mock-ups) and data masking. In the database world we come across these terms very frequently. I understand that you, being database enthusiasts, might be very familiar with these terms; however, not everyone is, and the target audience for the article is exactly those readers.

Sunday, April 3, 2011

SQL Optimizer Parameterization


Plan Recompilation

Once the Query Optimizer has come up with a plan, which may have taken a considerable amount of work, SQL Server does its best to ensure that you can leverage all that costly work again. It does this by caching the plan it just created, and taking steps, via parameterization options, to ensure that the plan is reused as widely as possible.

Parameterization

Parameterization is a process whereby SQL Server takes the T-SQL you entered and looks for ways to replace values that may be variables with a token, so that if a similar query is processed, SQL Server can identify it as being the same underlying query, apart from some string or integer values, and make use of the already cached plan. For example, the following is a basic T-SQL query to return data from the Adventure Works 2008 database:
   select *
   from person.person
   where lastname = 'fluffy'
The parameterization of this query would result in the string 'fluffy' being replaced with a parameter, such that if another user executes the following query, the same plan would be used, saving on compilation time:
   select *
   from person.person
   where lastname = 'jonathan'
Note that this is just an example; this particular query gets a trivial plan, so it isn't actually a candidate for parameterization.
The SQL Server Books Online topic on "Forced Parameterization" contains a lot of very specific details about what can and cannot be converted to a parameter.
To determine whether a query has been parameterized, you can search for it in the system view sys.syscacheobjects (after first executing the query to ensure it is cached). If the sql column of this view shows that the query has been parameterized, you will see that any literals from the query have been replaced by variables, and those variables are declared at the beginning of the batch.
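
As a hedged sketch of such a lookup (run the example query first so that it is cached):
SELECT cacheobjtype, objtype, usecounts, sql
FROM sys.syscacheobjects
WHERE sql LIKE '%person.person%'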

Parameterization is controlled by one of two SQL Server configuration options: simple or forced.

§  Simple Parameterization: The default operation of SQL Server is to use simple parameterization on all queries that are suitable candidates. Books Online provides numerous details about which queries are selected and how SQL Server performs parameterization. Using simple parameterization, SQL Server is able to parameterize only a relatively small set of the queries it receives.
§  Forced Parameterization: For more control over database performance, you can specify that SQL Server use forced parameterization. This option forces SQL Server to parameterize all literal values in any SELECT, INSERT, UPDATE, or DELETE statement. There are some exceptions to this, and they are well documented in SQL Server Books Online. Forced parameterization is not appropriate in all environments and scenarios: it is recommended only for a very high volume of concurrent queries, and when you are seeing high CPU from a lot of compilation/recompilation. If you are not experiencing a lot of compilation/recompilation, then forced parameterization is probably not appropriate, and using it in the absence of these symptoms may degrade performance and/or throughput, as SQL Server takes more time to parameterize a lot of queries that are never reused.
Forced parameterization can also be more finely controlled through the use of plan guides.
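
The option itself is set per database. A minimal sketch, reusing the Adventure Works 2008 database named above (the exact database name may differ in your installation):
ALTER DATABASE AdventureWorks2008 SET PARAMETERIZATION FORCED
-- ...and back to the default behaviour:
ALTER DATABASE AdventureWorks2008 SET PARAMETERIZATION SIMPLE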

What is Normalization?

Normalization in the simplest terms:
Normalization is the process of organizing data in a database. The goal is to eliminate redundant data (for example, storing the same data in more than one table) and to ensure sensible data dependencies; in other words, storing only related data in a table.

First Normal Form (1NF)
First normal form (1NF) is the basic step:
    * Eliminate duplicate columns from the same table (mind you, not duplicate rows).
    * Create separate tables for each group of related data and identify each row with a unique column or set of columns (the primary key).

Second Normal Form (2NF)
Second normal form (2NF) further removes duplicate data:
    * Should meet the first normal form.
    * Remove subsets of data that apply to multiple rows of a table and place them in separate tables.
    * Create relationships between the new tables and their parents through the use of foreign keys.

Third Normal Form (3NF)
Third normal form (3NF) goes one step further:
    * Should meet the second normal form.
    * Remove columns that are not dependent upon the primary key.

Fourth Normal Form (4NF)
Finally, fourth normal form (4NF) has one additional requirement (a small DDL sketch follows this list):
    * Meet the third normal form requirements.
    * There should be no multi-valued dependencies.
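
As promised, here is a small hedged DDL sketch (all table and column names are invented for illustration). The flat design repeats customer details on every order row; the normalized design stores them once and references them through a foreign key:
-- Unnormalized: customer details repeat on every order row.
CREATE TABLE OrdersFlat (
    OrderID      int,
    CustomerName varchar(50),
    CustomerCity varchar(50),
    ProductName  varchar(50)
);

-- Normalized to 3NF: customer data lives once, orders reference it.
CREATE TABLE Customers (
    CustomerID   int PRIMARY KEY,
    CustomerName varchar(50),
    CustomerCity varchar(50)
);
CREATE TABLE Orders (
    OrderID     int PRIMARY KEY,
    CustomerID  int FOREIGN KEY REFERENCES Customers(CustomerID),
    ProductName varchar(50)
);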

The significance is high up to the third normal form; thereafter the need and use taper off a little. However, we should not forget the importance of denormalized databases in OLAP settings. Please refer to the MS SQL Server Books Online for more details.

Some more benefits of normalization:
a) Optimized queries (because normalized tables produce rapid, efficient joins)
b) Faster index creation and sorting (because the tables have fewer columns)
c) Data integrity (because there is no redundant or neglected data)
d) Faster UPDATE performance (because there are fewer indexes per table)
e) Improved concurrency resolution (because table locks will affect less data)

Database pages simplified..

Database Pages
A page is the unit by which SQL Server data is stored. SQL Server 2000 views each data file and breaks it down into 8 KB pages; there are 128 pages in 1 MB. A page can only belong to one object. The beginning of each page has a header that contains information about the characteristics of that page. The page header takes up the first 96 bytes of each page, leaving 8060 bytes for actual data. The header contains information such as the PageID, previous page, next page, amount of free space available, the object ID of the object that owns the page, etc. By checking the header, SQL Server can quickly determine the status of each page, making it easily identifiable.
To keep track of allocations, SQL Server stores allocation information in certain page types. Let's have a look at them:

1) Global Allocation Map (GAM) Pages
A GAM page reports whether an extent is free or allocated. It has a bit (1 or 0) to keep track of each extent allocation in the interval it covers: a bit set to 1 represents a free extent, and a bit set to 0 represents an extent that is in use. Each GAM page covers about 64,000 extents, which is about 4 GB of data; therefore, a data file has a GAM page for every 4 GB of data. GAM pages reside on page 2 of each data file and appear again every 511,230 pages after the first GAM page.

2) Shared Global Allocation Map pages (SGAM)
SGAM pages record which extents are currently used as mixed extents and have at least one unused page. A bit set to 1 represents a mixed extent with at least one free page, and a bit set to 0 represents an extent which is either not a mixed extent, or is mixed but has no free pages. SGAM pages cover about 64,000 extents, which is about 4 GB of data. SGAM pages reside on page 3 of each data file and appear again every 511,230 pages after the first SGAM page.
The following shows the bit settings for each extent in the GAM and SGAM pages:

Current status of the extent              GAM bit    SGAM bit
Free extent                               1          0
Uniform extent, or full mixed extent      0          0
Mixed extent with free pages              0          1

3) Index Allocation Map (IAM) Page
IAM pages mark the extents that are allocated to a heap or index, as well as the extents that are allocated to an ntext, text, or image page chain for any table that has columns of these types. Each heap, index, and table with the above data types has a chain of one or more IAM pages to keep track of the extents allocated to it. The number of IAM pages for an object depends on whether the number of extents exceeds the range that one IAM page can record, and on whether multiple data files host the allocated extents for that object: an IAM page is needed for every data file that hosts extents for the object. IAM pages are allocated as needed and are located randomly in the data file.

4) Page Free Space (PFS)
PFS pages indicate whether the pages in a heap or in an ntext, text, or image column are allocated, and also report the amount of free space on each page. Every PFS page covers around 8,000 pages, and the next one appears about 8,000 pages after the first. For every page, the PFS tracks which of the following free-space bands it falls into:
a) Empty
b) 1-50% full
c) 51-80% full
d) 81-95% full
e) 96-100% full
Once an extent is allocated to an object, SQL Server uses the PFS pages to keep track of free and used pages. PFS pages are the first page in the data file after the file header page.
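
For the curious, page headers and allocation pages can be inspected with the undocumented DBCC PAGE command. A hedged sketch (the database, file and page numbers are only illustrative; try this on test systems only):
DBCC TRACEON (3604)             -- route DBCC output to the client session
DBCC PAGE ('master', 1, 2, 3)   -- database, file 1, page 2 (the first GAM page), print option 3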

Friday, April 1, 2011

Orphaned Distributed Transaction SPID

Explanation:
This example is of an orphaned distributed transaction SPID, which specifically involves MSDTC, the Microsoft Distributed Transaction Coordinator. This is what happens when a transaction involves data which resides on more than one server (such as when a database record is replicated out to two or more servers, where MSDTC needs to become involved) and one of the following happens:
a) A server drops off the network
b) There's a power outage at the distributor server.



The Negative SPID (-2) in SQL Server
Phenomenon: Maybe you have been investigating a performance issue in SQL Server, or exploring the error log file for some time, when to your astonishment you notice that there is a negative SPID (-2) at the top of the blocking chain, and it is the cause of all the deadlocks.

What next?
Definitely you would like to wait (not sure how long?) or try to kill the process.

Here is what you have when you try any of them:
Wait: No use waiting, as the process will not complete/refresh until a server restart.
Kill: Yes, of course you can try to kill the process by executing KILL -2. Here is your prize:
Msg 6101, Level 16, State 1, Line 1
Process ID -2 is not a valid process ID. Choose a number between 1 and 2048

Anything else in Pandora's box?
No...?

Solution:
Try this.
SELECT req_transactionUOW
FROM master..syslockinfo
WHERE req_spid = -2

This will return a 32-digit UOW number which looks like a GUID, something like 'BCD12078-0199-1212-B810-AB46A24F2498'.
Run the KILL command with the above UOW.
KILL ‘BCD12078-0199-1212-B810-AB46A24F2498’
Now run sp_who2 or sp_who. To your surprise, SPID -2 is GONE.

SQL ResultSet in CSV Format

Very often there might be a requirement to dump query results straight into CSV format, or in other words into a .CSV file. Here is what we need to do to achieve this.

1) In case we are using OSQL through JCL to call the stored proc, we need to add a new parameter (-s) as "," or ";" to indicate the column separator. E.g. SET PARM5= -s %";"%
The other parameter (-o) needs to be modified to specify .CSV as the output file extension instead of .TXT.
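Putting the parameters together, a hedged sketch of a complete OSQL invocation (server, procedure and file names are hypothetical); note that the -h-1 switch suppresses the column headers, which also bears on unsolved issue 1 noted below:
osql -S MyServer -E -Q "EXEC dbo.MyProc" -s "," -o results.csv -h-1 -w 4000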

2) This is not all. The stored proc needs to be modified as well to accommodate the changes. Please refer to the different approaches below.

SELECTED APPROACH:

a) SELECT 'col_1' + ',' + 'col_2' + ',' + ' '
    UNION ALL
    SELECT CONVERT (varchar, col_1) + ','
             + CONVERT (varchar, col_2) + ',' + ' '

ALTERNATIVE APPROACH:

a) SELECT 'col_1', ',' , 'col_2'
    UNION ALL
    SELECT CONVERT (varchar, col_1), ','
                , CONVERT (varchar, col_2).................cont.

b) SELECT  CONVERT( varchar,col_1) AS [col_1,]
                ,CONVERT( varchar,col_2) AS [col_2,]........cont.

NOTE:
Conversion to varchar is required to append the character separator.

ISSUES WITH ALTERNATIVE APPROACH:
1) Alignment problems with alphanumeric columns.
2) Space issues which could not be handled through generic SQL trim functions.
3) Unoptimized file size.
4) Truncation of data in some cases if the varchar size is not carefully specified.
5) Last column of unusual width.

UNSOLVED ISSUES:
1) The column header being present in the final CSV output file.
2) The date field losing its formatting once converted to varchar and exported to Excel.
Unresolved issue #1 can be resolved by the following approaches.

The SQL output contains two extra records: the header names in the first line and the underline (----) in the second line. These can be removed as follows:
1. We can create a DTS package to export the data into a text file and call that DTS package from the JCL script. This removes the header lines.
2. These lines can be skipped directly from the JCL script itself with the following code.

We just need to provide the record length, e.g. LRECL=357:

//SORTCPY EXEC PGM=SORT
//SORTIN DD DSN=&&UCMDFILE, DISP=OLD
//SORTOF01 DD DSN=&&UCMDSORT,
// DISP= (, PASS, DELETE),
// DCB= (RECFM=FB, LRECL=357),
// SPACE= (2048, (500,500), RLSE)
//SYSOUT DD SYSOUT=*
//SYSIN DD *
SORT FIELDS=COPY
OUTFIL FILES=01, STARTREC=2



Table Variable OR Temporary Table

A few considerations:

Table variables will perform poorly with large record sets, especially when doing joins, because there can be no indexes other than a primary key. Beware, though, when many users start using table variables: large amounts of RAM can be consumed, because small table variables tend to be stored and processed directly in memory (although, like temporary tables, they are backed by tempdb). Table variables should hold no more than 2 to 3 MB of data each (depending on user load and system memory).
There are some more limitations to the usage to be kept in mind:
 
• Table variables do not maintain statistics like temporary tables can. Statistics cannot be created on table variables either automatically or by using the CREATE STATISTICS statement. Therefore, for complex queries on large tables, the lack of statistics may prevent the optimizer from determining the best plan for a query, thus affecting its performance.
• The table definition cannot be changed after the initial DECLARE statement.
• Table variables cannot be used in an INSERT EXEC or SELECT INTO statement.
• CHECK constraints, DEFAULT values, and computed columns in the table type declaration cannot call user-defined functions.
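
A short hedged sketch contrasting the two (dbo.GetOrders is a hypothetical procedure):
-- Table variable: definition is fixed at DECLARE; indexes come only from
-- PRIMARY KEY / UNIQUE constraints, and no statistics are kept.
DECLARE @Orders TABLE (OrderID int PRIMARY KEY, Amount money);

-- Temporary table: statistics are maintained, extra indexes can be added
-- later, and it can be the target of INSERT ... EXEC.
CREATE TABLE #Orders (OrderID int PRIMARY KEY, Amount money);
CREATE INDEX IX_Orders_Amount ON #Orders (Amount);
INSERT #Orders EXEC dbo.GetOrders;   -- dbo.GetOrders is a hypothetical proc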