
2 September 2014

SQL Server - Replication Administration Interview Questions and Answers - DotNet Brother

Does activity need to be stopped on a database when it is published?

No. Activity can continue on a database while a publication is being created. Be aware that producing a snapshot can be resource-intensive, so it is best to generate snapshots during periods of lower activity on the database (by default a snapshot is generated when you complete the New Publication Wizard).

Are tables locked during snapshot generation?

The length of time that locks are held depends on the type of replication used:
  • For merge publications, the Snapshot Agent does not take any locks.
  • For transactional publications, by default the Snapshot Agent takes locks only during the initial phase of snapshot generation.
  • For snapshot publications, the Snapshot Agent takes locks during the entire snapshot generation process.
Note: Because locks prevent other users from updating the tables, the Snapshot Agent should be scheduled to execute during periods of lower activity on the database, especially for snapshot publications.

When is a subscription available; when can the subscription database be used?


A subscription is available after the snapshot has been applied to the subscription database. Even though the subscription database is accessible prior to this, the database should not be used until after the snapshot has been applied. Use Replication Monitor to check the status of snapshot generation and application:

·         The snapshot is generated by the Snapshot Agent. View the status of snapshot generation on the Agents tab for a publication in Replication Monitor.
·         The snapshot is applied by the Distribution Agent or Merge Agent. View the status of snapshot application in the Distribution Agent or Merge Agent page of Replication Monitor.

What happens if the Snapshot Agent has not completed when the Distribution or Merge Agent starts?


It will not cause an error if the Distribution Agent or Merge Agent runs at the same time as the Snapshot Agent. However, you must be aware of the following:

·         If the Distribution Agent or Merge Agent is configured to run continuously, the agent applies the snapshot automatically after the Snapshot Agent completes.

·         If the Distribution Agent or Merge Agent is configured to run on a schedule or on-demand, and there is no snapshot available when the agent runs, the agent will shut down with a message stating that a snapshot is not yet available. You must run the agent again to apply the snapshot after the Snapshot Agent has completed.

Should I script my replication configuration?

Yes. Scripting the replication configuration is a key part of any disaster recovery plan for a replication topology.
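As a sketch, a scripted configuration is simply the sequence of replication stored procedures that builds the topology; in SSMS you can also right-click a publication and choose Generate Scripts. The database, publication, and table names below (SalesDB, SalesPub, dbo.Orders) are hypothetical placeholders for a minimal transactional publication:

```sql
USE SalesDB;
GO
-- Enable the database for transactional publishing
EXEC sp_replicationdboption
    @dbname  = N'SalesDB',
    @optname = N'publish',
    @value   = N'true';
GO
-- Create the publication
EXEC sp_addpublication
    @publication = N'SalesPub',
    @status      = N'active';
GO
-- Add a table as an article
EXEC sp_addarticle
    @publication   = N'SalesPub',
    @article       = N'Orders',
    @source_object = N'Orders',
    @source_owner  = N'dbo';
GO
```

Keeping a script like this in source control lets you rebuild the topology after a disaster instead of reconstructing it by hand.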

What recovery model is required on a replicated database?

Replication functions properly with any of the recovery models: simple, bulk-logged, or full. Merge replication tracks changes by storing information in metadata tables. Transactional replication tracks changes by marking the transaction log, but this marking process is not affected by the recovery model.

Why does replication add a column to replicated tables; will it be removed if the table isn't published?

To track changes, merge replication and transactional replication with queued updating subscriptions must be able to uniquely identify every row in every published table. To accomplish this:

·         Merge replication adds the column rowguid to every table, unless the table already has a column of data type uniqueidentifier with the ROWGUIDCOL property set (in which case this column is used). If the table is dropped from the publication, the rowguid column is removed; if an existing column was used for tracking, the column is not removed.
·         If a transactional publication supports queued updating subscriptions, replication adds the column msrepl_tran_version to every table. If the table is dropped from the publication, the msrepl_tran_version column is not removed.

·         A filter must not include the rowguidcol used by replication to identify rows. By default this is the column added at the time you set up merge replication and is named rowguid.

How do I manage constraints on published tables?

There are a number of issues to consider regarding constraints on published tables:

 Transactional replication requires a primary key constraint on each published table. Merge replication does not require a primary key, but if one is present, it must be replicated. Snapshot replication does not require a primary key. By default, primary key constraints, indexes, and check constraints are replicated to Subscribers.

The NOT FOR REPLICATION option is specified by default for foreign key constraints and check constraints; the constraints are enforced for user operations but not agent operations.
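A hedged sketch of what the NOT FOR REPLICATION option looks like in DDL; the table, column, and constraint names are hypothetical. The constraints are enforced for user operations but skipped when a replication agent applies changes:

```sql
-- Foreign key enforced for users, skipped for replication agents
ALTER TABLE dbo.Orders WITH CHECK
ADD CONSTRAINT FK_Orders_Customers
    FOREIGN KEY (CustomerID)
    REFERENCES dbo.Customers (CustomerID)
    NOT FOR REPLICATION;

-- The same option applies to check constraints
ALTER TABLE dbo.Orders WITH CHECK
ADD CONSTRAINT CK_Orders_Qty
    CHECK NOT FOR REPLICATION (Qty > 0);
```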

How do I manage identity columns?

Replication provides automatic identity range management for replication topologies that include updates at the Subscriber.
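As a sketch, automatic identity range management is requested per article; the publication and table names below are placeholders, and the range sizes are illustrative values you would tune for your workload:

```sql
-- Enable automatic identity range management on a merge article
EXEC sp_addmergearticle
    @publication                    = N'SalesMergePub',
    @article                        = N'Orders',
    @source_object                  = N'Orders',
    @identityrangemanagementoption  = N'auto',
    @pub_identity_range             = 10000, -- range for republishing Subscribers
    @identity_range                 = 1000,  -- range assigned to each Subscriber
    @threshold                      = 80;    -- percent used before a new range is assigned
```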

Can the same objects be published in different publications?

Yes, but with some restrictions.

Can multiple publications use the same distribution database?

Yes. There are no restrictions on the number or types of publications that can use the same distribution database. All publications from a given Publisher must use the same Distributor and distribution database.

If you have multiple publications, you can configure multiple distribution databases at the Distributor to ensure that the data flowing through each distribution database is from a single publication. Use the Distributor Properties dialog box or sp_adddistributiondb  to add a distribution database.
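A minimal sketch of adding a second distribution database; the database name is a hypothetical placeholder, and the statement is run at the Distributor:

```sql
EXEC sp_adddistributiondb
    @database      = N'distribution2',
    @security_mode = 1;  -- Windows authentication
```

Each publication can then be directed at its own distribution database when the Publisher is configured.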

Does replication encrypt the data?

No. Replication does not encrypt data that is stored in the database or transferred over the network.

How do I replicate data over the Internet?

Replicate data over the Internet using:

  •  A Virtual Private Network (VPN).
  • The Web synchronization option for merge replication.

Note:- All types of Microsoft SQL Server replication can replicate data over a VPN, but you should consider Web synchronization if you are using merge replication.

Does replication resume if a connection is dropped?

Yes. Replication processing resumes at the point at which it left off if a connection is dropped. If you are using merge replication over an unreliable network, consider using logical records, which ensures related changes are processed as a unit.

Does replication work over low bandwidth connections? Does it use compression?

Yes, replication does work over low bandwidth connections. For connections over TCP/IP, it uses the compression provided by the protocol but does not provide additional compression. For Web synchronization connections over HTTPS, it uses the compression provided by the protocol and also additional compression of the XML files used to replicate changes.

Are logins and passwords replicated?

No. You could create a DTS package to transfer logins and passwords from a Publisher to one or more Subscribers.

What are schemas and how are they replicated?

Beginning with Microsoft SQL Server 2005, schema has two meanings:
·         The definition of an object, such as a CREATE TABLE statement. By default, replication copies the definitions of all replicated objects to the Subscriber.
·         The namespace within which an object is created: <Database>.<Schema>.<Object>. Schemas are defined using the CREATE SCHEMA statement.
Replication has the following default behavior in the New Publication Wizard with respect to schemas and object ownership:
·         For articles in merge publications with a compatibility level of 90 or higher, snapshot publications, and transactional publications: by default, the object owner at the Subscriber is the same as the owner of the corresponding object at the Publisher. If the schemas that own objects do not exist at the Subscriber, they are created automatically.
·         For articles in merge publications with a compatibility level lower than 90: by default, the owner is left blank and is specified as dbo during the creation of the object on the Subscriber.
·         For articles in Oracle publications: by default, the owner is specified as dbo.
·         For articles in publications that use character mode snapshots (which are used for non-SQL Server Subscribers and SQL Server Compact Subscribers): by default, the owner is left blank. The owner defaults to the owner associated with the account used by the Distribution Agent or Merge Agent to connect to the Subscriber.
The object owner can be changed through the Article Properties - <Article> dialog box and through the following stored procedures: sp_addarticle, sp_addmergearticle, sp_changearticle, and sp_changemergearticle.

How can grants on the subscription database be configured to match grants on the publication database?

By default, replication does not execute GRANT statements on the subscription database. If you want the permissions on the subscription database to match those on the publication database, use one of the following methods:
·         Execute GRANT statements at the subscription database directly.
·         Use a post-snapshot script to execute the statements.
·         Use the stored procedure sp_addscriptexec to execute the statements.
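A sketch of the sp_addscriptexec approach; the publication name and UNC path are hypothetical placeholders, and the script file would contain the GRANT statements you want executed at each Subscriber:

```sql
-- Distribute and run a script of GRANT statements at Subscribers,
-- e.g. grants.sql containing: GRANT SELECT ON dbo.Orders TO ReportingRole;
EXEC sp_addscriptexec
    @publication = N'SalesPub',
    @scriptfile  = N'\\server\share\grants.sql';
```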

What happens to permissions granted in a subscription database if a subscription is reinitialized?

By default, objects at the Subscriber are dropped and recreated when a subscription is reinitialized, which causes all granted permissions for those objects to be dropped. There are two ways to handle this:
·         Reapply the grants after the reinitialization using the techniques described in the previous section.
·         Specify that objects should not be dropped when the subscription is reinitialized. Prior to reinitialization, either:
o    Execute sp_changearticle or sp_changemergearticle. Specify a value of 'pre_creation_cmd' (sp_changearticle) or 'pre_creation_command' (sp_changemergearticle) for the parameter @property and a value of 'none', 'delete' or 'truncate' for the parameter @value.
o    In the Article Properties - <Article> dialog box, in the Destination Object section, set the option Action if name is in use to one of: Keep existing object unchanged; Delete data. If article has a row filter, delete only data that matches the filter.; or Truncate all data in the existing object.
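A sketch of the stored-procedure route for a merge article; the publication and article names are hypothetical placeholders:

```sql
-- Keep the existing table (and its granted permissions) when the
-- subscription is reinitialized, instead of dropping and recreating it
EXEC sp_changemergearticle
    @publication               = N'SalesMergePub',
    @article                   = N'Orders',
    @property                  = N'pre_creation_command',
    @value                     = N'none',
    @force_invalidate_snapshot = 1,
    @force_reinit_subscription = 1;
```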


Why can't I run TRUNCATE TABLE on a published table?

TRUNCATE TABLE is a non-logged operation that does not fire triggers. It is not permitted because replication cannot track the changes caused by the operation: transactional replication tracks changes through the transaction log; merge replication tracks changes through triggers on published tables.

What is the effect of running a bulk insert command on a replicated database?

For transactional replication, bulk inserts are tracked and replicated like other inserts. For merge replication, you must ensure that change tracking metadata is updated properly.

Does replication affect the size of the transaction log?

Merge replication and snapshot replication do not affect transaction log size, but transactional replication can. If a database includes one or more transactional publications, the log is not truncated until all transactions relevant to the publications have been delivered to the distribution database. If the transaction log is growing too large, and the Log Reader Agent is running on a scheduled basis, consider shortening the interval between runs. Or, set it to run in continuous mode. If it is set to run in continuous mode (the default), ensure that it is running.

Additionally, if you have set the option 'sync with backup' on the publication database or distribution database, the transaction log is not truncated until all transactions have been backed up. If the transaction log is growing too large, and you have this option set, consider shortening the interval between transaction log backups.
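The 'sync with backup' option is set (or cleared) with sp_replicationdboption; the database name below is a placeholder:

```sql
-- Require transactions to be backed up before the log is truncated
EXEC sp_replicationdboption
    @dbname  = N'SalesDB',
    @optname = N'sync with backup',
    @value   = N'true';  -- use 'false' to clear the option
```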

How do I rebuild indexes or tables in replicated databases?

There are a variety of mechanisms for rebuilding indexes. They can all be used with no special considerations for replication, with the following exception: primary keys are required on tables in transactional publications, so you cannot drop and recreate primary keys on these tables.

How do I add or change indexes on publication and subscription databases?

Indexes can be added at the Publisher or Subscribers with no special considerations for replication (be aware that indexes can affect performance). CREATE INDEX and ALTER INDEX are not replicated, so if you add or change an index at, for example, the Publisher, you must make the same addition or change at the Subscriber if you want it reflected there.
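As a sketch, the same statement must be issued on each side where the index is wanted; the table and index names are hypothetical:

```sql
-- Run at the Publisher:
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID);

-- CREATE INDEX is not replicated, so run the same statement
-- again at each Subscriber that should have the index.
```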

How do I move or rename files for databases involved in replication?

In versions of SQL Server prior to SQL Server 2005, moving or renaming database files required detaching and reattaching the database. Because a replicated database cannot be detached, replication had to be removed from these databases first. Beginning with SQL Server 2005, you can move or rename files without detaching and re-attaching the database, with no effect on replication.

How do I drop a table that is being replicated?

First drop the article from the publication using sp_droparticle, sp_dropmergearticle, or the Publication Properties - <Publication> dialog box, and then drop it from the database using DROP <Object>. You cannot drop articles from snapshot or transactional publications after subscriptions have been added; you must drop the subscriptions first.
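The full sequence for a transactional publication with subscriptions might look like the following sketch; the publication, article, and table names are hypothetical placeholders:

```sql
-- 1. Drop the subscriptions to the article
EXEC sp_dropsubscription
    @publication = N'SalesPub',
    @article     = N'Orders',
    @subscriber  = N'all';

-- 2. Drop the article from the publication
EXEC sp_droparticle
    @publication               = N'SalesPub',
    @article                   = N'Orders',
    @force_invalidate_snapshot = 1;

-- 3. Now the table itself can be dropped
DROP TABLE dbo.Orders;
```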

How do I add or drop columns on a published table?

SQL Server supports a wide variety of schema changes on published objects, including adding and dropping columns. For example, execute ALTER TABLE … DROP COLUMN at the Publisher, and the statement is replicated to Subscribers and then executed to drop the column. Subscribers running versions of SQL Server prior to SQL Server 2005 support adding and dropping columns through the stored procedures sp_repladdcolumn and sp_repldropcolumn.
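For example, DDL issued at the Publisher is propagated to Subscribers by default; the table and column names below are placeholders:

```sql
-- Both statements, run at the Publisher, are replicated to Subscribers
ALTER TABLE dbo.Orders ADD Comments nvarchar(255) NULL;
ALTER TABLE dbo.Orders DROP COLUMN Comments;
```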


How do I add a table to an existing publication?

It is not necessary to stop activity on the publication or subscription databases in order to add a table (or another object). Add a table to a publication through the Publication Properties - <Publication> dialog box or the stored procedures sp_addarticle and sp_addmergearticle.
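A sketch for a transactional publication; the names are hypothetical placeholders. After adding the article, start the Snapshot Agent so the new table can be initialized at Subscribers:

```sql
-- Add a new table to an existing publication
EXEC sp_addarticle
    @publication   = N'SalesPub',
    @article       = N'Customers',
    @source_object = N'Customers',
    @source_owner  = N'dbo';

-- Generate a snapshot so the new article reaches Subscribers
EXEC sp_startpublication_snapshot @publication = N'SalesPub';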

How do I remove a table from a publication?

Remove a table from the publication using sp_droparticle, sp_dropmergearticle, or the Publication Properties - <Publication> dialog box. You cannot drop articles from snapshot or transactional publications after subscriptions have been added; you must drop the subscriptions first. 

Overview of DBMS Three Level Architecture - DotNet Brother

An early proposal for a standard terminology and general architecture for database systems was produced in 1971 by the DBTG (Data Base Task Group) appointed by the Conference on Data Systems and Languages (CODASYL). The DBTG recognized the need for a two-level approach, with a system view called the schema and user views called subschemas. The American National Standards Institute (ANSI-SPARC) produced its own terminology and architecture in 1975; ANSI-SPARC recognized the need for a three-level approach with a system catalog.
There are following three levels or layers of DBMS architecture:
1. External Level
2. Conceptual Level
3. Internal Level
1. External Level: - The external level is described by a schema, i.e. it consists of the definitions of logical records and relationships in the external view. It also contains the method of deriving the objects in the external view from the objects in the conceptual view.
2. Conceptual Level: - The conceptual level represents the entire database. The conceptual schema describes the records and relationships included in the conceptual view. It also contains the method of deriving the objects in the conceptual view from the objects in the internal view.
3. Internal Level: - The internal level indicates how the data will be stored and describes the data structures and access methods to be used by the database. It contains the definition of the stored records, the method of representing the data fields, and the access aids used.
A mapping between the external and conceptual views gives the correspondence among the records and relationships of the conceptual and external views. The external view is an abstraction of the conceptual view, which in turn is an abstraction of the internal view. It describes the contents of the database as perceived by the user or application program of that view. A mapping between the conceptual and internal views gives the correspondence between conceptual records and the stored records of the physical database.

14 August 2014

SQL Server Integration Services - SSIS DotNet Brother

SQL Server Integration Services
This blog describes the concepts of SQL Server Integration Services (SSIS). SSIS is a tool that we use to perform ETL (extract, transform, and load) operations on data. Here we explain the features of SQL Server Integration Services and the advantages and disadvantages of SSIS.
SSIS is also used to load data based on need, to perform different transformations on the data such as calculations (Sum, Average, etc.), and to define a workflow of how the process should flow and perform tasks as part of day-to-day activity.
Microsoft Integration Services is a platform for building enterprise-level data integration and data transformations solutions. You use Integration Services to solve complex business problems by copying or downloading files, updating data warehouses, mining data, and so on.
Note: SQL Server Integration Services is not available in SQL Server Express editions; it requires a higher edition.
Features of SSIS
The SSIS Import/Export Wizard lets the user create packages that move data from a single data source to a destination with no transformations. The Wizard can quickly move data from a variety of source types to a variety of destination types, including text files and other SQL Server instances.
Developers tasked with creating or maintaining SSIS packages use a visual development tool based on Microsoft Visual Studio called the SQL Server Business Intelligence Development Studio (BIDS). It allows users to edit SSIS packages using a drag-and-drop user interface. A scripting environment in which to write programming code is also available in the tool. A package holds a variety of elements that define a workflow. Upon package execution, the tool provides color-coded, real-time monitoring.


Connections

A connection includes the information necessary to connect to a particular data source. Tasks can reference the connection by its name, allowing the details of the connection to be changed or configured at run time.


Tasks

A task is an atomic work unit that performs some action. There are a couple of dozen tasks that ship in the box, ranging from the file system task (which can copy or move files) to the data transformation task. The data transformation task actually copies data; it implements the ETL features of the product.

Precedence constraints

Tasks are linked by precedence constraints. The precedence constraint preceding a particular task must be met before that task executes. The run time supports executing tasks in parallel if their precedence constraints so allow. Constraints may otherwise allow different paths of execution depending on the success or failure of other tasks. Together with the tasks, precedence constraints comprise the workflow of the package.

Event handlers

A workflow can be designed for a number of events in the different scopes where they might occur. In this way, tasks may be executed in response to happenings within the package, such as cleaning up after errors.


Variables

Tasks may reference variables to store results, make decisions, or affect their configuration.

Parameters (SQL Server 2012 Integration Services)

Parameters allow you to assign values to properties within packages at the time of package execution. You can use project parameters and package parameters. In general, if you are deploying a package using the package deployment model, you should use configurations instead of parameters.
Data Transformation Services (also known as DTS) was the predecessor to the SSIS system. It is a set of objects forming an ETL tool (a tool that extracts, transforms, and loads information into a database for warehousing) used to extract, transform, and load this information to and/or from a database.
DTS was an original component of the Microsoft SQL Server 2000, and at its inception, was always used with the SQL Server databases. Even though it was an integral part of the server, DTS was also easily used independently from the Microsoft server, in conjunction with other databases. It is capable of transforming and loading data from heterogeneous sources, using OLE DB, ODBC, or files specified as text only, into any database that supports them.

Why use SSIS instead of DTS?

It is likely that Microsoft will discontinue support for DTS/SQL Server 2000 sooner or later, so it is better to migrate your DTS packages to SSIS.

Separation of the Data Flow Engine from the Control Flow Engine (the SSIS Runtime Engine), which improves performance significantly.

DTS uses an ELTL strategy (it first has to load data from files into the database before making transformations), while SSIS uses an ETL strategy.

Package configuration: using expressions and configurations, it is much easier to make packages dynamic.

Advantages of SSIS

Easier to maintain

Highly integrated with SQL Server and visual studio

Package configuration: much better functions for dynamically setting or retrieving configuration settings for packages compared to other ETL tools.

Service Oriented Architecture. Obviously being in Visual Studio and having access to .Net code makes it compatible with web services.

You can use the SQL Server Destination instead of OLE DB which allows you to load data into SQL faster.

You remove the network as a bottleneck for insertion of data by SSIS into SQL Server.

Disadvantages of SSIS

If you have a number of SSIS packages that need to run in parallel and/or need to use a lot of memory (especially ones that have a lot of complex transforms), you can have memory allocation conflicts between SQL Server and SSIS. SQL Server will typically win this battle, which means SSIS pages to file. Once SSIS pages to file, you will have suboptimal processing by SSIS - i.e. don't do it!

The same can be said for CPU – you need to ensure there are enough processors allocated for SSIS and SQL separately.  If the two compete, SQL will typically win which will mean SSIS will run much more slowly.

OOPS in javascript - DotNet Brother

JavaScript uses prototype-based programming, a style of object-oriented programming in which classes are not present. It can support OOP because it supports inheritance through prototyping, as well as properties and methods. Object-oriented programming in JS is known as prototype-based programming.

a.       Creating a class in JavaScript
The following syntax is used for declaring a class in JS:
function Emp() {
            alert('Emp instantiated');
}

Here Emp acts as a class in JS.
The body of Emp acts as the constructor and is called as soon as we create an object of the class.
b.       Creating objects of the Emp class
Using the following syntax we can create an object of the Emp class:
var Emp1 = new Emp();
As soon as we create an object of the Emp class, its constructor will be called.

c.       Running the code

An alert will be shown on page load of the web form.

d.      Properties of a class
We can define properties of a class in the following way:

function Emp(firstname) {
               alert('Emp instantiated');
               this.firstname = firstname;
}
i)                    Passing a value to properties
                                 var Emp1 = new Emp("Devesh is Emp of GENPACT INDIA");

ii)                   Running the code

When we run the code, we get an alert of the string that we pass to the Emp class:
  var Emp1 = new Emp("Devesh is Emp of GENPACT INDIA");
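Methods are normally shared between objects through the constructor's prototype. A minimal sketch (the greet method and names are illustrative, and console.log is used instead of alert so the code also runs outside a browser):

```javascript
// A constructor function acting as a class
function Emp(firstname) {
    this.firstname = firstname;
}

// Methods defined on the prototype are shared by all instances
Emp.prototype.greet = function () {
    return 'Hello, ' + this.firstname;
};

var emp1 = new Emp('Devesh');
console.log(emp1.greet()); // prints "Hello, Devesh"
```

Every object created with new Emp() sees the same greet function through its prototype chain, which is how JS provides class-like behavior without classes.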