Category Archives: sql

Visual Studio Database Project error SQL03006: User: [xxx] has an unresolved reference to Login [xxx].

When I use the new database project wizard to import my existing database schema, the project won’t “build”. I get the error:

SQL03006: User: [xxx] has an unresolved reference to Login [xxx].

The Login is actually defined in the master database of the server install. The CREATE USER statement needs to point at an existing Login, otherwise it errors. The Database Project is not aware of Logins at the server level. You can either create a Server Project to handle the Logins, or turn off the checking of Security objects in your Database Project. See the answer by Demetri M (quoted below) for more details:

What I did for the step “import database objects and settings” was as follows, which ultimately led to the fix:

Right click on the DB Server project and select “Import objects and settings…”.  When it asks you for the server and database, select the same server as your Database Project and then select the “master” database on that server.  After that imports, you can then link the Database Project to the Database Server Project.  Finally, do a clean and rebuild of the solution and the User error should be cleared.

Anyway, that is what worked for me.  I hope this helps.

@@TRANCOUNT

PRINT @@TRANCOUNT
-- The BEGIN TRAN statement will increment the
-- transaction count by 1.
BEGIN TRAN
PRINT @@TRANCOUNT
BEGIN TRAN
PRINT @@TRANCOUNT
-- The COMMIT statement will decrement the transaction count by 1.
COMMIT
PRINT @@TRANCOUNT
COMMIT
PRINT @@TRANCOUNT
-- Results:
-- 0
-- 1
-- 2
-- 1
-- 0
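The nesting behavior above can be modeled outside of T-SQL. Here is a minimal Python sketch (the `TranCounter` class is a hypothetical illustration, not a real API) that mirrors how @@TRANCOUNT changes as transactions nest:

```python
# Minimal model of how @@TRANCOUNT behaves with nested transactions.
class TranCounter:
    """Tracks transaction nesting depth, like @@TRANCOUNT in T-SQL."""
    def __init__(self):
        self.count = 0

    def begin_tran(self):   # BEGIN TRAN increments the count by 1
        self.count += 1

    def commit(self):       # COMMIT decrements the count by 1
        self.count -= 1

tc = TranCounter()
trace = [tc.count]          # PRINT @@TRANCOUNT -> 0
tc.begin_tran()
trace.append(tc.count)      # -> 1
tc.begin_tran()
trace.append(tc.count)      # -> 2
tc.commit()
trace.append(tc.count)      # -> 1
tc.commit()
trace.append(tc.count)      # -> 0
print(trace)                # [0, 1, 2, 1, 0]
```

Only a COMMIT issued when the count is back at 1 actually commits the outer transaction; the inner COMMITs just decrement the counter.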

How to select the first/least/max row per group in SQL

Thu, Dec 7, 2006 in Databases

Here are some common SQL problems, all of which have related solutions: how do I find the most recent log entry for each program? How do I find the most popular item from each category? How do I find the top score for each player? In general, these types of “select the extreme from each group” queries can be solved with the same techniques. I’ll explain how to do that in this article, including the harder problem of selecting the top N entries, not just the top 1.

This topic is related to numbering rows, which I just wrote about (see my articles about MySQL-specific and generic techniques to assign a number to each row in a group). Therefore I’ll use nearly the same table and data as I used in those articles, with the addition of a price column:

+--------+------------+-------+
| type   | variety    | price |
+--------+------------+-------+
| apple  | gala       |  2.79 |
| apple  | fuji       |  0.24 |
| apple  | limbertwig |  2.87 |
| orange | valencia   |  3.59 |
| orange | navel      |  9.36 |
| pear   | bradford   |  6.05 |
| pear   | bartlett   |  2.14 |
| cherry | bing       |  2.55 |
| cherry | chelan     |  6.33 |
+--------+------------+-------+

Selecting the one maximum row from each group

Let’s say I want to select the most recent log entry for each program, or the most recent changes in an audit table, or something of the sort. This question comes up over and over on IRC channels and mailing lists. I’ll re-phrase the question in terms of fruits. I want to select the cheapest fruit from each type. Here’s the desired result:

+--------+----------+-------+
| type   | variety  | price |
+--------+----------+-------+
| apple  | fuji     |  0.24 |
| orange | valencia |  3.59 |
| pear   | bartlett |  2.14 |
| cherry | bing     |  2.55 |
+--------+----------+-------+

There are a few common solutions to this problem. All involve two steps: finding the desired value of price, and then selecting the rest of the row based on that.

One common solution is a so-called self-join. Step one is to group the fruits by type (apple, cherry etc) and choose the minimum price:

select type, min(price) as minprice
from fruits
group by type;

+--------+----------+
| type   | minprice |
+--------+----------+
| apple  |     0.24 |
| cherry |     2.55 |
| orange |     3.59 |
| pear   |     2.14 |
+--------+----------+

Step two is to select the rest of the row by joining these results back to the same table. Since the first query is grouped, it needs to be put into a subquery so it can be joined against the non-grouped table:

select f.type, f.variety, f.price
from (
  select type, min(price) as minprice
  from fruits group by type
) as x
inner join fruits as f on f.type = x.type and f.price = x.minprice;

+--------+----------+-------+
| type   | variety  | price |
+--------+----------+-------+
| apple  | fuji     |  0.24 |
| cherry | bing     |  2.55 |
| orange | valencia |  3.59 |
| pear   | bartlett |  2.14 |
+--------+----------+-------+

Another common way to do this is with a correlated subquery. This can be much less efficient, depending on how good your system’s query optimizer is. You might find it clearer, though.

select type, variety, price
from fruits
where price = (select min(price) from fruits as f where f.type = fruits.type);

+--------+----------+-------+
| type   | variety  | price |
+--------+----------+-------+
| apple  | fuji     |  0.24 |
| orange | valencia |  3.59 |
| pear   | bartlett |  2.14 |
| cherry | bing     |  2.55 |
+--------+----------+-------+

Both queries are logically equivalent, though they may not perform the same.
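You can check that equivalence against an in-memory SQLite database from Python; this sketch loads the sample fruits data and confirms the self-join and the correlated subquery return the same rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE fruits (type TEXT, variety TEXT, price REAL);
INSERT INTO fruits VALUES
  ('apple','gala',2.79), ('apple','fuji',0.24), ('apple','limbertwig',2.87),
  ('orange','valencia',3.59), ('orange','navel',9.36),
  ('pear','bradford',6.05), ('pear','bartlett',2.14),
  ('cherry','bing',2.55), ('cherry','chelan',6.33);
""")

# Self-join against the grouped minimum prices.
self_join = conn.execute("""
    SELECT f.type, f.variety, f.price
    FROM (SELECT type, MIN(price) AS minprice
          FROM fruits GROUP BY type) AS x
    JOIN fruits AS f ON f.type = x.type AND f.price = x.minprice
    ORDER BY f.type
""").fetchall()

# Correlated subquery picking the minimum price per type.
correlated = conn.execute("""
    SELECT type, variety, price
    FROM fruits
    WHERE price = (SELECT MIN(price) FROM fruits AS f
                   WHERE f.type = fruits.type)
    ORDER BY type
""").fetchall()

assert self_join == correlated  # the two forms agree
print(self_join)
```

Note that if two varieties of the same type tie for the minimum price, both forms return both rows; the sample data has no ties, so each type yields exactly one row.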

Select the top N rows from each group

This is a slightly harder problem to solve. Finding a single row from each group is easy with SQL’s aggregate functions (MIN(), MAX(), and so on). Finding the first several from each group is not possible with that method because aggregate functions only return a single value. Still, it’s possible to do.

Let’s say I want to select the two cheapest fruits from each type. Here’s a first try:

select type, variety, price
from fruits
where price = (select min(price) from fruits as f where f.type = fruits.type)
   or price = (select min(price) from fruits as f where f.type = fruits.type
               and price > (select min(price) from fruits as f2 where f2.type = fruits.type));

+--------+----------+-------+
| type   | variety  | price |
+--------+----------+-------+
| apple  | gala     |  2.79 |
| apple  | fuji     |  0.24 |
| orange | valencia |  3.59 |
| orange | navel    |  9.36 |
| pear   | bradford |  6.05 |
| pear   | bartlett |  2.14 |
| cherry | bing     |  2.55 |
| cherry | chelan   |  6.33 |
+--------+----------+-------+

Yuck! That can be written as a self-join, but it’s just as bad (I leave it as an exercise for the reader). This gets worse as you go to higher numbers (top 3, top 4…). There are other ways to phrase the statement, but they all boil down to the same thing, and they’re all pretty unwieldy and inefficient.

There’s a better way: select each variety that costs no more than the second-cheapest variety of its type.

select type, variety, price
from fruits
where (
  select count(*) from fruits as f
  where f.type = fruits.type and f.price <= fruits.price
) <= 2;

This is elegant, and lets you vary N without rewriting your query (a very good thing!), but it’s functionally the same as the previous query. Both are essentially a quadratic algorithm relative to the number of varieties in each type. And again, some query optimizers may not do well with this and make it quadratic with respect to the number of rows in the table overall (especially if no useful index is defined), and the server might get clobbered. Are there better ways? Can it be done with one pass through the data, instead of the many passes required by a correlated subquery? You know it can, or I wouldn’t be writing this, now would I?
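The counting trick runs unchanged on SQLite. This Python sketch reuses the fruits sample data and varies N with a bound parameter instead of rewriting the query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE fruits (type TEXT, variety TEXT, price REAL);
INSERT INTO fruits VALUES
  ('apple','gala',2.79), ('apple','fuji',0.24), ('apple','limbertwig',2.87),
  ('orange','valencia',3.59), ('orange','navel',9.36),
  ('pear','bradford',6.05), ('pear','bartlett',2.14),
  ('cherry','bing',2.55), ('cherry','chelan',6.33);
""")

# "Top N per group": keep a row when at most N rows of the same
# type have a price less than or equal to its own.
top_n_sql = """
    SELECT type, variety, price
    FROM fruits
    WHERE (SELECT COUNT(*) FROM fruits AS f
           WHERE f.type = fruits.type AND f.price <= fruits.price) <= ?
    ORDER BY type, price
"""
top2 = conn.execute(top_n_sql, (2,)).fetchall()
print(len(top2))  # 8: the two cheapest varieties of each of the four types

top1 = conn.execute(top_n_sql, (1,)).fetchall()
print(top1[0])    # ('apple', 'fuji', 0.24)
```

With N = 1 the query collapses to the min-per-group problem from the previous section, which is a nice sanity check.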

SQL GROUP BY techniques

One aspect of the versatile SELECT statement that seems to confuse many people is the GROUP BY clause.  It is very important to group your rows in the proper place.  Always push GROUP BY aggregations as far into your nested SELECT statements as possible; if you find yourself grouping by long lists of columns that are not part of primary keys, you probably have not structured your query correctly.

Here’s a classic example that returns total sales per customer, in addition to returning the customer’s name and address:

SELECT
C.CustomerID, C.CustomerName,
C.CustomerType, C.Address1, C.City,
C.State, SUM(S.Sales) as TotalSales
FROM
Customers C
INNER JOIN Sales S
ON C.CustomerID = S.CustomerID
GROUP BY
C.CustomerID, C.CustomerName,
C.CustomerType, C.Address1, C.City, C.State

I can’t say how many times I’ve seen SELECTs written this way, and it is simply wrong. You should only be grouping on CustomerID, not on all those other columns.  Push the grouping down a level, into a derived table:

SELECT
C.CustomerID, C.CustomerName,
C.CustomerType, C.Address1, C.City,
C.State, S.TotalSales
FROM
Customers C
INNER JOIN
(SELECT
CustomerID, SUM(Sales) as TotalSales
FROM
Sales
GROUP BY
CustomerID) S
ON
C.CustomerID = S.CustomerID

Hopefully, you will agree that this is a much cleaner SELECT statement now; it is more efficient, and it logically makes more sense when you look at it.
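A quick way to convince yourself the two forms agree is to run both against toy data. This Python/SQLite sketch (table and column names taken from the example above, sample rows made up) checks that the derived-table version returns the same totals:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Customers (CustomerID INTEGER PRIMARY KEY, CustomerName TEXT, City TEXT);
CREATE TABLE Sales (CustomerID INTEGER, Sales REAL);
INSERT INTO Customers VALUES (1,'Acme','Oslo'), (2,'Globex','Bergen');
INSERT INTO Sales VALUES (1,100), (1,250), (2,75);
""")

# Grouping on every selected column (the common but sloppy form).
wide = conn.execute("""
    SELECT C.CustomerID, C.CustomerName, C.City, SUM(S.Sales) AS TotalSales
    FROM Customers C JOIN Sales S ON C.CustomerID = S.CustomerID
    GROUP BY C.CustomerID, C.CustomerName, C.City
    ORDER BY C.CustomerID
""").fetchall()

# Grouping pushed down into a derived table keyed on CustomerID only.
derived = conn.execute("""
    SELECT C.CustomerID, C.CustomerName, C.City, S.TotalSales
    FROM Customers C
    JOIN (SELECT CustomerID, SUM(Sales) AS TotalSales
          FROM Sales GROUP BY CustomerID) S
    ON C.CustomerID = S.CustomerID
    ORDER BY C.CustomerID
""").fetchall()

assert wide == derived
print(derived)  # [(1, 'Acme', 'Oslo', 350.0), (2, 'Globex', 'Bergen', 75.0)]
```

The derived table also keeps the grouped row set small before the join, which is where the efficiency gain comes from on larger tables.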

One more common mistake is that people simply mimic the expressions in their SELECT list in the GROUP BY clause, without thinking logically about what grouping is really necessary.  For example:

SELECT LastName + ', ' + FirstName, ... etc ...
FROM Names
GROUP BY LastName + ', ' + FirstName

Again, that is logically wrong and also less efficient.  You should not be grouping by the expression itself; you should be grouping by what is needed to evaluate that expression. The correct grouping is:

GROUP BY LastName, FirstName

Too many people just keep stuffing column names and expressions into the GROUP BY clause until the errors go away without stepping back and thinking logically about how grouping works.  Take a minute to really consider what you need to return and how it should be grouped, and try using derived tables more frequently when writing aggregate queries to help keep them structured and efficient.
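The point is easy to demonstrate: grouping by the underlying columns yields exactly the same groups as grouping by the concatenated expression. A small sketch (sample names are made up; SQLite uses `||` for string concatenation):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Names (LastName TEXT, FirstName TEXT);
INSERT INTO Names VALUES ('Smith','Ann'), ('Smith','Ann'), ('Doe','Jo');
""")

# Grouping by the display expression itself.
by_expr = conn.execute("""
    SELECT LastName || ', ' || FirstName, COUNT(*)
    FROM Names GROUP BY LastName || ', ' || FirstName ORDER BY 1
""").fetchall()

# Grouping by the columns the expression is built from.
by_cols = conn.execute("""
    SELECT LastName || ', ' || FirstName, COUNT(*)
    FROM Names GROUP BY LastName, FirstName ORDER BY 1
""").fetchall()

assert by_expr == by_cols  # same groups either way
print(by_cols)             # [('Doe, Jo', 1), ('Smith, Ann', 2)]
```

Grouping by the columns is the cleaner form, and it gives the optimizer a chance to use an index on (LastName, FirstName) that the expression form may miss.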

SQL RANK AND DENSE RANK

Create table ExamResult(name varchar(50),Subject varchar(20),Marks int)

insert into ExamResult values('Adam','Maths',70)
insert into ExamResult values ('Adam','Science',80)
insert into ExamResult values ('Adam','Social',60)

insert into ExamResult values('Rak','Maths',60)
insert into ExamResult values ('Rak','Science',50)
insert into ExamResult values ('Rak','Social',70)

insert into ExamResult values('Sam','Maths',90)
insert into ExamResult values ('Sam','Science',90)
insert into ExamResult values ('Sam','Social',80)

RANK()

Returns the rank of each row within the partition of a result set.

select Name, Subject, Marks,
RANK() over (partition by name order by Marks desc) as Rank
from ExamResult
order by name, subject

Rank() Example

DENSE_RANK()

This is the same as the RANK() function; the only difference is that it returns ranks without gaps.

select Name, Subject, Marks,
DENSE_RANK() over (partition by name order by Marks desc) as Rank
from ExamResult
order by name

DENSE_RANK() Example

In the RANK() result set screenshot, notice that there is a gap at rank 2 for the name Sam; that gap is removed in DENSE_RANK().
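The same gap behavior can be reproduced with SQLite window functions (available in SQLite 3.25 and later, which ships with current Python builds); this sketch reuses the ExamResult data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # window functions need SQLite 3.25+
conn.executescript("""
CREATE TABLE ExamResult (name TEXT, Subject TEXT, Marks INT);
INSERT INTO ExamResult VALUES
  ('Adam','Maths',70), ('Adam','Science',80), ('Adam','Social',60),
  ('Rak','Maths',60), ('Rak','Science',50), ('Rak','Social',70),
  ('Sam','Maths',90), ('Sam','Science',90), ('Sam','Social',80);
""")

rows = conn.execute("""
    SELECT name, Subject, Marks,
           RANK()       OVER (PARTITION BY name ORDER BY Marks DESC) AS rnk,
           DENSE_RANK() OVER (PARTITION BY name ORDER BY Marks DESC) AS drnk
    FROM ExamResult
    ORDER BY name, Marks DESC
""").fetchall()

# Sam has two 90s, so his (Marks, RANK, DENSE_RANK) triples are
# (90, 1, 1), (90, 1, 1) and (80, 3, 2): RANK skips rank 2, DENSE_RANK does not.
sam = [r for r in rows if r[0] == 'Sam']
for r in sam:
    print(r)
```

For Adam and Rak, whose marks are all distinct, RANK and DENSE_RANK agree (1, 2, 3).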

SQL QUERIES

To get all participants that have more than one record in the table, where at least one of those records has IsCurrent = 0 and IsActive = 1:

SELECT  ParticipantId 
FROM    Contact c
WHERE   EXISTS
        (   SELECT  1
            FROM    Contact c2
            WHERE   c2.ParticipantId = c.ParticipantId
            AND     c2.ContactTypeId = 1
            GROUP BY c2.ParticipantId
            HAVING COUNT(*) > 1
            AND COUNT(CASE WHEN c2.IsCurrent = 0 AND c2.IsActive = 1 THEN 1 END) >= 1
        );
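Here is a Python/SQLite sketch of the same query shape (the sample rows are made up; note the outer table must be aliased `c` so the subquery can correlate against it):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Contact (ParticipantId INT, ContactTypeId INT, IsCurrent INT, IsActive INT);
-- Participant 1: two type-1 rows, one with IsCurrent=0 AND IsActive=1 -> qualifies
INSERT INTO Contact VALUES (1,1,1,1), (1,1,0,1);
-- Participant 2: two type-1 rows, but none with IsCurrent=0 AND IsActive=1
INSERT INTO Contact VALUES (2,1,1,1), (2,1,1,0);
-- Participant 3: only one type-1 row
INSERT INTO Contact VALUES (3,1,0,1);
""")

rows = conn.execute("""
    SELECT ParticipantId
    FROM Contact c
    WHERE EXISTS (
        SELECT 1 FROM Contact c2
        WHERE c2.ParticipantId = c.ParticipantId
          AND c2.ContactTypeId = 1
        GROUP BY c2.ParticipantId
        HAVING COUNT(*) > 1
           AND COUNT(CASE WHEN c2.IsCurrent = 0 AND c2.IsActive = 1 THEN 1 END) >= 1
    )
""").fetchall()

# The outer query returns one row per matching Contact row,
# so deduplicate to get the participant ids.
print(sorted(set(r[0] for r in rows)))  # [1]
```

The COUNT(CASE WHEN … THEN 1 END) idiom works because COUNT ignores the NULLs produced when the CASE has no ELSE branch.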

Get records from the Northwind database (Products table) grouped by CategoryID where the count is greater than 11, with the min and max unit price per group:

WITH CTE_Pages2 AS
(
  SELECT ROW_NUMBER() OVER (PARTITION BY CategoryID ORDER BY UnitPrice DESC) AS 'Row Number',
         ProductName, UnitPrice, CategoryID
  FROM Products
)
SELECT A.minn, B.maxx, A.CategoryID
FROM (SELECT CategoryID, MIN(UnitPrice) AS minn
      FROM CTE_Pages2 GROUP BY CategoryID HAVING COUNT(CategoryID) > 11) A
JOIN (SELECT CategoryID, MAX(UnitPrice) AS maxx
      FROM CTE_Pages2 GROUP BY CategoryID HAVING COUNT(CategoryID) > 11) B
  ON A.CategoryID = B.CategoryID

using intersect

SELECT  ParticipantId 
    FROM Contact
   Group by ParticipantId 
  Having Count(*) > 1 
Intersect
  SELECT  ParticipantId 
    FROM Contact
   WHERE IsCurrent = 0 
     AND IsActive = 1 
     AND ContactTypeId = 1

The SQL INTERSECT clause/operator is used to combine two SELECT statements, but returns only the rows from the first SELECT statement that are identical to a row in the second SELECT statement. In other words, INTERSECT returns only the rows common to both SELECT statements.
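SQLite supports INTERSECT too, so the query above can be tried from Python (the sample rows are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Contact (ParticipantId INT, ContactTypeId INT, IsCurrent INT, IsActive INT);
INSERT INTO Contact VALUES (1,1,1,1), (1,1,0,1),   -- participant 1: >1 row, has a 0/1 row
                           (2,1,1,1), (2,1,1,0),   -- participant 2: >1 row, no 0/1 row
                           (3,1,0,1);              -- participant 3: single row
""")

rows = conn.execute("""
    SELECT ParticipantId FROM Contact
    GROUP BY ParticipantId HAVING COUNT(*) > 1
    INTERSECT
    SELECT ParticipantId FROM Contact
    WHERE IsCurrent = 0 AND IsActive = 1 AND ContactTypeId = 1
""").fetchall()

print(rows)  # [(1,)] - only participant 1 appears in both result sets
```

The first SELECT yields participants 1 and 2 (more than one row each); the second yields 1 and 3 (have a qualifying row); the intersection is just participant 1. Note that INTERSECT also removes duplicate rows, unlike the EXISTS form above.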

Do link tables need a meaningless primary key field?

I have found that many people set up the primary keys for join tables in a manner that I find undesirable. This leaves open the possibility of some sneaky bugs in their programs.

I’ll start by covering what I mean by a join table (as opposed to what I will call a data table).

Most basic data is stored in data tables. For example a “users” table is a data table, as is a “groups” table. Each row in a “users” table represents one user, as each row in a “groups” table would represent a group.

It could be, however, that each user could be in more than one group and each group could have more than one user as a member. In order to keep the database normalized, another table should represent these relationships (“users2groups”).

I wouldn’t classify this as a data table because each row in this table represents a relationship between two things, rather than a thing itself. So, the “users2groups” table is a join table.

I have frequently seen a structure like this for join tables:

users2groups:
user2group_id (primary key)
user_id
group_id

The thinking here, I imagine, is that we know that each table needs a primary key so we add one to the table. Herein lies the problem. The value of “user2group_id” represents a relationship, not data.

You won’t ever look up a row using the “user2group_id” because the row has no value in isolation. Instead, you will look up rows by the user_id or the group_id.

The real problem, however, is that this allows duplication. See this sample data:

image

Using a user2group_id primary key allows this duplication, which has no meaning.

At first this may seem like the same problem as two rows of matching data in a data table, but it isn’t. First of all, a duplicate user row means something – that two users have identical data. More than that, it is relatively easy to detect.

A duplicate row in a join table has no meaning, but it can cause problems that are difficult to debug.

The solution to this problem is simple, drop the extraneous “user2group_id” and make your primary key a compound primary key consisting of both “user_id” and “group_id”.

users2groups:
user_id (primary key)
group_id (primary key)

This will ensure that you cannot have one relationship represented by more than one row.

You should even be able to make this change without causing any problems for your existing application. Unless, of course, you already have a relationship duplicated in the table. 

Next time you make a join table, I suggest using a compound primary key. The structure is simpler and it could help to eliminate some bugs.
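In SQLite, the compound key looks like this; a second insert of the same relationship is now rejected at the database level (a minimal sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users2groups (
        user_id  INTEGER NOT NULL,
        group_id INTEGER NOT NULL,
        PRIMARY KEY (user_id, group_id)  -- compound primary key
    )
""")
conn.execute("INSERT INTO users2groups VALUES (1, 10)")

# A second row for the same (user, group) pair is rejected.
try:
    conn.execute("INSERT INTO users2groups VALUES (1, 10)")
    duplicated = True
except sqlite3.IntegrityError:
    duplicated = False

print(duplicated)  # False - the duplicate relationship cannot be stored
```

The database now enforces the rule for every application that writes to the table, instead of relying on each application to check before inserting.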

Entity Framework Gets Code First Model Generation Through Reverse Engineering

  • Microsoft has released a minor version update of its popular Object Relational Mapping (ORM) tool with numerous bug fixes and new features, including the ability to reverse engineer an existing database to create a Code First model.

    Entity Framework (EF) 6.1 is the first “.x” update since Microsoft open sourced the tool at version 6 and moved it to CodePlex. It features tooling consolidation that “extends the ADO.NET Entity Data Model wizard to support creating Code First models, including reverse engineering from an existing database,” Microsoft’s Rowan Miller explained in an ADO.NET Blog post.

    Miller said this new functionality was available in beta quality in the EF Power Tools package, used by the engineering team to preview features under consideration for inclusion in a main release.

    Developers can now right-click on a C# project to get a context menu item to “Reverse Engineer Code First,” according to Data Developer Center documentation for the EF Power Tools. This command generates a derived DbContext class, “plain-old CLR object” (POCO) classes and Code First mapping classes from an existing database.

    Another context menu item is available to “Customize Reverse Engineer Templates,” which lets developers customize code generation via reverse engineer T4 templates added to projects.

    According to project meeting notes that Microsoft developers now put online, the Code First models that can now be created with the wizard will lack certain features, such as the ability to split tables and entities, and support for complex types and enums, among others.

    Microsoft has posted a Code First to an Existing Database video and walk-through where you can learn more about this new feature.

    EF6.1 also includes a new IndexAttribute functionality that lets developers specifically identify database table indexes with an [Index] attribute attached to properties in Code First models.

    Other enhancements to Entity Framework in version 6.1 listed by Miller include a CommitFailureHandler, a public mapping API, performance improvements and many others.

    Microsoft engineers such as Diego Vega and Pawel Kadluczka have been busy this week helping developers who reported problems with the new version update in the comments section of Miller’s blog.

    For more support, Microsoft recommends developers post questions on the StackOverflow site, where six questions had been posted as of today.

    For future releases, the EF team’s roadmap indicates it’s considering “EF Everywhere,” described as “a lighter-weight version of EF that can be used across a wide variety of platforms (including Windows Store and Windows Phone).”

    Original article: http://visualstudiomagazine.com/blogs/data-driver/2014/03/entity-frameworkupdate.aspx

SQLite Toolbox 4.0 – Visual Guide of Features


After more than 300,000 downloads, version 4.0 of my SQL Server Compact & SQLite Toolbox extension for Visual Studio “14”, 2013, 2012 and 2010 is now available for download, and can be installed from the Tools/Extensions and Updates dialog in Visual Studio. This blog post is a visual guide to the new features included in this release, many of them suggested by users of the tool via the CodePlex issue tracker. The focus of this release is support for SQLite databases, and that is also the focus of this blog post.

Overview

This release includes extensive support for SQLite databases, but please bear in mind that this is v1 in terms of SQLite support, so there is most likely room both for additional SQLite related features and for improvements to the current ones. I plan to release a “bug fix” update before or around the VS “14” release. Getting used to SQLite has been a steep learning curve for me!

Root level SQLite features

The about dialog has been enhanced with SQLite related information:

image

The version of the SQLite ADO.NET provider included with the Toolbox is displayed, and the presence of the DbProvider indicates whether SQLite is installed in the GAC (not required by the Toolbox).

image

“Add SQLite Connection” will allow you to connect to an existing SQLite database file, or create a new blank one.

image

“Script SQL Server Database Schema and Data for SQLite” will create a SQLite compatible script (.sql file) from a SQL Server database, allowing you to migrate a database from SQL Server to SQLite. For how to use the generated script, see my blog post here.

Database level SQLite features

image

When you right click (or press Shift+F10) at the database level, you will get the options above. Let us go through them one by one (notice that all these features are also available for SQL Server Compact database files!)

“Open SQL editor” will open a SQL editor where you can execute ad hoc SQL statements; this editor is also used for any scripts created at the table level (see below). The editor has a toolbar with various buttons:

image

Open: Open a saved script (.sql) file
Save As: Save the text in the editor as a SQL file
Execute: Run the commands in the editor and display the results below; results can be displayed either as text or in a grid (the grid is slower), set via Options in the Toolbox
Estimated plan: Will run EXPLAIN QUERY PLAN for the statements
Search: Search for text in the editor window

The editor status bar displays: Query duration, number of rows returned, and SQLite engine version in use.
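EXPLAIN QUERY PLAN is plain SQLite, so you can try the same thing the “Estimated plan” button does from Python (the exact rows returned vary by SQLite version):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")

# Prefixing a statement with EXPLAIN QUERY PLAN returns the plan
# instead of executing the query.
plan = conn.execute("EXPLAIN QUERY PLAN SELECT * FROM t WHERE id = 1").fetchall()
for row in plan:
    print(row)  # the detail text varies by SQLite version
```

For this query the plan typically reports a search using the INTEGER PRIMARY KEY rather than a full table scan.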

“Build Table” gives you a UI to generate CREATE TABLE statements:

image

“Script Database” will generate various .sql files that you can run using sqlite3.exe.

“Create Database Graph” will generate a DGML interactive graph of your tables and their relations and columns:

image

“Create Database Documentation” will generate a html file with documentation of all tables in the database:

image

“Add sqlite-net model.cs to current Project” will generate a model.cs file with classes for each table in the database, for use with the sqlite-net NuGet package. I will blog about this in detail later; think of it as basic productivity “scaffolding” in this release.

“Database information” generates a script with basic database information in the SQL editor.

“Copy database file” will allow you to paste the file from the file system into your project, for example if you want to include a database file as content with your app.

“Remove connection” will remove the connection from the Toolbox (it will not affect the underlying file).

 

Table level SQLite features

image

(Notice that all these features are also available for SQL Server Compact database files!)

“Edit Top x Rows” will open the table in a grid, and allow you to edit and add data to the table, provided it has a primary key.

image

In addition to standard Navigation, Add, Delete and Save buttons, the bottom toolbar also contains a Quick Search and free text SQL feature. You can change the limit of rows via Options.

“View Data as Report” will open a Microsoft Report Viewer with the table data. In addition to viewing and printing the data, you can also export it as PDF, Excel or Word:

image

To use this feature, you may have to install the Report Viewer, which you can download from here.

“Script as …” will generate a DML (data manipulation language) and DDL (data definition language) script in the SQL editor for the selected table. In addition, Script as Data (INSERTs) will script all data in the table as INSERT statements in the SQL editor.

image

“Import Data from CSV” will import a CSV file whose header matches the column names in the current table, and generate INSERT statements.

“Rename” will (unsurprisingly) rename the current table.

Other fixes and improvements

Support for VS “14”
Improved saving of connections with “complex” passwords
Improved handling of missing MS ReportViewer dll files

SQL Server Compact/SQLite Toolbox

Original: http://erikej.blogspot.dk/2014/08/sqlite-toolbox-40-visual-guide-of.html