We recently ran into load-testing issues where SQL Server connection counts climbed exponentially; this happened with both SQL Server 2008 and 2012.
Of course it must be a database issue.
This was under load: SQL Server connections would climb from an average of 150-200 to 800+ within a 5-30 second period, CPU would max out at 100%, and then all activity would stop, with SQL Server "locked" up from locking and blocking, unable to recover.
This was with a web farm of 10-12 clients, running an application that had historically passed this same load testing.
The symptoms were reproducible, but at unpredictable times: sometimes 20 minutes into the load test, sometimes 4 hours in, sometimes 12 hours.
Eventually we moved to a much bigger server; the results were the same, except SQL Server was able to recover after a few minutes.
No single root cause was found. What we did find is that when you run a very large web farm against SQL Server, special attention is needed: we found a lack of capacity in the single sign-on (SSO) environment used by the web application, we found issues with the domain controllers, and we found that events like a full virus scan kicking off on all the web servers at once could trigger the problem.
It was a very painful experience, and testing tools that can observe your entire environment at once are really necessary to help: SAN, web farm, domain controllers, SSO, and SQL Server, all together. Then you can start to see that the domain controllers had an event 1-2 seconds before SQL Server did, or the web farm did... the problem merely manifested itself in SQL Server: as the web farm slowed down, the .NET connection pool opened even more connections into SQL Server, and SQL Server simply got overrun.
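The feedback loop above can be sketched with Little's Law: steady-state concurrent connections are roughly the request arrival rate times the average response time, so a slowdown anywhere upstream inflates the connection count even though the request rate never changed. This is an illustrative model only; the arrival rate and response times below are hypothetical numbers chosen to mirror the 150-200 to 800+ climb described above, not measurements from our environment.

```python
# Illustrative only: Little's Law estimate of concurrent SQL connections.
#   concurrency ~= request arrival rate * average response time
# A web-tier stall raises response time, and the connection pool grows
# to match -- even at a constant request rate.

def concurrent_connections(arrival_rate_per_s: float, avg_response_s: float) -> float:
    """Estimate steady-state concurrent connections via Little's Law."""
    return arrival_rate_per_s * avg_response_s

# Hypothetical figures mirroring the incident described above.
normal = concurrent_connections(1000, 0.18)    # healthy web tier: ~180
degraded = concurrent_connections(1000, 0.80)  # stalled web tier: ~800

print(f"normal load: ~{normal:.0f} connections")
print(f"after slowdown: ~{degraded:.0f} connections")
```

This is also why capping the pool can help: ADO.NET's `Max Pool Size` connection-string keyword (default 100) bounds how far each web server's pool can grow, turning a SQL Server pile-up into queuing at the web tier instead.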