As database developers grew increasingly tired of having no way to successfully manage change, they began to demand solutions that would improve the modification and management of databases from end to end. What they needed was a tool that would follow traditional SCM concepts by enforcing formal, well-disciplined database change processes and workflows.
A “next generation” of database change management solutions is needed to bring the complete array of software change management functionality to the database landscape. Unlike the first generation of tools, the next generation must include more comprehensive, SCM-like capabilities. Instead of the error-prone, manual, multi-step processes used to make database modifications, which often result in lost changes, developers need a way to make alterations directly and safely to the database structure.
Innovative, next-generation solutions are needed to usher database change management into the 21st century. One solution providing all of these capabilities already exists: dbMaestro TeamWork™.
To learn more about TeamWork, and how it can help take you into the future of database change management, download a white paper that expands upon the ideas outlined in this article.
And if you are interested in more details, visit the dbMaestro Web site to learn about the product and the company.
About the Author:
Yaniv Yehuda is CEO of dbMaestro Ltd., a sister company of Extreme Technologies. Established in 1997, Extreme Technologies is a privately owned Israeli company that specializes in the IT services and solutions market.
Feature Article: August 2010
For decades, companies have been benefiting from proven software change management (SCM) principles. Processes like file locking, check in/check out, compare and merge, and rollbacks have helped them deliver modifications to applications in a more productive, yet more controlled fashion.
But, traditional SCM concepts don’t necessarily fit the database world. Databases are constructed differently from traditional software code (such as Java or .NET), so traditional SCM tools and procedures cannot simply be applied. As a result, there has been a major disconnect between SCM and database development activities.
SCM is controlled via a repository supporting functions against files (program source code). It typically offers capabilities for file locking, check out/check in, baselining, rollback of changes, compare and merge, build/deployment, and so on.
But unlike a software application, a database is not a collection of files. To the file system, for example, it is often visible as a single huge file. Another example is the concept of the SCM 'vault' – when you check in traditional code changes, you update a protected copy of the code and track revisions, while doing the actual development of your code on a local copy of your project.
But a database is a central resource that everyone accesses and changes at the same time. You rarely make copies of it for each developer (where this is done, you then have to deal with constant synchronization to ensure consistency). You also have to deal with different levels of revision management: some changes, such as stored procedures, are code-like, while others are changes to the actual content of a table that influence application behavior. Additionally, there is no “debug” environment.
In other words, database code is a completely different “animal” than traditional code.
Then, there’s the issue of moving changes from development to QA, integration, and production. With software code, this involves little more than simply copying and registering files. But, database objects can’t just be moved; it’s a bit more complex than that. You can’t just put a changed table from development into production, because you would lose all of your production content.
Unlike SCM solutions, which keep detailed logs of all changes, database developers have often had to keep hard-copy lists of database alterations. Believe it or not, we’ve actually seen a developer who tracked his modifications on Post-it notes scattered across his desk and monitor.
A common workaround for database change is to export the data definition language (DDL) scripts that define data structures and save them in a traditional SCM tool as files. But this is not a closed-loop process. No one can make sure developers actually remember to check in every change they introduce. And what happens if someone just updates the database and doesn’t update the SCM? How can content changes be handled? A changed value of a parameter (in a lookup or metadata table, for example) can change the behavior of your application, and clearly has to be managed. But how?
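As an illustration of this workaround, here is a minimal sketch (assuming SQLite; the database path, table name, and output directory are hypothetical) that exports each database object’s DDL into its own file, ready to be checked in to a traditional SCM tool:

```python
import sqlite3
import pathlib

def export_ddl(db_path, out_dir):
    """Dump each database object's DDL into its own .sql file,
    so the files can be checked in to a traditional SCM tool."""
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    conn = sqlite3.connect(db_path)
    # sqlite_master holds the original CREATE statement for every object;
    # internal objects with no DDL are filtered out.
    rows = conn.execute(
        "SELECT type, name, sql FROM sqlite_master WHERE sql IS NOT NULL"
    ).fetchall()
    conn.close()
    for obj_type, name, sql in rows:
        (out / f"{obj_type}_{name}.sql").write_text(sql + ";\n")
    return sorted(p.name for p in out.iterdir())
```

Nothing in this loop, of course, forces a developer to re-run the export after touching the live database, which is precisely the open-loop gap described above.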
Then, code has to be written by hand to alter database structures without losing data.
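To make that concrete, here is a minimal sketch of such hand-written migration code, assuming SQLite and a hypothetical customers table. Because SQLite cannot alter a column’s definition in place, the table is rebuilt and its rows copied across so nothing is lost:

```python
import sqlite3

def tighten_customers_name(conn):
    """Hand-written migration: change the name column to NOT NULL.
    SQLite cannot ALTER a column definition, so rebuild the table
    and copy its rows across instead of dropping data."""
    with conn:  # one transaction: either the whole migration lands, or none of it
        conn.execute(
            "CREATE TABLE customers_new ("
            " id INTEGER PRIMARY KEY,"
            " name TEXT NOT NULL DEFAULT 'unknown')"
        )
        conn.execute(
            "INSERT INTO customers_new (id, name)"
            " SELECT id, COALESCE(name, 'unknown') FROM customers"
        )
        conn.execute("DROP TABLE customers")
        conn.execute("ALTER TABLE customers_new RENAME TO customers")
```

Even this tiny change needs careful copy logic (the COALESCE backfills existing NULLs); multiply it across dozens of tables and environments and the appetite for automation becomes obvious.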
Clearly, automation was much needed. So, next came the first generation of database change management tools. These were basically automated “compare and sync”-type tools that allowed different environments to be measured against each other.
Reports were generated to highlight the variations, and code could then be automatically generated to compensate and move changes from one environment to another. Some of the more advanced products were even able to analyze content and propagate the appropriate modifications.
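A first-generation tool of this kind can be sketched in miniature (assuming SQLite, and comparing the columns of one hypothetical table between a source and a target environment):

```python
import sqlite3

def table_columns(conn, table):
    """Map column name -> declared type, via SQLite's table_info pragma."""
    return {row[1]: row[2] for row in conn.execute(f"PRAGMA table_info({table})")}

def generate_sync_sql(source, target, table):
    """Miniature 'compare and sync': find columns present in the source
    environment but missing from the target, and emit the ALTER
    statements that would bring the target up to date."""
    src = table_columns(source, table)
    tgt = table_columns(target, table)
    return [
        f"ALTER TABLE {table} ADD COLUMN {name} {coltype}"
        for name, coltype in sorted(src.items())
        if name not in tgt
    ]
```

Note what this sketch does not know: who added each column, when, why, or whether the source environment itself was trustworthy, which is exactly where the questions below begin.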
Deployment became more stable, and the number of human mistakes was dramatically reduced. So, database developers were happy... for a while, anyway.
But, many questions and issues remained unanswered. What if the test environment had undergone changes? Then the reference environment could be out of sync, and changes could get lost again. What was happening during the development cycle? There was no information about how the end result was achieved. Could anyone with access privileges update the database? What changes were being made, and by whom? Was database development effectively coordinated with software or application development? How were partial deployments handled (i.e., the need to deploy one application feature without another, less stable one)?
This was, essentially, caused by a lack of functionality. These automated solutions allowed for little more than compare and deploy. They lacked capabilities long since addressed in traditional SCM, such as the ability to roll back changes, check in/check out, file locking, baseline setting, merging code, change reports for auditing and regulatory compliance, and more. Developers soon realized that what they actually had was database deployment management – not database change management. They still didn’t have a way to successfully manage change.