safe, repeatable and predictable deployment process. And it should be painless. That’s why we use the following techniques: ✤ Database deployment automation ✤ Continuous integration ✤ Deployment to many environments (Dev, Test, Staging, Production)
database server for development work 2. Always Have a Single, Authoritative Source For Your Schema 3. Always Version Your Database –K. Scott Allen http://odetocode.com/blogs/scott/archive/2008/01/30/three-rules-for-database-work.aspx
entire database state is stored for each version. The deployment tool compares two states and generates a transformation script to upgrade the database from one state to the other. Tools: ✤ Red Gate SQL Compare ✤ Microsoft SQL Server Data Tools ✤ Other tools exist
for a DSL-based approach) represents a transition from one database state to the next. The deployment tool runs only new or changed scripts, in a predefined order. Script execution history is tracked in the database itself.
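A minimal sketch of that tracking mechanism (table and script names here are illustrative, not the actual schema of any particular tool):

```sql
-- Illustrative only: a simplified version of the change-log table
-- a migration tool keeps inside the target database.
CREATE TABLE dbo.SchemaChangeLog (
    Id         INT IDENTITY(1,1) PRIMARY KEY,
    ScriptName NVARCHAR(255) NOT NULL,
    AppliedAt  DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME(),
    AppliedBy  NVARCHAR(128) NOT NULL DEFAULT SUSER_SNAME()
);

-- Before running a script, the tool checks whether it was already applied:
IF NOT EXISTS (SELECT 1 FROM dbo.SchemaChangeLog
               WHERE ScriptName = N'0002_AddBlogUrl.sql')
BEGIN
    -- ...execute the script here, then record the fact:
    INSERT INTO dbo.SchemaChangeLog (ScriptName)
    VALUES (N'0002_AddBlogUrl.sql');
END
```

Because the history lives in the database itself, the tool can upgrade any copy of the database (Dev, Test, Production) by running only the scripts that copy has not seen yet.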
inspired by Rails Active Record migrations

// Entity Framework Code First Migrations example
public partial class AddBlogUrl : DbMigration
{
    public override void Up()
    {
        AddColumn("dbo.Blogs", "Url", c => c.String());
    }

    public override void Down()
    {
        DropColumn("dbo.Blogs", "Url");
    }
}
schema. After the baseline is published (e.g. deployed to a customer), any change to the schema requires a schema change script. So baseline scripts are run exactly once, to create the initial database schema, and are not executed during database migration. We don’t usually baseline a database right after a project starts. It makes sense to wait a little until the DB schema becomes more stable. Hey, we don’t have customers yet!
adding a table or column, adding an index or a constraint, etc., as well as data conversion scripts. The migration utility ensures that only new updates are executed against a given database.
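A typical update script combines a schema change with the data conversion it requires. This is a hypothetical example (table, column, and file names are made up for illustration):

```sql
-- Hypothetical update script, e.g. up/0003_AddOrderStatus.sql
-- Step 1: add the new column as nullable so existing rows remain valid
ALTER TABLE dbo.Orders ADD Status TINYINT NULL;
GO

-- Step 2: data conversion - backfill existing rows with a default value
UPDATE dbo.Orders SET Status = 0 WHERE Status IS NULL;
GO

-- Step 3: tighten the schema now that all rows have a value
ALTER TABLE dbo.Orders ALTER COLUMN Status TINYINT NOT NULL;
```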
a change to the repository. They are treated as read-only, append-only records. Once they are pushed to the public repository, they can’t be changed or removed. You may, however, add another commit to fix mistakes or change some behavior. Schema update scripts work the same way: once they have been executed against a public database, you can’t change them anymore. The only way to fix something is to write another schema update script. Commits, like update scripts, represent historical changes. You may apply subsequent commits to any previous state to get the current state of the repository (the working copy). Similarly, you may upgrade any old version of a database to the current state by applying the new update scripts. One thing that does not work as in a VCS is rollback: after you’ve migrated a database to a certain version, you can’t actually revert it to a previous version (unless you have rollback scripts).
a separate file per object. The engine executes them any time they change ✤ Easy to change ✤ Change history is available in the source repository ✤ No merge conflicts
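For the per-object files to be safely re-runnable, each script must work whether or not the object already exists. A common pattern (object and table names here are illustrative):

```sql
-- sprocs/GetBlogByUrl.sql -- hypothetical per-object script, safe to re-run.
-- Create a stub if the procedure doesn't exist yet, then ALTER it to the
-- current definition. (On SQL Server 2016 SP1+ CREATE OR ALTER does both.)
IF OBJECT_ID(N'dbo.GetBlogByUrl', N'P') IS NULL
    EXEC (N'CREATE PROCEDURE dbo.GetBlogByUrl AS RETURN 0');
GO

ALTER PROCEDURE dbo.GetBlogByUrl
    @Url NVARCHAR(400)
AS
BEGIN
    SELECT BlogId, Name, Url
    FROM dbo.Blogs
    WHERE Url = @Url;
END
GO
```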
Plain Old SQL Scripts ✤ Low script maintenance. Convenient scripts folder structure ✤ Convention over configuration ✤ Baseline, OneTime, AnyTime, EveryTime scripts ✤ Runs only changes - useful for databases with a lot of stored procs ✤ Supports environment-aware scripts ✤ Token replacements ✤ Versioning ✤ Audit - tracks who applied changes, when and how ✤ Detailed logging ✤ Open source
in the LOCAL environment. This will run based on this check. Running LOCAL.GrantRobDataReaderDataWriterPermissions.ENV.sql on (local) - TestRoundhousE. TEST.GrantRobDataReaderDataWriterPermissions.ENV.sql is an environment file. We are in the LOCAL environment. This will NOT run based on this check. Skipped TEST.GrantRobDataReaderDataWriterPermissions.ENV.sql - No changes were found to run. If you name a SQL file with .ENV., RoundhousE will look for the environment name somewhere in the file name. If it doesn’t find the proper environment, it will not run the file. For example, if you name a file LOCAL.permissionfile.env.sql and RoundhousE is migrating the LOCAL environment, it will run the file. But if it is migrating the TEST environment, it will skip this file.
in my experience

-- Assumes @Config (a table variable with [Key] and Value columns)
-- is declared and populated earlier in the script
DECLARE @UpdatesAllowed INT = 1

IF '{{EnvironmentName}}' IN ('PRODUCTION', 'UAT')
BEGIN
    PRINT 'Running on ''{{EnvironmentName}}'' environment. Updates are not allowed'
    SET @UpdatesAllowed = 0
END

MERGE dbo.Config dest
USING @Config src ON dest.[Key] = src.[Key]
WHEN MATCHED AND dest.Value <> src.Value AND @UpdatesAllowed = 1 THEN
    UPDATE SET Value = src.Value
WHEN NOT MATCHED THEN
    INSERT ([Key], Value) VALUES (src.[Key], src.Value);
once, it’s a good practice to keep them idempotent, so it’s safe to rename a script or reapply it manually. RoundhousE’s behaviour can be changed to re-run changed scripts with the WarnOnOneTimeScriptChanges option
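An idempotent one-time script guards each change with an existence check, so re-applying it is a no-op. A sketch (reusing the Blogs/Url example from earlier; names are illustrative):

```sql
-- A one-time script written defensively: running it twice is harmless.
IF NOT EXISTS (
    SELECT 1 FROM sys.columns
    WHERE object_id = OBJECT_ID(N'dbo.Blogs')
      AND name = N'Url')
BEGIN
    ALTER TABLE dbo.Blogs ADD Url NVARCHAR(400) NULL;
END
```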
could live in runAfterOtherAnyTimeScripts if written in an idempotent way, e.g. using MERGE statements. ✤ Sample data could live in runAfterCreateDatabase so that it’s executed only once - after the database is created. An environment-specific script is another option. Or it could be managed outside of RoundhousE.
part of the CI process. 1. Keep a reference backup of the released database version. 2. Add a step to the CI build that restores the database from that backup and runs migrations against it.
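The restore part of step 2 can be sketched in T-SQL like this; the database name, backup path, and logical file names are all hypothetical and depend on your environment:

```sql
-- Hypothetical CI step: restore the reference backup of the released
-- version, then the build invokes the migration tool against it.
RESTORE DATABASE [AppDb_CiTest]
FROM DISK = N'D:\ci\backups\AppDb_released.bak'
WITH REPLACE,  -- overwrite the database left over from the previous build
     MOVE N'AppDb'     TO N'D:\ci\data\AppDb_CiTest.mdf',
     MOVE N'AppDb_log' TO N'D:\ci\data\AppDb_CiTest.ldf';
```

If the migrations fail against the restored copy, the build fails, so a broken upgrade path is caught before it ever reaches a customer database.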
Scott Allen - Versioning Databases series (link to the latest post with all the links) ✤ Martin Fowler - Evolutionary Database Design article ✤ Scott W. Ambler and Pramod J. Sadalage - Refactoring Databases: Evolutionary Database Design book ✤ Pramod J. Sadalage - talk on “Refactoring Databases: Evolutionary Database Design” ✤ Enrico Campidoglio - “Databases: the elephant in the Continuous Delivery room” talk at NDC ✤ Alexey Diyan - “Database automated deployment and versioning …for smart people”
Beletsky (@alexbeletsky) and the Kiev Alt.Net group for first introducing me to RoundhousE ✤ Denis Reznik (@DenisReznik) for forcing me to make this talk