DB2 backups are platform specific. Well, pretty much; it's a bit complicated.
Recently someone sent me a Windows DB2 backup to look at, and I wanted to move it to a Linux machine.
Here are the steps for doing that.
1. On the source machine run:
db2look -e -o database.sql -l -d <dbname>
where <dbname> is the name of the database you want to move.
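To make that concrete, here's roughly what it looks like in practice (SAMPLE is a made-up database name; the grep lines are just my own quick sanity check that the DDL actually made it into the file, not part of the official procedure):
# extract the DDL, including tablespace and bufferpool definitions (-l)
db2look -d SAMPLE -e -l -o database.sql
# quick sanity check: how many tables and tablespaces were extracted?
grep -c "CREATE TABLE " database.sql
grep -c "CREATE.*TABLESPACE" database.sql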
2. Copy the database.sql to your destination machine.
3. Create a database on the destination machine. You can use the same name, or use a different name and edit the first CONNECT statement in the database.sql file to point to the new name.
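As a rough sketch of what that looks like (NEWDB and OLDDB are made-up names, substitute your own):
# create the target database (automatic storage is the default on recent DB2 versions)
db2 "CREATE DATABASE NEWDB"
# if the new name differs from the source, point the script's CONNECT statements at it
sed -i 's/CONNECT TO OLDDB/CONNECT TO NEWDB/' database.sql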
4. If you look at the database.sql, you will notice that the CREATE TABLESPACE commands will have paths in them. This will probably present a problem, since the paths are unlikely to exist on your destination machine. There are a number of ways you could deal with this, but the easiest in my opinion is to just use DB2’s automatic storage feature and let DB2 worry about it. To show you what I mean by this, here is a tablespace definition from my source database script, straight from db2look.
CREATE REGULAR TABLESPACE "ICMLFQ32" IN DATABASE PARTITION GROUP IBMDEFAULTGROUP PAGESIZE 32768 MANAGED BY SYSTEM
USING ('C:\DB2\NODE0000\SQL00001\ICMLFQ32')
EXTENTSIZE 32
PREFETCHSIZE AUTOMATIC
BUFFERPOOL ICMLSMAINBP32
OVERHEAD 7.500000
TRANSFERRATE 0.060000
DROPPED TABLE RECOVERY ON;
You can see that a Unix/Linux DB2 install would puke on the path in there. Using automatic storage, this becomes:
CREATE TABLESPACE ICMLFQ32 PAGESIZE 32 K MANAGED BY AUTOMATIC STORAGE BUFFERPOOL ICMLSMAINBP32
Much simpler, but you will need to change these tablespace definitions by hand.
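If you want a quick list of everything that needs touching, plain grep will do (nothing DB2-specific here):
# show each tablespace definition with a few lines of context, ready to edit by hand
grep -n -A 6 "CREATE.*TABLESPACE" database.sql
# the bufferpools they reference (e.g. ICMLSMAINBP32) should also be in the script, since db2look was run with -l
grep -n "CREATE BUFFERPOOL" database.sql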
5. After fixing up the tablespace paths, try executing:
db2 -tvf database.sql
I had a problem on my system doing a Windows -> Linux restore: the database.sql had Windows line endings in it, so when I ran it on Linux the command just hung. Running dos2unix database.sql fixed it. (This problem was the actual point of this post, but it seemed hard to explain by itself, so… here we are.)
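If you want to check for, or fix, the line endings without dos2unix, something along these lines works (the sed variant just strips the trailing carriage returns in place):
# 'file' reports "with CRLF line terminators" if the script still has Windows line endings
file database.sql
# fix them in place
dos2unix database.sql
# or, if dos2unix isn't installed, strip the carriage returns with sed
sed -i 's/\r$//' database.sql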
6. Now you should have a skeleton database on the destination, with all the tables ready to receive data. To copy the actual data, we'll use db2move.
On the source machine, create a new folder and change into it, and then run:
db2move <dbname> export
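As a sketch (SAMPLE again stands in for your database name, and the directory name is arbitrary); db2move also writes a db2move.lst list file that the load step needs, so copy the whole directory, not just the .ixf files:
# export all the data into a fresh, empty directory
mkdir ~/sample_export && cd ~/sample_export
db2move SAMPLE export
# you should end up with .ixf data files, .msg message files and a db2move.lst list file
ls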
7. This should fill the folder up with .ixf and .msg files. Copy the entire folder to your destination machine.
8. On the destination machine change to the export folder you just copied over and run:
db2move <dbname> load
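A minimal sketch, assuming NEWDB is the database created in step 3 and ~/sample_export is where you copied the export folder to:
# run the load from inside the directory copied over from the source machine
cd ~/sample_export
db2move NEWDB load
# db2move leaves a LOAD.out summary behind that you can check for rows read/loaded per table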
9. The load operation temporarily suspends DB2's referential integrity constraints. This is done so that it doesn't matter which order the tables are loaded in. But because those checks were suspended while the data was being loaded, you need to make DB2 check each table afterwards, to make sure that the referential integrity constraints in the database are still valid. If you don't, you get a message like this when accessing any table you have loaded:
SQL0668N Operation not allowed for reason code "1" on table "<schema>.<tablename>".
This means: "You need to check this table to make sure it is valid".
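For a single table the fix is a one-liner (MYSCHEMA.MYTABLE is a made-up name):
db2 "SET INTEGRITY FOR MYSCHEMA.MYTABLE IMMEDIATE CHECKED"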
To do this for every affected table, you can query the system catalog for a list of all the tables that need checking and, with a little awk, generate a script to run against the database. There's probably a prettier way to do it, but it works.
db2 connect to <dbname>
db2 -x "select tabschema, tabname from syscat.tables where status = 'C'" | awk '{print "SET INTEGRITY FOR "$1"."$2" IMMEDIATE CHECKED;"}' > check.sql
db2 -tvf check.sql
Now you're done. If you run a simple select statement against one of the tables you loaded, DB2 should return the data. It's worth noting that this method also works when changing versions of DB2, and covers moving from 32-bit to 64-bit DB2.
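For a quick confirmation, something like this should do (NEWDB and MYSCHEMA.MYTABLE are made-up names again):
# no tables should be left in check-pending state (status 'C')
db2 connect to NEWDB
db2 -x "select count(*) from syscat.tables where status = 'C'"
# spot check one of the loaded tables
db2 "select count(*) from MYSCHEMA.MYTABLE"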