
The Why's and How's of Migrating to Foundation

Learn about the process and considerations for migrating to Foundation in this comprehensive guide written by a senior systems engineer. Covers migration issues, ONCONFIG parameters, SQL statement cache, and more.


Presentation Transcript


  1. The Why's and How's of Migrating to Foundation Carlton Doe Senior Systems Engineer NewCo Software and Solutions, Inc.

  2. Who I Am • 10+ years of experience as a DBA and engine administrator, mainly in retail environments • Co-founded, presided over, and currently sit on the Board of Directors of the International Informix Users Group (IIUG) • Written two Informix Press books (ISBN 0-13-605296-7 and ISBN 0-13-080533-5)

  3. What I’m Going to Cover • What is IDS and Foundation? • Migration issues • ONCONFIG parameters • SQL statement cache • Max Connect • Fuzzy Checkpoints • Smart large object dbspaces (sbspaces) • Simple/complex datatypes and casting • User Defined Routines and Functions • Collections • DataBlades and the Blade Manager • Java in the engine

  4. What Is IDS and Foundation?
     INFORMIX-Dynamic Server 7.x: Parallelism built in • Parallel Data Query • SQL-92 Entry Level • Enterprise Replication • Many, many more...
     IDS-UDO 9.1x: SQL3 support • DataBlade support • DataBlade Developer Kit • User Defined Routines • User Defined Datatypes • User Defined Indexing • R-Tree indexing • Extended B-Tree support • Row types • Collections (sets, multisets, lists) • Inheritance • Polymorphism • Partitioning with new data types • Schema knowledge

  5. What Is IDS and Foundation? INFORMIX-Dynamic Server 7.x + IDS-UDO 9.1x = IDS 9.2+

  6. Then What Is Foundation? Informix Dynamic Server • Java VM • Web DataBlade • Excalibur Text DataBlade • Object Translator • Office Spigot

  7. Which Version Is Loaded??
     (IDS) Odra: onstat -
     Informix Dynamic Server 2000 Version 9.21.UC2
     (Foundation) Ebro: onstat -
     Informix Dynamic Server 2000 Version 9.21.UC2
     (MSGPATH)
     Wed Oct 4 07:49:52 2000
     07:49:52 Booting Language <builtin> from module <>
     07:49:52 Loading Module <BUILTINNULL>
     07:49:57 Informix Dynamic Server 2000 Version 9.21.UC2 Software Serial Number AAB#J500705
     07:50:12 Informix Dynamic Server 2000 Initialized -- Shared Memory Initialized
     07:50:15 Physical Recovery Started
     07:50:25 Physical Recovery Completed: 0 pages restored

  8. Which Version Is Loaded??
     Ebro: cd /usr/informix/9_21/extend
     Ebro: ls -l
     drwxr-xr-x 4 informix informix 1024 Jul 19 08:08 ETX.1.30.UC5
     drwxr-xr-x 4 informix informix 1024 Jul 19 08:07 TXT.1.10.UC5
     drwxr-xr-x 4 informix informix 1024 Jul 19 09:02 VTS.1.20.UC1
     drwxr-xr-x 2 informix informix 1024 Jul 19 07:42 ifxbuiltins.1.1
     drwxr-xr-x 2 informix informix 1024 Jul 19 07:42 ifxmngr
     drwxr-xr-x 2 informix informix 1024 Jul 19 07:42 ifxrltree.2.00
     drwxr-xr-x 4 informix informix 1024 Jul 19 07:49 krakatoa
     drwxr-xr-x 4 informix informix 1024 Jul 19 07:42 LLD.1.20.UC2
     drwxr-xr-x 4 informix informix 1024 Aug 22 13:36 WEB.4.10.UC1

  9. Migration Issues

  10. Migration Issues: Things to Know • You cannot directly migrate from versions earlier than 7.x • Informix STRONGLY suggests that you first move to the latest available 7.x version and then move to 9.x • While you may want to export/import your data, in-place upgrades have been quite successful -- just remember to wear your belt and suspenders • HP-UX 11.0 - the RUN_AS_ROOT.server script is wrong: the link created for one library goes into /usr/lib instead of /usr/lib/pa20_64; if left alone, the engine will not come up ("library not found" error). A defect has been entered

  11. Migration Issues: Things to Know • With IDS, CPU and user-defined VPs run as user "informix" while other VPs still run as root. This prevents UDRs from having root access • If you have auto-start scripts, you should modify them so they "su" to "informix" before starting the engine (a sketch follows below) • Install the engine after the version 2.2 client products for "finderr" to work. The order is: tools, then network, then engine
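
  A minimal sketch of such an auto-start change, assuming a System V style rc script; the paths, ONCONFIG name, and script name are assumptions, not part of the original material:

      #!/bin/sh
      # rc.informix -- hypothetical auto-start fragment
      INFORMIXDIR=/usr/informix; export INFORMIXDIR      # assumed install path
      ONCONFIG=onconfig.std;     export ONCONFIG          # assumed config file name
      PATH=$INFORMIXDIR/bin:$PATH; export PATH

      case "$1" in
        start)
          # bring the engine up as user "informix" rather than root
          su informix -c "$INFORMIXDIR/bin/oninit" ;;
        stop)
          su informix -c "$INFORMIXDIR/bin/onmode -ky" ;;
      esac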

  12. Migration Issues: Things to Know • Precision change in float / smallfloat to decimal conversion: smallfloat 8 -> 9 digits, float 16 -> 17 digits • The default behavior of "create index" is now to create detached indexes. 7.x attached indexes are supported by the upgrade process • Use the "DEFAULT_ATTACH" variable to temporarily mimic 7.x behavior; it will, however, be discontinued in a future release (see the example below)
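
  A hedged illustration of the DEFAULT_ATTACH workaround, assuming the variable is simply set in the shell before connecting; the database, table, and index names are made up:

      # Mimic the 7.x attached-index behavior for one session only
      export DEFAULT_ATTACH=1            # assumption: "1" restores the old behavior
      dbaccess stores_demo - <<'EOF'
      CREATE INDEX ix_cust ON customer (customer_num);   -- created attached, 7.x style
      EOF
      unset DEFAULT_ATTACH               # back to the 9.x detached default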

  13. Migration Issues: Things to Know • If using DRDA Gateway version 7.31.UC1 with IDS 7.x, you must get the DRDA patch to avoid defect #132619 • HADR has changed. DRAUTO is no longer supported, so after a HADR failure condition is declared, the secondary server goes to read-only mode • You must manually convert the secondary server to "standard" mode: "onmode -d standard" • No changes to DRINTERVAL, DRTIMEOUT, DRLOSTFOUND

  14. Migration Issues: Things to Know • 9.2 ER and HPL may not support named types; this should be in place for 9.3. Test dbexport / dbimport to see whether more complex named types work • Shut down HADR / ER before upgrading

  15. Migration Issues: Things to Know • System catalog changes - columns added, moved, data types changed. New tables added and some dropped. Sysindexes and systables are now views rather than tables • Long identifiers!! The 8/18 rule is dead (the crowd cheers!!): user ids - 32 characters; identifiers - 128 characters (table name, server name, database name, column name, index name, synonym name, constraint name, procedure name, dbspace name, blobspace name, optical cluster name, trigger name, type name, routine language name, access method name, operator class name, trace message class name, trace message name, procedure variable names); see the example below
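
  A trivial illustration of the longer limits; the names are invented purely to show that the old 18-character ceiling no longer applies:

      -- Identifiers up to 128 characters are now legal
      CREATE TABLE customer_shipping_address_history
      (
          customer_shipping_address_history_id  SERIAL,
          effective_start_date                  DATE
      );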

  16. Migration Issues: Things to Know • Each user connection uses about 5% more shared memory. Watch your shared memory allocations - both instance and operating system • This can be offset by using MaxConnect

  17. Migration Issues: New Reserved Words • CACHE • COSTFUNC • ITEM • SELCONST • INNER • JOIN • LEFT • LOCKS • RETAIN

  18. Migration Issues: New ONCONFIG Parameters • ALLOW_NEWLINE • BLOCKTIMEOUT • DD_HASHMAX • DD_HASHSIZE • PC_HASHSIZE • PC_POOLSIZE • SBSPACENAME • SYSSBSPACENAME • STMT_CACHE • STMT_CACHE_SIZE • VPCLASS • JDKVERSION • JVPHOME • JVPLOGFILE • JVPPROPFILE • JVPJAVAVM • JVPJAVAHOME • JVPJAVALIB • JVPCLASSPATH • JVMTHREAD

  19. Migration Issues: New ONCONFIG Parameters VPCLASS • Enables you to designate and create uniquely named, user-defined, or system classes of VPs. User-defined VPs have the same functional power as CPU VPs • User-defined VPs should be created to execute user-defined routines and/or DataBlades • Some DataBlades (Verity, Excalibur Text) require their own VP, as does the JVM • User-defined VPs can be added/dropped on the fly with the "onmode -p class_name" command

  20. Migration Issues: New ONCONFIG Parameters VPCLASS syntax: VPCLASS name,num=X,[max=X,aff=(X-Y),noyield,noage] • The VP name is not case sensitive, options are order independent, no whitespace is allowed, 128 characters max per entry • Must declare multiple JVPs to execute Java UDRs in parallel (Unix) • If using "noyield", declare only 1 VP, since the UDR will execute serially, causing others to queue for their turn (see the example below)
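
  A hedged example of a user-defined VP class entry and of adding a VP on the fly; the class name "txt_vp" and the counts are assumptions:

      # ONCONFIG entry: two user-defined VPs for UDRs / DataBlade routines
      VPCLASS txt_vp,num=2,max=4

      # Later, add one more VP of that class without a restart
      onmode -p +1 txt_vp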

  21. Migration Issues: New ONCONFIG Parameters Use the VPCLASS parameter to replace the CPU and AIO ONCONFIG parameters:
      VPCLASS cpu,num=3,max=10,noage -- comment out the AFF_SPROC, AFF_NPROCS, NOAGE, NUMCPUVPS, SINGLECPUVP parameters
      VPCLASS aio,num=5,max=10 -- comment out the NUMAIOVPS parameter

  22. Migration Issues: How Do I Migrate? 1. Stop database processing and force a change to a new logical log: onmode -sy; onmode -l; onmode -c; onmode -ky 2. Back up the instance(s) and the critical configuration files in $INFORMIXDIR: etc/$ONCONFIG file(s), etc/ONCONFIG.std, etc/sqlhosts, etc/sm_versions, etc/tctermcap, etc/termcap, aaodir/adtcfg, dbssodir/adtmasks (a backup sketch follows below)
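
  One possible way to capture those files before touching anything, assuming a scratch area under /usr/informix/config_files; the path and archive name are hypothetical:

      # Step 1: force the log switch, checkpoint, and shutdown
      onmode -sy; onmode -l; onmode -c; onmode -ky

      # Step 2: snapshot the critical configuration files
      cd $INFORMIXDIR
      tar cvf /usr/informix/config_files/pre_upgrade_cfg.tar \
          etc/$ONCONFIG etc/ONCONFIG.std etc/sqlhosts etc/sm_versions \
          etc/tctermcap etc/termcap aaodir/adtcfg dbssodir/adtmasks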

  23. Migration Issues: How Do I Migrate? 3. Restart the instance(s) and put them into single-user mode to ensure all open transactions are properly rolled back: oninit -sv 4. Verify data and index integrity with the oncheck utility (-cr, -ce, -cc, -cI db_name, -cD db_name) 5. Shut the instance(s) down again and create level 0 backup(s) and/or dbexports 6. Install the new IDS / Foundation software and change relevant scripts, global parameters, and config files (steps 3-5 are sketched below)
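
  Steps 3 through 5 might look like this from the shell; "db_name" is a placeholder and ontape is shown only as one backup option:

      # 3. Restart into single-user (quiescent) mode
      oninit -sv

      # 4. Verify data and index integrity
      oncheck -cr               # reserved pages
      oncheck -ce               # extents
      oncheck -cc db_name       # system catalogs
      oncheck -cI db_name       # indexes
      oncheck -cD db_name       # data pages

      # 5. Level 0 backup while the instance is still quiescent
      ontape -s -L 0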

  24. My Recommended Directory Structure Under /usr/informix: 7_3 and 9_21 (product directories), scripts, disks (also tapes), ISA, logs (current logs) and /old_logs, config_files, max_conn, plus koetari and odra -- symbolic links. Point the INFORMIXDIR, ONCONFIG, and INFORMIXSQLHOSTS environment variables at the appropriate pieces

  25. Migration Issues: How Do I Migrate? 7. Change ALARMPROGRAM to [ null | no_log.sh ] if using On-Bar, or LTAPEDEV to /dev/null 8. Install any DataBlades 9. Restart each instance into quiescent mode and monitor the MSG_PATH file. Sysmaster and reserved-page conversion is automatic (see $INFORMIXDIR/etc/dummyupds7x.sql). When completed, a notice is written to MSG_PATH 10. Execute an "update statistics high" command on the sysmaster database and a standard "update statistics" command on all others (see the sketch below)
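
  Step 10 might look like this from dbaccess; "my_database" is a placeholder for each user database:

      # Refresh optimizer statistics after the automatic conversion completes
      dbaccess sysmaster - <<'EOF'
      UPDATE STATISTICS HIGH;
      EOF

      dbaccess my_database - <<'EOF'
      UPDATE STATISTICS;
      EOF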

  26. Migration Issues: How Do I Migrate? 11. Verify data integrity with the oncheck command 12. Change LTAPEDEV / ALARMPROGRAM back to original value 13. Create a level 0 back-up 14. Let the users back in If there are any problems -- RESTORE from tape

  27. SQL Statement Cache

  28. SQL Statement Cache: Introduction As the name implies, prepared and optimized SQL statements are stored in a section of the virtual portion of shared memory. Users executing identical DML SQL statements can pull the pre-parsed query plan and even query data structures from the cache. More complex queries (multiple columns, many filters in the WHERE clause) benefit most from caching. Turned off by default; the STMT_CACHE $ONCONFIG parameter and the "SET STATEMENT CACHE" SQL command control how the caching process works, and can be overridden with the "onmode -e" command

  29. SQL Statement Cache: How It Works 1. Incoming SQL statements are hashed to a value which is compared to other hash values in the SSC section of the virtual portion of shared memory Caveats: • statements are only optimized at the PREPARE phase • host variable names/placeholders are not included in the hash algorithm • white space and the case of the statement are significant (see the example below) • session-specific parameters (OPTCOMPIND, OPT_GOAL) are significant and will affect the hash value
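
  A small illustration of that caveat: the two statements below are logically identical, but because their white space and case differ they hash to different values and occupy separate SSC entries (the stores_demo customer table is used here only as an example):

      SELECT fname, lname FROM customer WHERE customer_num = ?;
      select fname,lname from customer where customer_num = ?;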

  30. SQL Statement Cache: How It Works 2. If the hash value matches an existing value, the cached copy is used. The existing copy is NOT re-optimized 3. If a match does not exist, the statement is evaluated for inclusion in the SSC Qualifications for inclusion in the SSC: • contains DML statements using built-in data types and operators • no UDRs/UDFs • local database connections only • no explicit temporary tables

  31. SQL Statement Cache: How It Works Qualifications for inclusion in the SSC (cont.): • "select *" statements can be fully expanded to include all columns in the listed tables • is not a statement generated by a stored procedure • no embedded subselects in the SELECT statement

  32. SQL Statement Cache: Enabling and Sizing onmode -e {ENABLE|ON|OFF|FLUSH} • ENABLE - Allows statement caching to take place. This does not turn caching on, but allows it to be turned on • ON - Turns on statement caching. Caching must first be enabled through the STMT_CACHE parameter or "onmode" command • OFF - Turns off the SSC and immediately disables sharing. Cached statements that are in use remain in the SSC until their use count reaches zero • FLUSH - Active statements are marked and subsequently flushed when they are released. When the SSC is turned off, sharing is immediately disabled

  33. SQL Statement Cache: Enabling and Sizing STMT_CACHE $ONCONFIG parameter: 0 - SQL Statement Cache disabled (default); 1 - SSC enabled, though sessions must explicitly call for its use; 2 - SSC enabled for all sessions unless explicitly denied on a session-by-session basis. Per session, set the STMT_CACHE environment variable to "0" (off) or "1" (on), or execute a "set statement cache [ on | off ]" command

  34. SQL Statement Cache: Enabling and Sizing STMT_CACHE_SIZE $ONCONFIG parameter: determines the size of the SSC in KB, default = 524 KB. If the SSC is full and all statements are active, additional statements that qualify for inclusion cause the SSC to expand to hold them. As statements' use counts drop to zero, the SSC flushes them and shrinks back to STMT_CACHE_SIZE (a configuration sketch follows below)
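
  Putting the enabling and sizing pieces together, a hedged configuration sketch; the values shown are examples, not recommendations:

      # ONCONFIG entries
      STMT_CACHE       1         # enabled, but sessions must opt in
      STMT_CACHE_SIZE  1024      # nominal cache size in KB

      # Administratively, without a restart
      onmode -e enable           # allow caching
      onmode -e on               # or turn it on for every session

      # Per session, from SQL
      SET STATEMENT CACHE ON;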

  35. SQL Statement Cache: Monitoring and Tuning • "onstat -g cac stmt" displays the size of the SSC, the cached statements, who is currently executing a statement, and the number of times each has been used • The "hash" column is not the statement's hash value, but rather the hash bucket in which the statement resides • To tune, monitor the size of the SSC over time, then resize as appropriate

  36. SQL Statement Cache: Monitoring and Tuning Use the STMT_CACHE_DEBUG environment variable to trace what happens with any given SQL statement: export STMT_CACHE_DEBUG=debug_value:path_to_file where debug_value can be "1" (basic) or "2" (extended); a usage sketch follows below
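
  For example (the trace file path and SQL script name are purely hypothetical):

      # Extended tracing of SSC decisions for one dbaccess session
      export STMT_CACHE_DEBUG=2:/tmp/ssc_trace.out
      dbaccess stores_demo my_queries.sql
      unset STMT_CACHE_DEBUG     # stop tracing when finished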

  37. SQL Statement Cache: Monitoring and Tuning Sample trace output:
      SSC identity (OK:status=1): select * from yoyo
      SSC link (semantic error): select * from yoyo
      SSC identity (stmt not found): create table yoyo (col1 int) in mustang_2
      SSC qualify (invalid stmt type): create table yoyo (col1 int) in mustang_2
      SSC identity (stmt not found): insert into yoyo values (1)
      SSC qualify (OK): insert into yoyo values (1)
      SSC key insert (OK): insert into yoyo values (1)
      SSC full insert (OK): insert into yoyo values (1)
      SSC identity (stmt not found): insert into yoyo values (2)
      SSC qualify (OK): insert into yoyo values (2)
      SSC key insert (OK): insert into yoyo values (2)
      SSC full insert (OK): insert into yoyo values (2)
      SSC identity (OK:status=1): insert into yoyo values (2)
      SSC link (OK): insert into yoyo values (2)
      SSC identity (stmt not found): drop table yoyo
      SSC qualify (invalid stmt type): drop table yoyo

  38. What Are Fuzzy Checkpoints???

  39. Checkpoints IDS introduced “fuzzy” operations and checkpoints to increase transactional throughput Two types of checkpoints: • Full or “sync” • Fuzzy

  40. Steps of a Sync Checkpoint 1. The engine blocks threads from entering "critical sections" of code 2. The page cleaner thread flushes the physical log buffer to the physical log on disk 3. The page cleaner threads flush to disk all modified pages in the buffer pool (chunk write) 4. The checkpoint thread writes a checkpoint record to the logical log buffer 5. The logical log buffer is flushed to the current logical log file on disk 6. The physical log on disk is logically emptied (current entries can be overwritten) 7. The checkpoint thread updates the reserved pages with the checkpoint record information

  41. What Causes Sync Checkpoints to Occur? • Physical log becomes 75% full • "onmode -c" or "-ky" • Administrative actions (adding dbspaces, altering tables) • A backup or restore operation using ontape or ON-Bar • End of fast recovery or full recovery • Reuse of a logical log containing the oldest fuzzy operation not yet synced to disk • LTXHWM reached with fuzzy transactions • Through the onmonitor menu • Good and bad: data on disk is logically and physically consistent, but checkpoints interrupt user activity, are expensive (an I/O hit), and must be tuned to reduce duration

  42. Fuzzy Checkpoints Record “results” of fuzzy operations in the logical log Goal: reduce or eliminate checkpoint interruptions, eliminate the physical log (over 3 - 4 phases) Cost: increased logical recovery time Note: Fuzzy checkpoints are not used in HADR environments

  43. Fuzzy Checkpoints Things to know: • Insert, update, and delete DML statements • Currently supports "built-in" data types only (character, numeric values) • Logged databases • Not "old" pages - determined by page timestamp (4 bytes, cycles 2 GB -> -2 GB -> 0) • No BEFORE images are generated in the physical log • During "fuzzy" checkpoints, buffered information on fuzzy operations is **not** flushed to disk • After a fuzzy checkpoint, disks are not physically consistent

  44. Steps of a Fuzzy Checkpoint 1. The engine blocks threads from entering "critical sections" of code 2. The page cleaner threads flush to disk all **non-fuzzy** modified pages in the buffer pool (chunk write) Note: MLRU queues still full 3. A Dirty Page Table (DPT) is constructed containing entries (buffer addresses) for fuzzy operations 4. The DPT is flushed to the current logical log on disk 4A. The page cleaner thread flushes the physical and logical log buffers to disk 5. The checkpoint thread updates the reserved pages with the checkpoint information, including the Log Sequence Number (LSN) 6. The physical log on disk is logically emptied (current entries can be overwritten)

  45. Log Sequence Number (LSN) • A value representing a position in the logical log (log # and log position) • Stored in each dirty buffer header containing fuzzy operation results, along with a newly-stored timestamp • The oldest LSN is tracked in shared memory because it contains the oldest fuzzy operation. The oldest LSN cannot be further back than 1 logical log and 2 checkpoints • Logs after the LSN cannot be freed for reuse until a checkpoint has occurred Net net -- large logical logs are GOOD!!!

  46. What Causes Fuzzy Checkpoints to Occur? • Checkpoint interval has elapsed • Physical log becomes 75% full • onmode -c fuzzy • Reuse of a logical log containing the last checkpoint (either fuzzy or sync)

  47. Recovery - Sync Checkpoint 1. Physical log data is used to return all disk pages to their original "synced" state (physical restore) 2. The most recent checkpoint (sync) record is located in the logical log files 3. All subsequent logical log records are rolled forward 4. Uncommitted transactions are rolled back

  48. Recovery - Fuzzy Checkpoint 1. Physical restore proceeds as normal; however, disk pages are not physically consistent since there are modified pages that were not logged in the physical log 2. Logical recovery proceeds in two phases: • Phase A -- reapply DPT records • Phase B -- roll forward post-checkpoint records

  49. Recovery - Fuzzy Checkpoint Phase A - Logical Recovery: 1. Locate the DPT created before the last fuzzy checkpoint 2. Selectively apply fuzzy operations from the DPT/logical log, from the oldest remaining logical log up to the most recent checkpoint. The update is conditional on the page timestamp - if the DPT record is "newer" than the page timestamp, the change is applied. Result: similar to the end of a physical restore when using sync checkpoints

  50. Recovery - Fuzzy Checkpoint Phase B - Logical Recovery: 1. Beginning with the last checkpoint, roll forward subsequent logical log records 2. Uncommitted transactions are rolled back Result: similar to the logical restore when using sync checkpoints. Disks are physically and logically consistent to the last recorded transaction
