
DB-10: What's New in the OpenEdge RDBMS




Presentation Transcript


    1. OpenEdge 10.1B RDBMS & SQL Enhancements DB-10: What’s New in the OpenEdge® RDBMS

    2. DB-10: What’s New in the OpenEdge® RDBMS 10.1B – The Enterprise Release Very Large Database Support 64-bit features More areas & memory User Activity Insight New VSTs Ongoing online maintenance improvements Enable AI online

A primary focus of OpenEdge 10.1B is on the essential needs of enterprise customers. For the database, this means changes such as a new 64-bit "large integer" data type and large database features, including 64-bit sequences, large index key entries, 64-bit ROWIDs/DBKEYs, and the ability to define storage areas numbered up to 32,000. Large database support is one of the critical features of this release, in particular the removal of the two-billion row limit in the database. Also for enterprise customers, 10.1B continues to improve upon the online maintenance features, such as support for enabling after-imaging online.

    3. DB-10: What’s New in the OpenEdge® RDBMS Agenda Large Database Support Migration & Compatibility User I/O by Table / Index Online Maintenance OpenEdge SQL Enhancements

    4. DB-10: What’s New in the OpenEdge® RDBMS Agenda 64-bit ROWIDs INT64 data type 64-bit Sequences 32,000 storage areas per database Shared memory limits Large index key entries

    5. DB-10: What’s New in the OpenEdge® RDBMS 10.1B Large Database Support

In OpenEdge 10.1B, PROUTIL provides a way to interrogate which features are enabled for a database, without having to write ABL code to query the _Database-Feature and _Code-Feature tables:

PROUTIL dbname -C describe

This shows the currently enabled features for a given 10.1B+ database. The full list of possible features (Feature ID, Name): 1 OpenEdge Replication; 3 DataXtend Remote Edition; 4 Reserved; 5 Large Files; 6 Database Auditing; 7 JTA; 8 After Image Management/Archiver; 9 64-Bit DBKEYs; 10 Large Keys; 11 64-Bit Sequences; 12 DataXtend Integration Edition; Save Key Events; Failover Clusters.

Which 10.1B features are available depends on the release number and status of the database:

10.1A migrated: An existing OpenEdge database opened with a 10.1B executable (for write access) will automatically have support for large ROWID (DBKEY) entries. Note that in all cases, the 64-bit ROWID/DBKEY applies only to Type 2 storage areas. Support for Type 1 storage areas would require a full dump and load, and that level of impact is not a goal of this release.

New 10.1B: A database newly created with Release 10.1B will have all large database features enabled. Note that large index key entries are only available on databases with 4K and 8K block sizes, and 64-bit ROWIDs apply to Type 2 storage areas only. The INT64 data type is available on 10.1B for all databases.
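Scripting around the feature list is straightforward once the describe output is captured. The sketch below parses active features from sample output; the sample text and its column layout are assumptions for illustration, since the real utility's output format may differ:

```python
# Sketch: pull the active feature names out of `proutil dbname -C describe`
# output. SAMPLE_DESCRIBE is an illustrative layout, not the utility's
# documented output format.
SAMPLE_DESCRIBE = """\
Database Features

  ID    Feature                            Active
  ----  ---------------------------------  ------
     9  64 Bit DBKEYS                      Yes
    10  Large Keys                         Yes
    11  64 Bit Sequences                   Yes
"""

def enabled_features(describe_text):
    """Return the feature names marked active in a describe listing."""
    features = []
    for line in describe_text.splitlines():
        # Columns are separated by runs of two or more spaces.
        parts = [p.strip() for p in line.split("  ") if p.strip()]
        if len(parts) >= 3 and parts[0].isdigit() and parts[-1] == "Yes":
            features.append(parts[1])
    return features
```

A monitoring script could use this to alert when, say, 64-bit sequences are unexpectedly disabled on a restored database.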

    6. DB-10: What’s New in the OpenEdge® RDBMS 64-Bit ROWIDs 10.1B removes the 2 billion row limit Rows per area: ~281 trillion Rows per DB: ~9 quintillion Type II Storage Areas only Increased addressable space using 64 bits User visibility limited to OpenEdge Management PROMON Utilities Database .lg file and PROMSGS Find by ROWID

A DBKEY addresses a block within an OpenEdge storage area. A 32-bit DBKEY can access 2 billion blocks. Take the trivial example of one record per block, and the result is 2 billion records of address space within a storage area. With one table per storage area, there can be only 2 billion rows per table. Release 10.1B removes the 2-billion row limit (for Type 2 storage areas only) by increasing the addressable space for a storage area to 64 bits, and thus a 64-bit DBKEY. 64-bit DBKEY values are visible in PROMON, utilities, and output to the database log file. Max rows per area: ~281 trillion. Max rows per database: ~9 quintillion.
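The quoted limits can be checked with simple arithmetic. The per-area figure of ~281 trillion matches 2^48, which suggests 48 of the 64 DBKEY bits address rows within a single area; that split is an inference from the numbers on the slide, not a documented layout:

```python
# Rough arithmetic behind the slide's row limits.
# ~281 trillion per area matches 2**48 addressable rows, suggesting 48 of
# the 64 DBKEY bits address rows within one area (an inference from the
# quoted figures, not a documented bit layout).
rows_per_area = 2 ** 48
rows_per_db = 2 ** 63 - 1            # positive range of a signed 64-bit DBKEY

print(f"{rows_per_area:,}")          # 281,474,976,710,656  (~281 trillion)
print(f"{rows_per_db:,}")            # 9,223,372,036,854,775,807 (~9 quintillion)
```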

    7. DB-10: What’s New in the OpenEdge® RDBMS INT64 Data Type Available to all 10.1B databases Store integers beyond the 32-bit range 32-bit range: -2,147,483,648 to +2,147,483,647 64-bit range: -9,223,372,036,854,775,808 to +9,223,372,036,854,775,807

OpenEdge 10.1B sees the addition of a new data type, INT64, to the OpenEdge database (and to the ABL and SQL). Use the INT64 data type to store integers greater than 2,147,483,647 or less than -2,147,483,648. It is through internal use of the INT64 data type that the database is able to support many of the large features, such as large ROWIDs and large sequences.
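The two ranges map directly onto signed two's-complement 32-bit and 64-bit integers, which a short sketch can demonstrate:

```python
# Signed two's-complement ranges behind the INTEGER and INT64 types.
INT32_MIN, INT32_MAX = -2 ** 31, 2 ** 31 - 1
INT64_MIN, INT64_MAX = -2 ** 63, 2 ** 63 - 1

def fits_int32(value):
    """True when a value is storable in the 32-bit INTEGER type."""
    return INT32_MIN <= value <= INT32_MAX

print(fits_int32(2_147_483_647))   # True: the 32-bit ceiling
print(fits_int32(2_147_483_648))   # False: one past it, needs INT64
```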

    8. DB-10: What’s New in the OpenEdge® RDBMS Migrating to INT64 Can change existing INTEGER fields to INT64 Dump and load NOT required Manual conversion NOT required Index rebuild NOT required Exclusive schema access required Re-compile IS necessary One-way conversion only Existing database can upgrade VST values: proutil <dbname> -C updatevst

In the database, the INT64 type can be specified for a new field, or for an existing INTEGER field. You can change an existing INTEGER field to INT64 without a dump and load, or a manual conversion of the field's data (as would be required when changing any other field's data type); any impacted index will not need to be rebuilt. A re-compile IS necessary, since changing a field's data type causes a CRC change to the table, so any r-code that references the table will be invalid after the data type is changed and will require compiling. No physical change is made to the data or indexes involved when a change is made to INT64. You cannot change from INT64 back to INTEGER, except by doing a dump and reload. Changing the data type of a key field in a table (e.g. Customer.CustNum) will not report on any key constraints, or change other instances of the key field (in other tables). The type change from INTEGER to INT64 requires an exclusive schema lock, so it is not an online activity. There are no anticipated differences in performance between an INT64 and an INT data type, for the database or for utilities such as index rebuild; the BIWs should be able to absorb any potential performance hit (Enterprise licenses).

    9. DB-10: What’s New in the OpenEdge® RDBMS Using INT64 All tools work with INT64 Data Dictionary, Admin, DB Navigator, etc. Dump/Load Data and Definitions Reports Applying incremental (delta) .df files No impact on target database indexes

update field "Cust-num" of "customer" as int64

The "Dump Data Definitions" and "Load Data Definitions" options in the Data Admin tool continue to work with INT64 fields. The 'Create incremental .df file' utility (Data Admin tool) detects and allows INTEGER fields to be changed to INT64, generating an incremental .df file accordingly. Changing fields from INT to INT64 that participate in indexes has no effect on the index, which will not need to be redefined or rebuilt in any way. Generally, an incremental .df file will drop/create a changed field. For INTEGER-to-INT64 changes, the field is updated to the new type instead, and data is not lost (the same is NOT true in the reverse direction: INT64 fields being changed would be dropped and recreated as INT). Note that the INT64 data type is only available for 10.1B databases. The Data Dictionary and the Data Administration tool will prevent users from creating INT64 fields or updating INTEGER fields to INT64 if the database is served by a pre-10.1B database server (also valid for load of data definitions). A sample incremental .df file:

UPDATE FIELD "BinNum" OF "Bin" AS int64
UPDATE FIELD "CustNum" OF "Customer" AS int64
UPDATE FIELD "Discount" OF "Customer" AS int64
.
PSC
cpstream=ISO8859-1
.
0000000145
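The key property of a delta like the one above is that it uses UPDATE FIELD (data preserved) rather than a DROP/ADD pair (data lost). A small sketch that classifies the statements in such a fragment; the line-oriented parsing is an assumption based on the sample shown, not a full .df grammar:

```python
# Classify statements in an incremental .df fragment as data-preserving
# (UPDATE FIELD) or data-losing (DROP FIELD). Based on the sample delta
# shown in the transcript; real .df files have a richer grammar than this.
DELTA_DF = """\
UPDATE FIELD "BinNum" OF "Bin" AS int64
UPDATE FIELD "CustNum" OF "Customer" AS int64
UPDATE FIELD "Discount" OF "Customer" AS int64
"""

def classify(df_text):
    """Return (updated_fields, dropped_fields) found in a .df fragment."""
    preserved, dropped = [], []
    for line in df_text.splitlines():
        words = line.split()
        if words[:2] == ["UPDATE", "FIELD"]:
            preserved.append(words[2].strip('"'))
        elif words[:2] == ["DROP", "FIELD"]:
            dropped.append(words[2].strip('"'))
    return preserved, dropped
```

A pre-deployment check could refuse to apply a delta whose dropped list is non-empty for fields that should survive the upgrade.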

    10. DB-10: What’s New in the OpenEdge® RDBMS Existing INTEGER to INT64 Default format same as INTEGER Character mode supported DB Navigator supported

Updating an existing INTEGER field – GUI: The Data Dictionary allows an existing INTEGER field to have its data type changed to INT64. You cannot change an INT64 field (back) to an INTEGER data type. A new button has been added for INT fields to support the one-way change to INT64; if a change is made, a confirmation dialog is presented. When adding a new field, a new entry named 'INT64' is available in the 'Data Type' combo-box. The default format for INTEGER and INT64 fields is '->,>>>,>>9'. A new format for INT64 fields, '->,>>>,>>>,>>9', has been added to the examples shown when the user clicks the 'Examples…' button next to the 'Format' field. The Dictionary allows any value between -9,223,372,036,854,775,808 and +9,223,372,036,854,775,807 (inclusive) as the 'Initial Value' when the data type is INT64. The default 'Initial Value' is '0', the same as for INTEGER fields. Click the '-> int64' button and the Data Dictionary asks for confirmation and alerts you that once the field's type is changed to INT64 and committed, the only way to go back is through a manual conversion, or dump/load. The dialog contains a toggle-box labeled 'Don't show me this again (for this session)'; check it and the confirmation message is not displayed again during the current session. Once a field's type is changed to INT64, the '-> int64' button becomes disabled; it is not displayed for INT64 fields.

Updating an existing INTEGER field – TTY: When modifying an existing INT field in TTY, the 'Data-Type' field is enabled for input. Only 'INT64' will be accepted as a valid change (you can enter other values, e.g. 'character', but they are ignored). A confirmation dialog is presented to confirm the change; once made and committed, the only way to change the field back is through a manual conversion or a dump and load. The 'Data Type' field is updatable only for INTEGER data types; the field remains read-only for all other data types. The TTY Data Dictionary tool allows any value between -9,223,372,036,854,775,808 and 9,223,372,036,854,775,807 (inclusive) as the 'Initial Value' for an INT64.

    11. DB-10: What’s New in the OpenEdge® RDBMS Enabling 64-bit Sequences New databases – implicitly enabled Pre-existing databases – explicitly enabled No r-code version change Enable via dbadmin, Data Dictionary or: proutil <dbname> -C enableseq64

The two-billion lower/upper limit on sequences before 10.1B meant that some customers had to implement some other mechanism to generate values outside that range. The limit is imposed by the 32-bit nature of sequences, which support numbers in the range -2^31 to 2^31-1 (-2,147,483,648 to 2,147,483,647). OpenEdge 10.1B removes this limit with the introduction of 64-bit sequence values, which have a range of -2^63 to 2^63-1. The default for sequences in 10.1B+ databases is 64-bit (i.e. the 10.1B "empty" database has 64-bit sequences).

You can enable large sequences on a pre-existing database using PROUTIL ENABLESEQ64. Running this utility physically alters the layout of the sequence structure block and converts the data type of the fields _Sequence._Seq-Init, _Sequence._Seq-Min, _Sequence._Seq-Incr and _Sequence._Seq-Max to INT64. The size allocated to each sequence is doubled from 4 bytes to 8 bytes, allowing the greater range: from the 32-bit range of -2,147,483,648 to 2,147,483,647, to the 64-bit range of -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. As the size of the existing sequence structures increases, and depending on the number of existing sequences in the database, the process of enabling 64-bit sequences may result in additional sequence blocks being created.

The maximum number of sequences allowed per database has not changed. The number of unique sequences supported in a database is determined by block size; the formula is: Maximum Sequence Number = 250 * (Block Size / 1024). (Note the implication that an application using more than 250 sequences will not work on all database block sizes.)

Database Block Size / Maximum Number of Sequences: 1024 bytes (1K) – 250; 2048 bytes (2K) – 500; 4096 bytes (4K) – 1000; 8192 bytes (8K) – 2000

A pre-10.1B database enabled to support 64-bit sequences cannot be reverted back to 32-bit sequences:

Revert: This database contains 64 bit sequences.
Revert: The database can not be reverted

The utilities to dump and load sequence definitions (in the Data Admin tool) also handle the definitions and values of large sequences. Note: prior to OpenEdge release 10.1B, overflows did NOT raise an error when loading sequence values, so you would never get an error when loading sequence values that were too big for an integer value. Starting in 10.1B, the load of sequence definitions and sequence values will report any overflow errors, depending on the sequence type (32- vs. 64-bit), and abort the load process.
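The per-block-size sequence limits above all fall out of the one formula, which can be sketched directly:

```python
# The transcript's formula for the sequence limit per database:
#   Maximum Sequence Number = 250 * (Block Size / 1024)
def max_sequences(block_size_bytes):
    """Maximum number of unique sequences for a given database block size."""
    return 250 * (block_size_bytes // 1024)

for kb in (1, 2, 4, 8):
    print(f"{kb}K block size -> {max_sequences(kb * 1024)} sequences")
```

This makes the portability caveat concrete: an application defining, say, 600 sequences deploys cleanly on 4K and 8K databases but not on 1K or 2K ones.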

    12. DB-10: What’s New in the OpenEdge® RDBMS Removal of 1,000 Area Limit Maximum area number of 32,000 supported 32,000 areas assumed # extents/area increased to 1024 from 1000 Max extent size remains at 1TB Startup parameter conserves memory allocation and restricts feature use

The OpenEdge 10.1B RDBMS supports a maximum storage area number of 32,000 (increased from a limit of 1,000 in earlier releases). Areas reminder: the first six storage area numbers are reserved. Currently, the OpenEdge RDBMS uses only three of these storage areas (areas 1, 3, and 6); the rest (areas 2, 4, and 5) are reserved for future use and cannot be used for application data. Area 1, Control, contains a description of the database structure and acts as the table of contents of a database. Area 3, Before-Image (BI), contains BI transaction notes used to recover a database from a database crash. Area 6, Schema, contains the database schema.

There is a new database startup parameter (i.e. on proserve, proutil, prostrct, etc.) called "-maxAreas nnnnn", which can be used to conserve shared memory usage (and assumes that areas above the value of this parameter will not be added online in the current session). If used, set this parameter to the highest area number needed in the session. If this parameter is not used, memory will be allocated for all 32,000 areas, even though they may not all be in use. The amount of memory allocated for unused areas is minimal, but the parameter may be required on highly stressed systems. Note that this refers to areas; adding extents to areas does not impact this limit. (Open question: how much memory is allocated per in-use area, and per unused area?)

    13. DB-10: What’s New in the OpenEdge® RDBMS Shared Memory -B increased from 125 million to 1 billion for 64 bit Kernel settings must be set properly New startup parameter -shmsegsize n (max shm seg size in MB) MAXUMEM and SHMALL may also need updating OS SHMMAX… must be set properly -B 125,000,000 not changed (953 Gb at 8192) Some customers have large production systems with huge amounts of memory. Release 10.1B removes the limit on shared memory segment size by allowing the customer to specify the number of gigabytes per shared memory segment (the limit is presently 1GB per segment). This allows large databases to be effectively put into memory. Having fewer and larger shared memory segments should be more efficient because there are less system resources to manage (although some platforms, e.g. HP, suggest more segments rather than fewer is faster). OpenEdge 10.1B uses a new startup parameter “-shmsegsize” to allow the maximum shared memory segment size (in MB for 32-bit platforms, and GB for 64-bit platforms) as follows: For 32-bit platforms -shmsegsize will support the following values: 128, 256, 512, 1024, 2048 MB; default is 128MB For 64-bit platforms –shmsegsize will support the following values: 1, 2, 4, 8, 16, and 32 GB; default is 1GB These settings have been tested up to 16TB. The setting for the parameter is validated by actually attempting to allocate the memory; if it fails, an error will be raised, and the server will not start. On 32-bit platforms a maximum 4 GB of shared memory is addressable in 10.1B, so the number of segments multiplied by the segment size can never exceed 4 GB. On 64-bit platforms, the amount of shared memory that is addressable in 10.1B is much larger and is only limited by the computer physical resources. On most UNIX platforms the shared memory kernel settings control and limit the shared memory segment sizes and the number of shared memory segments that one process can have opened at a time. 
The following kernel setting effect shared memory allocations on most UNIX platforms: -SHMMAX = the maximum size (in bytes) of a single shared memory segment. -SHMMIN = the minimum size (in bytes) of a single shared memory segment. -SHMMNI = the number of shared memory identifiers. -SHMSEG = the maximum number of shared memory segments that can be attached by a process. The values of these kernel settings vary by platform and may not even be needed for one platform or another. For example, the SHMMAX kernel setting is not required on AIX, and the SHMSEG kernel setting is always 9 for 32-bit AIX. History In OpenEdge 10.1A and earlier, the method of implementing shared memory is built into the product at compile time and includes the following information: -There is one description for 32-bit platforms and one description for 64-bit platforms. -The maximum size of each shared memory segment (128 Mb for 32-bit platforms and 1Gb for 64-bit platforms) -The maximum number of shared memory segments This method of describing shared memory offers no flexibility, and neither can runtime decisions be made concerning shared memory. MAXUMEM and SHMALL may also need updating OS SHMMAX… must be set properly -B 125,000,000 not changed (953 Gb at 8192) Some customers have large production systems with huge amounts of memory. Release 10.1B removes the limit on shared memory segment size by allowing the customer to specify the number of gigabytes per shared memory segment (the limit is presently 1GB per segment). This allows large databases to be effectively put into memory. Having fewer and larger shared memory segments should be more efficient because there are less system resources to manage (although some platforms, e.g. HP, suggest more segments rather than fewer is faster). 
OpenEdge 10.1B uses a new startup parameter, -shmsegsize, to set the maximum shared memory segment size (in MB on 32-bit platforms, in GB on 64-bit platforms) as follows: on 32-bit platforms -shmsegsize supports the values 128, 256, 512, 1024, and 2048 MB (default 128 MB); on 64-bit platforms -shmsegsize supports the values 1, 2, 4, 8, 16, and 32 GB (default 1 GB). These settings have been tested up to 16 TB. The value of the parameter is validated by actually attempting to allocate the memory; if the allocation fails, an error is raised and the server will not start. On 32-bit platforms a maximum of 4 GB of shared memory is addressable in 10.1B, so the number of segments multiplied by the segment size can never exceed 4 GB. On 64-bit platforms, the amount of addressable shared memory is much larger and is limited only by the computer's physical resources. On most UNIX platforms, the shared memory kernel settings control and limit the shared memory segment sizes and the number of shared memory segments that one process can have open at a time.
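Under the segment-size rules above, the number of segments the kernel must allow one process to attach (SHMSEG/SHMMNI) is a simple ceiling division; the values below are hypothetical:

```shell
# Hypothetical 64-bit configuration: ~10 GB of total shared memory requested,
# started with -shmsegsize 4 (4 GB segments). Ceiling division gives the
# number of segments the OS must let a single process attach.
total_gb=10
segsize_gb=4
segments=$(( (total_gb + segsize_gb - 1) / segsize_gb ))
echo "$total_gb GB at $segsize_gb GB/segment -> $segments segments"   # -> 3 segments
```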

    14. DB-10: What’s New in the OpenEdge® RDBMS Large Index Key Entries Removes the 192* byte limit 3 bytes per Unicode character Allows 1,970* bytes of user data 4K and 8K block sizes only Includes temp-tables (-tmpbsize 1, 2, 4, 8) Temp tables and -Bt Large keys enabled by default on new databases Present index key width limits are approximately 192 bytes. Applications built for deployment with multi-byte Unicode characters are subject to the 192-byte limit, but at 3 bytes per character the application faces severe index size limits, not nearly enough for many uses. Release 10.1B extends the index key width limit substantially for databases with 4K or 8K block sizes, which support Large Index Key Entries of 1,970 bytes of user data. Note this applies to 4K and 8K databases only; databases with block sizes of 1K or 2K remain limited to 192 bytes of user data. In order for large index key entries to be supported in TEMP-TABLES, the default block size of the temp-table database has been changed from 1K to 4K; it can be set to 1, 2, 4, or 8 via the -tmpbsize client startup parameter (if a 1K or 2K block size is used, large keys are not supported and an error message is displayed when a record insertion is attempted). NOTE: Users may have to review their usage of -tmpbsize and -Bt (number of buffers for temp-tables, measured in blocks). If using -Bt without -tmpbsize, more memory will be allocated, since the default block size has increased from 1K to 4K. Note also that these figures vary with respect to the bytes available to the application: some bytes are used internally for index component separation and special character escape information, so the limit is not necessarily the number of characters a user can store in an index entry. In other words, assuming one component and no special characters, the user can store 1,970 bytes of data in a large index entry and 192 bytes in a small index entry. 
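The -Bt note above can be made concrete with a small sketch; the -Bt value is hypothetical, while the 1K-to-4K default change comes from the text:

```shell
# Memory allocated for temp-table buffers is roughly -Bt (buffer count)
# times -tmpbsize (block size in KB). With the default block size moving
# from 1K to 4K, an unchanged -Bt allocates four times the memory.
bt=2000                 # hypothetical -Bt setting
old_kb=$(( bt * 1 ))    # pre-10.1B default: -tmpbsize 1
new_kb=$(( bt * 4 ))    # 10.1B default: -tmpbsize 4
echo "-Bt $bt: $old_kb KB before, $new_kb KB now"   # 2000 KB -> 8000 KB
```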
To summarize, the maximum index key sizes supported as part of the Large Key Entries project in OpenEdge 10.1B are: small index keys support 192 bytes of user data (databases with 1K and 2K block sizes); large index keys support 1,970 bytes of user data (databases with 4K and 8K block sizes).
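Applying the 3-bytes-per-Unicode-character figure from the slide, the approximate worst-case character capacity of each key size (one component, no special characters) works out as:

```shell
# Worst-case UTF-8 capacity of an index key at 3 bytes per character.
small_chars=$(( 192 / 3 ))     # 1K/2K block databases
large_chars=$(( 1970 / 3 ))    # 4K/8K block databases (floor division)
echo "small key: ~$small_chars chars, large key: ~$large_chars chars"   # ~64 vs ~656
```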

    15. DB-10: What’s New in the OpenEdge® RDBMS Enabling Large Index Key Entries Via proutil command line proutil <dbname> -C enablelargekeys Enabling using Data Admin / Data Dictionary Large key support is enabled by default on newly created 10.1B databases (4K and 8K block sizes). Databases created in releases before 10.1B need large keys to be enabled manually, either by running PROUTIL ENABLELARGEKEYS (new in 10.1B) or through the Data Admin (GUI) / Data Dictionary (TTY) tool.

    16. DB-10: What’s New in the OpenEdge® RDBMS Agenda Large Database Support Migration & Compatibility User I/O by Table / Index Online Maintenance OpenEdge SQL Enhancements

    17. DB-10: What’s New in the OpenEdge® RDBMS Migrating an OpenEdge Database to 10.1B Perform a backup Test your backup Disable Replication, AI, JTA, 2PC Truncate bi file Utilities available in <install>\bin\101dbutils 101dbutils/101a_dbutil <dbname> -C truncate bi This codebase cannot open a 32-bit DBKEY database if the BI file is not truncated. (13711) Use the 10.1A proutil utility to truncate the BI file. (13727) To migrate an existing (i.e., pre-10.1B) OpenEdge database, it is necessary to do the following, assuming the features mentioned below are in use: Disable AI; it is not compatible across releases, and existing AI files cannot be applied against a 10.1B database. Disable JTA, 2PC, and Replication. Truncate the BI file, using either 10.1A or the 10.1A utilities provided with 10.1B (see the folder bin\101dbutils in the OpenEdge installation directory). To then migrate an existing OpenEdge 10 database, simply open it with any 10.1B executable (one that is NOT read-only!) and the database becomes a 10.1B database. In the 10.1B release, any database opened is automatically marked as 64-bit enabled in the feature bit mask array in the masterblock. This identifies the database as having the structural changes available in 10.1B (and forward), and acts as protection against use by a lower release version in the OpenEdge 10 family of releases. Note that when migrating an existing database, some OpenEdge 10.1B features must be explicitly enabled (more details on this later). A database newly created with OpenEdge 10.1B has all of the new features available by default. An OpenEdge 10.1B database is not accessible by pre-10.1B clients directly; they can connect via a 10.1B server, but there are restrictions depending upon the 10.1B features in use. There is a REVERT option (back to 10.1A), but it will not always be applicable; see later for more information. 
Note: The way a pre-10.1B client reacts when trying to open a 10.1B database may vary depending on the release; 10.1A01 and later will read the database masterblock before opening the database for access, check the feature list, and report a mismatch. Releases prior to this may open the database first, in which case the CLIENT will crash.

    18. DB-10: What’s New in the OpenEdge® RDBMS Migrating an OpenEdge Database to 10.1B Open a pre-existing OpenEdge database with 10.1B No longer directly accessible by pre-10.1B executables Not all 10.1B features implicitly enabled PROMSGS For V9 Databases simply convert proutil <dbname> -C conv910

    19. DB-10: What’s New in the OpenEdge® RDBMS Release Compatibility Client Server Remote Connection through a server allowed Can read, but not store (update/create), 64-bit fields Client disconnected if it tries to access dbkey or rowid > 31 bits (2 billion) Sequence > 31 bits Must re-compile if INT64 in use (otherwise OK) CRC changes prevent old r-code running Direct Connect Error Message An invalid feature 9 has been encountered in the database's Enabled feature list. (11727) The list of enabled features in database x contains features that are not recognized by this codebase (11810) All previous OpenEdge 10 clients are allowed to connect to a 10.1B OpenEdge server, but they are never returned 64-bit DBKEYs or sequence data. As soon as one of these older clients tries to access data beyond the 2 GB boundary, i.e. data that cannot be addressed via a 32-bit DBKEY, the client is disconnected with reason code 44 and a message is logged to the database log. The following conditions cause an old client to be disconnected from the database: the server attempts to return a 64-bit DBKEY when reading or creating a record; the server attempts to return a 64-bit sequence; the server attempts to return a sequence value > 2^31; the server attempts to return a 64-bit block number or root block for an object; the server attempts to return a BLOB or CLOB located above the 32-bit address space. There is a CRC change for any table with a 64-bit column, whether old or new, which prevents existing r-code from running against it. New attempts to compile or do anything with the new column vary depending on the release, but all are safe. The 10.1A client has a read-only capability for 64-bit columns, so it can compile against a 64-bit table and read it. But the values coming back cannot be stored back, stored in any local variable or temp-table, or used in any expression; access is essentially read-only. 
A version 9.x client gets a fatal message when it sees the new data type, and existing r-code will fail due to CRC errors.
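The 2-billion boundary cited above is just the signed 32-bit integer range: a DBKEY, ROWID, or sequence value needs more than 31 value bits once it reaches 2^31.

```shell
# The cutoff at which an older client would be disconnected: values at or
# above 2^31 no longer fit in a signed 32-bit DBKEY/ROWID/sequence.
limit=$(( 1 << 31 ))
echo "31-bit limit: values >= $limit require 64-bit support"   # 2147483648
```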

    20. DB-10: What’s New in the OpenEdge® RDBMS Backup Compatibility Backup/restore incompatibilities Backup 10.1A DB via 101a_dbutil Restore 10.1A backup if no recovery log data Restored database is always 10.1B Binary Load Compatible if no large keys Compatible if < 2 billion rows Compatible if no INT64 data types (CRC)

    21. DB-10: What’s New in the OpenEdge® RDBMS 10.1B Large Database Features PROUTIL dbname -C describe Shows the currently enabled features for a given 10.1B+ database. The details displayed are largely self-explanatory. Active - the value of this column may be “Yes” or “No”. Details - if there is any special information associated with the feature, it is shown in this column. For example, if OpenEdge Replication is enabled for the database, the type of database is shown in this column. A 10.1A database that has not been accessed with a 10.1B executable will report the following when PROUTIL DESCRIBE is used: Database Features No special Database Capabilities enabled

    22. DB-10: What’s New in the OpenEdge® RDBMS 10.1B Large Database Features

    23. DB-10: What’s New in the OpenEdge® RDBMS

    24. DB-10: What’s New in the OpenEdge® RDBMS Reverting a Database proutil <dbname> -C revert Reverts a migrated database back to 10.1A Single user mode only Back up the database before starting!! Changes internal structures back to 10.1A Disables 2PC, JTA, Replication, AI Truncates the .bi file If PROUTIL REVERT determines that the database is a candidate for the revert process, it prompts for confirmation that a backup has been performed. Assuming an affirmative response is received, the process to physically revert the database to the 10.1A structure then begins. PROUTIL REVERT removes the 10.1B VSTs and attempts to replace them with the 10.1A VSTs, if the 10.1A _dbutil (in the <install-dir>/bin/101dbutils directory) can be executed. If this utility cannot be executed, a warning is written to the screen and the .lg file informing the user that they must use the 10.1A PROUTIL UPDATEVST command. The utility also disables two-phase commit, JTA, replication, AI management, and after-imaging if they are enabled; removes all feature bit mask settings for 10.1B features; and truncates the BI file for the database.

    25. DB-10: What’s New in the OpenEdge® RDBMS Sometimes It’s Tough To Go Back Cannot revert a database if INT64 data type in use Any Type II storage area has HWM > 32 bits Large index keys enabled Large sequences enabled Highest database area number exceeds 1,000 PROUTIL REVERT performs the following checks to determine whether a 10.1B database can be reverted; if any of these checks fail, the database remains un-reverted. On UNIX, the user must have the necessary UNIX file-level permissions to run the utility; this is the only security. If the schema has been changed to use the INT64 data type, the utility cannot revert the database. If a Type II storage area uses 64-bit blocks (high water mark), the utility cannot revert the database. If large index key entries have been enabled for the database, the utility cannot revert the database. If large (64-bit) sequences have been enabled for the database, the utility cannot revert the database (even if the sequence values are still within the 32-bit range). If any LOBs in the database have segments using 64-bit block values, the database cannot be reverted by the utility. If the area number for one or more areas exceeds 1000, the database cannot be reverted. Note that the utility does not check every record or index block to confirm the large database features are actually in use; this would take far too long on a large database.

    26. DB-10: What’s New in the OpenEdge® RDBMS Manual Revert Not always possible to re-load Can’t load large key entries If addressing above 10.1A limits ROWIDs, Sequences, INT64, etc… Would need a data conversion routine Must D&L data after massaging of data/structure Binary D&L checks CRC! On occasion it has been necessary for customers who have upgraded to a higher release within a database version to revert back to using an earlier release of Progress products. In the case of 10.1B an explicit action must be taken to ready the database for use with OpenEdge 10.1A or previous releases: the PROUTIL REVERT utility. The revert utility analyzes the database to see if it is a candidate for use with lower versions of the executables. If, after analysis, the database qualifies, it is reverted back to a 10.1A state. Note that it may not always be possible to REVERT a database; it may not even be possible to dump and load if the database has started to address blocks beyond the 31-bit limit. The utility runs offline, in single-user mode only, since this guarantees that outstanding two-phase commit or JTA operations have been resolved (committed or aborted).

    27. DB-10: What’s New in the OpenEdge® RDBMS Agenda Large Database Support Migration & Compatibility User I/O by Table / Index Online Maintenance OpenEdge SQL Enhancements Let’s talk about the large database features introduced in OpenEdge 10.1B…

    28. DB-10: What’s New in the OpenEdge® RDBMS User I/O By Table and Index Monitor table and index activity per user What I/O are users doing? Creates, reads, updates, deletes How is a query performing? VSTs included in 10.1B databases Update schema of pre-existing databases proutil <dbname> -C updatevst Table I/O per user: _UserTableStat Index I/O per user: _UserIndexStat OpenEdge 10.1B features two new VSTs that provide insight into the user I/O occurring at the table and index level, respectively. Prior to 10.1B it was only possible to see the total I/O occurring on a range of tables and indexes (through the use of VSTs). There are no plans to add this information to PROMON. This OpenEdge 10.1B feature gives more insight into what each user is doing, and thus how they may be affecting the rest of the application, making it useful as a diagnostic tool at runtime. The feature can also give insight into how efficient queries are in terms of table and index access, providing diagnostic information at development time as to whether a query needs refining. These new VST tables are included in the schema of newly created 10.1B databases, or they can be added to the schema of existing databases using PROUTIL UPDATEVST. 
Schema: _UserTableStat (-16430)

Field Name                   Type    Format       Label
_UserTableStat-Id            INTFLD  ->>>>>>>>>9  UserTableStat-Id
_UserTableStat-Conn          INTFLD  ->>>>>>>>>9  Connection
_UserTableStat-Num           INTFLD  ->>>>9       Table #
_UserTableStat-read          INTFLD  ->>>>>>>>>9  read
_UserTableStat-update        INTFLD  ->>>>>>>>>9  update
_UserTableStat-create        INTFLD  ->>>>>>>>>9  create
_UserTableStat-delete        INTFLD  ->>>>>>>>>9  delete

Table Name: _UserIndexStat (-16431)

Field Name                   Type    Format       Label
_UserIndexStat-Id            INTFLD  ->>>>>>>>>9  UserIndexStat-Id
_UserIndexStat-Conn          INTFLD  ->>>>>>>>>9  Connection
_UserIndexStat-Num           INTFLD  ->>>>9       Index #
_UserIndexStat-read          INTFLD  ->>>>>>>>>9  read
_UserIndexStat-create        INTFLD  ->>>>>>>>>9  create
_UserIndexStat-delete        INTFLD  ->>>>>>>>>9  delete
_UserIndexStat-split         INTFLD  ->>>>>>>>>9  split
_UserIndexStat-blockdelete   INTFLD  ->>>>>>>>>9  blockdelete
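Since these VSTs are visible to both ABL and SQL clients, they can be queried like ordinary tables. A minimal SQL sketch (assuming the VSTs are exposed in the pub schema, with the hyphenated field names from the schema above quoted; the WHERE clause is illustrative):

```sql
-- Per-user table activity, busiest tables first
SELECT "_UserTableStat-Conn"   AS conn,
       "_UserTableStat-Num"    AS tbl,
       "_UserTableStat-read"   AS reads,
       "_UserTableStat-update" AS updates
FROM pub."_UserTableStat"
WHERE "_UserTableStat-read" > 0
ORDER BY "_UserTableStat-read" DESC;
```

The same pattern applies to pub."_UserIndexStat" for per-user index activity.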

    29. DB-10: What’s New in the OpenEdge® RDBMS User I/O by Table and Index: Availability VST record access from ABL and SQL Available in R&D section of promon OpenEdge Management

    30. DB-10: What’s New in the OpenEdge® RDBMS Managing User I/O By Table and Index Shared Memory Usage: 32 bytes per table 40 bytes per index Control with startup parameters Only the start (base) parameter can be changed at runtime, via the _StatBase VST These are not new parameters!! The range of tables and indices monitored is defined by the server startup parameters: Tables: -basetable n -tablerangesize n Indices: -baseindex n -indexrangesize n Note that these are not new parameters; they are the same parameters used to gather table (_TableStat) and index (_IndexStat) VST statistics in previous OpenEdge 10 releases, and they continue to be used to support gathering those statistics. The base table and index can be changed at run-time by updating the _StatBase VST. The range cannot be changed because the backing shared memory needs to be pre-allocated. This is not new functionality. The shared memory cost is 32 bytes per table per user + 40 bytes per index per user, based on the size of the table and index ranges specified at startup. 
Example: 100 users monitoring 100 tables and 100 indexes: 100 (users) * 100 (tables) * 32 (bytes per table) + 100 (users) * 100 (indexes) * 40 (bytes per index) = 720,000 bytes (about 0.7 MB of shared memory)
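The shared-memory arithmetic above can be sanity-checked with a small helper (a sketch for illustration only, not part of any OpenEdge tooling):

```python
def user_stat_memory(users, table_range, index_range):
    """Per-user statistics cost: 32 bytes per table plus 40 bytes per index,
    for every table/index in the monitored ranges, per connected user."""
    return users * (table_range * 32 + index_range * 40)

# The slide's example: 100 users monitoring 100 tables and 100 indexes
print(user_stat_memory(100, 100, 100))  # 720000 bytes
```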

    31. DB-10: What’s New in the OpenEdge® RDBMS Agenda Large Database Support Migration & Compatibility User I/O by Table / Index Online Maintenance OpenEdge SQL Enhancements Let’s talk about the large database features introduced in OpenEdge 10.1B…

    32. DB-10: What’s New in the OpenEdge® RDBMS Online Maintenance Enable AI online: probkup online <dbname> x.bak enableai Enable AI management online: probkup online <dbname> x.bak enableai [enableaiarchiver -aiarcdir dirlist] rfutil <dbname> -C aimage aioff (introduced in 10.0A)

    33. DB-10: What’s New in the OpenEdge® RDBMS Online Maintenance prostrct addonline <dbname> newarea.st Addonline allows new areas to be added while after-imaging is enabled; a prompt allows you to continue. Rolling forward… SYSTEM ERROR: Area 21 was not found in the database. (13708) Please add the missing area according to the structure file in the source database, then use the roll forward retry utility to continue the operation. (13709)

    34. DB-10: What’s New in the OpenEdge® RDBMS Miscellaneous Defaults Database block size default changed from 1K to 8K (UNIX) Temp table block size defaults to 4K (-tmpbsize) Backout Buffer Enhancement – Part II Improved rollback for OLTP Enhanced Record Lock Release Improves logout & transaction end (-lkrela disables) Binary Load w/ Build Indexes Improved performance Uses the same index rebuild code

    35. DB-10: What’s New in the OpenEdge® RDBMS Agenda Large Database Support Migration & Compatibility User I/O by Table / Index Online Maintenance OpenEdge SQL Enhancements Let’s talk about the large database features introduced in OpenEdge 10.1B…

    36. DB-10: What’s New in the OpenEdge® RDBMS Agenda Large Database Support Migration & Compatibility User I/O by Table / Index Online Maintenance OpenEdge SQL Enhancements Wide table support Multi database query Query extensions Availability features Let’s talk about the large database features introduced in OpenEdge 10.1B…

    37. DB-10: What’s New in the OpenEdge® RDBMS Wide Table Support Table limits Removal of 500 column / field access limit New limit is effectively 5000 columns Limited by 32K maximum record size Increased compatibility with ABL Equivalent to or better than other SQL vendors With Release 10.1B, tables can be defined with any number of fields, up to the 32K record size limit, which is practically 5000 columns. Releases prior to 10.1B were limited to 500 columns per table for SQL tables; columns beyond the 500 limit were not accessible by OpenEdge SQL. In 10.1B, this limit is removed, in the interests of compatibility with the ABL, and of wide table support equivalent to or better than other SQL vendors.

    38. DB-10: What’s New in the OpenEdge® RDBMS Agenda Large Database Support Migration & Compatibility User I/O by Table / Index Online Maintenance OpenEdge SQL Enhancements Wide table support Multi database query Query extensions Availability features Let’s talk about the large database features introduced in OpenEdge 10.1B…

    39. DB-10: What’s New in the OpenEdge® RDBMS Multi-Database Query Some reporting tools – for example Crystal Reports – accept only SELECT statements. In such a case it is not possible to issue the CONNECT AS CATALOG statements, and an alternative connection/disconnection methodology is required. Therefore, it is possible to connect to auxiliary databases when connecting to the primary database. This is achieved through the use of a properties file, and modification to the primary database connection parameters, whether JDBC or ODBC. The properties file must be in the same directory as the database, and must be named <dbname>.oesql.properties, where <dbname> is the database name without the “.db” extension. For example: Sports2000.oesql.properties The properties file contains the following sections: [sql-configuration] configuration-names-list=<configname> [configuration.<configname>] database-id-list=<DB_config1>[, <DB_Confign> …] [database.<DB_config1>] Name=dbname Catalog=Catalog-name Location=[<FullPath>]<dbname> … To invoke the use of the properties file, append “[-mdbq:<configname>]” to the database name in the JDBC connection string/ODBC DSN. If the properties file is not found, or contains invalid syntax, the primary database is connected without the auxiliary databases. The auxiliary databases are disconnected when the primary database is disconnected.

    40. DB-10: What’s New in the OpenEdge® RDBMS Connecting to an Auxiliary Database Catalog – an alias for a database Connecting to an auxiliary database Must be on same host as Primary database Disconnecting a catalog There is no hard limit to the number of auxiliary databases that can be connected, other than operating system resources. So long as OpenEdge 10.1B can get the OS resources – memory, shared memory, file descriptors – to attach to those external resources, it will do so. The CONNECT AS CATALOG statement is used to provide access to multiple databases on a single client connection to an SQL server. Once connected, the catalog name for an auxiliary database is used in SQL statements to qualify schema, table, and column access. The catalog name is visible, and usable, only in the client-server session in which it is defined. The catalog name of the primary database is the database name by which the database is started (e.g., on the “proserve” command line), without the pathname. Syntax: CONNECT ’database/path’ AS CATALOG catalog_name; ‘database/path’ = The full path to the database directory and database name. This must be contained within quotes. catalog_name = Catalog name. Must be in the form of an SQL identifier, up to 32 bytes in length. An alias for the database, used to qualify schema, table, and column references. Establishes a read-only connection to a fully pathed database name in quotes. You can remove a connection to an auxiliary read-only database using the DISCONNECT CATALOG statement. Any user can execute this statement. Syntax: DISCONNECT CATALOG catalog_name; If an auxiliary database catalog is set as the default catalog, disconnecting that auxiliary catalog does not change the name of the default catalog.
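Putting the syntax together, a session might look like this (the path and catalog name are invented for illustration):

```sql
-- Attach a read-only auxiliary database under the catalog name mysports
CONNECT '/db/sports2000' AS CATALOG mysports;

-- ... run queries qualified by the catalog name ...

-- Detach it again
DISCONNECT CATALOG mysports;
```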

    41. DB-10: What’s New in the OpenEdge® RDBMS Four level naming convention Example Applies to schema, tables, columns, stored procedures 4 part naming – multi-database query When connected to auxiliary databases, use the catalog name with OpenEdge SQL query commands to qualify the database to which the query applies. The primary database is automatically assigned a catalog name the same as the database name. OpenEdge releases prior to 10.1B support 2 and 3 level names, for schema, tables (and stored procedures), and columns respectively, when referenced. Release 10.1B extends this naming to support 3 and 4 level naming (as defined in the SQL standard) with the addition of the catalog name naming-level. Names, according to the standard SQL syntax, are composed of 1, 2, 3, or 4 components, as described below: Catalog – a catalog is a named collection of schemas. In OpenEdge, a catalog logically corresponds to an OpenEdge database. Schema – a named collection of tables and other database objects. Tables Columns Tables have up to 3 components or levels: catalog.schema.tablename. Columns have up to 4 components or levels: catalog.schema.table.column-name. Similarly, a stored procedure can be referred to as “catalog.schema.proc-name”. This naming system is supported by many tools, such as Crystal, by competitors such as Microsoft SQL Server, and by related standards such as ODBC and JDBC. Catalog names cannot be used on DDL statements, such as ALTER or CREATE. Catalog names cannot be used with sequences. Synonyms: A public synonym, used without a qualifier, must exist in the set of synonyms defined for the primary database. A public synonym may be qualified with a catalog name, with or without a schema name, in which case the synonym must exist in the set of synonyms defined for the specified catalog. A private synonym must always be qualified by at least the schema name of the schema where that synonym exists. A private synonym, used with only a schema qualifier, must exist in the set of synonyms defined for the primary database. Stored procedures: A stored procedure in an auxiliary catalog can be called by explicitly qualifying the procedure name by the catalog name. The SQL statements executed from within a stored procedure are interpreted in light of the multiple catalogs active. Consider a stored procedure “order_parts” stored in an auxiliary database. Tables referenced from statements in “order_parts”, if they are not qualified by a catalog name, will refer to tables in the default catalog (the primary database) and NOT the auxiliary database. This may not be what was intended. To avoid such problems, fully qualify table names in stored procedures. call mysports.pub.order_parts(cItemCode)
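For example, assuming an auxiliary sports2000 database connected under the hypothetical catalog name mysports, a column can be referenced with all four levels:

```sql
-- catalog.schema.table.column
SELECT mysports.pub.customer.custnum,
       mysports.pub.customer.name
FROM mysports.pub.customer
WHERE mysports.pub.customer.balance > 1000;
```

In practice a table alias keeps the shorter forms readable; only the FROM clause needs the full catalog.schema.table qualification.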

    42. DB-10: What’s New in the OpenEdge® RDBMS Multi-Database: automatic connections
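A hypothetical sports2000.oesql.properties following the template described for multi-database query connections might read (the configuration name, database id, and paths are invented for illustration):

```
[sql-configuration]
configuration-names-list=mysales

[configuration.mysales]
database-id-list=ordersdb

[database.ordersdb]
Name=orders
Catalog=orders
Location=/db/orders
```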

    43. DB-10: What’s New in the OpenEdge® RDBMS Multi-DB configuration: JDBC – DBNavigator jdbc:datadirect:openedge://localhost:6748;databaseName=sports2000 Build showing URL with single and mdbq
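Following the “[-mdbq:<configname>]” convention described earlier, the multi-database form of the URL above might look like this (mysales is a hypothetical configuration name):

```
jdbc:datadirect:openedge://localhost:6748;databaseName=sports2000[-mdbq:mysales]
```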

    44. DB-10: What’s New in the OpenEdge® RDBMS ODBC – Multi-DB configuration Build showing single and then MDBQ connections

    45. DB-10: What’s New in the OpenEdge® RDBMS Crystal – with Multi-DB Query

    46. DB-10: What’s New in the OpenEdge® RDBMS Connection types Primary or single database connections SQSV (OpenEdge SQL Server) Main OpenEdge SQL server process connection to storage engine REMC (Remote Client) OpenEdge SQL client thread for each connection to the primary database Auxiliary connections SQFA (SQL Federated Agent) Main OpenEdge SQL connection to the auxiliary database SQFC (SQL Federated Client) OpenEdge SQL client thread for each connection to the auxiliary database

    47. DB-10: What’s New in the OpenEdge® RDBMS OpenEdge Management User Activity View – Primary Database Primary database connections

    48. DB-10: What’s New in the OpenEdge® RDBMS OpenEdge Management User Activity View – Auxiliary Database Auxiliary database connections

    49. DB-10: What’s New in the OpenEdge® RDBMS Agenda Large Database Support Migration & Compatibility User I/O by Table / Index Online Maintenance OpenEdge SQL Enhancements Wide table support Multi database query Query extensions Availability features Let’s talk about the large database features introduced in OpenEdge 10.1B…

    50. DB-10: What’s New in the OpenEdge® RDBMS Group By Expression Grouping by alias Grouping by expression OpenEdge SQL has been extended to support expressions in “Group By” in 2 ways: 1. An alias, from the SELECT list, can be used as a Group By, and that alias can refer to an expression in the select list or a simple column; 2. An expression can be explicitly used as a Group By. This is similar to the current support for expressions as Order By. Group By <alias> – where the alias can be used as a grouping: The SELECT list of a query consists of a list of expressions, where each expression can have an attached AS phrase specifying a “column name” for the expression, which may be a simple column reference to a database table, or an actual expression composed with arithmetic operators, character operators, date operators, or scalar functions. The “column name” is essentially an alternate name. Group By <expression>: The Group By clause can contain any type of unnamed, scalar expression which, taken as a whole, produces a value which is used as a grouping key. An individual column, when it is merely part of a larger expression in the Group By list, cannot by itself be referenced in the SELECT list. Only the entire expression, which is the grouping key, can be used in the statement’s SELECT list. An expression in a Group By clause cannot contain an aggregate function, such as SUM.
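Two sketches of the two forms against the sports2000 schema (assuming a YEAR scalar function and the pub.order table, with “order” quoted because it is a reserved word):

```sql
-- 1. Grouping by an alias that names an expression
SELECT YEAR(orderdate) AS ord_year, COUNT(*) AS orders
FROM pub."order"
GROUP BY ord_year;

-- 2. Grouping by the expression itself
SELECT YEAR(orderdate), COUNT(*)
FROM pub."order"
GROUP BY YEAR(orderdate);
```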

    51. DB-10: What’s New in the OpenEdge® RDBMS Limiting Rows in a Result Set Returns a maximum of n rows to client Is interpreted last After GROUP BY, DISTINCT, aggregate phrases Performance boost on large tables Where n is small, relative to rows in table The “SELECT TOP n” feature provides a way to limit rows returned by an OpenEdge SQL query at the statement level. In cases where only the first n rows are to be returned to the client, and the result set is to be sorted, optimizations to increase the performance of the sort have been made. Example: SELECT TOP 5 name, balance, custnum FROM Pub.Customer ORDER BY balance DESC; This query returns the names of the 5 customers with the highest balance. When the TOP clause is specified, the OpenEdge SQL server returns a maximum of n rows to the client, where n is from 1 to 2,147,483,647 (2^31 – 1). The TOP n clause is only allowed in a “top-level” SELECT statement, i.e. the following are all UNSUPPORTED: CREATE TABLE (as in CREATE TABLE (…) AS SELECT TOP 5…) CREATE VIEW (as in CREATE VIEW (…) AS SELECT TOP 5…) UPDATE (as in UPDATE CUSTOMER SET (custnum) = (SELECT TOP 5…)) INSERT (as in INSERT INTO CUSTOMER (custnum) SELECT TOP 5 …) In addition, the TOP n clause is not supported in the following contexts within SELECT statements: In subqueries (as in SELECT custnum FROM Customer WHERE custnum IN (SELECT TOP 5 …)) In derived tables (as in SELECT custnum FROM Customer, (SELECT TOP 5 …) dtbl …) In queries used with set operators (UNION, INTERSECT, MINUS) (as in SELECT TOP 5 … UNION SELECT …) There are some cases where the server also performs aggregation on the result set returned by a query, as with: Aggregate functions (for instance, SUM or MAX) A GROUP BY clause (with or without a HAVING clause) The DISTINCT keyword In these cases, the TOP n clause can be considered to be applied last, after any aggregation has occurred. When there is no aggregation specified in the SELECT statement, and the result set is also sorted, SQL will optimize sorting to increase the performance of the query. The TOP keyword is a “reserved word” in OpenEdge 10.1B; in order to create or use a database object named “top”, it must be double-quoted. SELECT TOP and performance: If SELECT TOP is issued against a table that has a large number of rows, there is a noticeable performance boost executing the SELECT TOP statement versus the same statement with no TOP clause. Some queries that return an entire sorted result set may take a prohibitively long time, but using SELECT TOP can make them more manageable. Queries that include a TOP n clause should perform the same as, or better than, equivalent queries without such a clause, since fewer rows are returned, and due to optimized disk and memory usage when sorting. In cases where n is small compared to the number of rows being sorted, the potential for performance benefit is significant.

    52. DB-10: What’s New in the OpenEdge® RDBMS Agenda Large Database Support Migration & Compatibility User I/O by Table / Index Online Maintenance OpenEdge SQL Enhancements Wide table support Multi database query Query extensions Availability features Let’s talk about the large database features introduced in OpenEdge 10.1B…

    53. DB-10: What’s New in the OpenEdge® RDBMS Availability Features ALTER SEQUENCE REVOKE DROP VIEW CREATE SYNONYM DROP SYNONYM OpenEdge Release 10.1B continues the “high availability” theme started in earlier releases of OpenEdge by making possible even more online schema changes. In SQL terms, this means more Data Definition Language statements can be executed online, without the need for exclusive schema locks on the database (which prevent other users from accessing the database). This allows customers to keep their databases running even during schema changes, which is especially important in 24x7 environments. In OpenEdge releases 10.0A through 10.1A, statements such as ALTER TABLE, ADD COLUMN, GRANT, and most of the CREATE statements were made able to run online. The following additional statements have been added in 10.1B: CREATE SYNONYM, DROP SYNONYM, DROP VIEW, and REVOKE. Instead of requiring an exclusive schema lock, these statements acquire a write serial lock, which, while it prevents users from reading the schema, does not prevent users from reading the schema cache. These commands still update the database schema timestamp (even though they are SQL commands), indicating that a change to the schema has occurred. Since this invalidates the schema cache, when a new operation starts the schema caches (statement, statistics, stored procedure, and trigger caches) are re-read and re-cached. CREATE SYNONYM - A newly created synonym cannot be seen by other existing users, since synonyms are not stored in the schema cache. 
DROP SYNONYM - A synonym or view that is dropped cannot be accessed by other users, since the schema itself is always read to determine what a synonym is an alias for, and to find the query used to construct a view. REVOKE - Any active transaction (during a REVOKE) will have stale timestamp data in the caches after the revoke command is committed. Since there is no way to tell whether a REVOKE has been performed, the caches are flushed any time a database schema change is made, including a CREATE TABLE command. However, once a user executes a new SQL command, the server’s timestamp is compared to the database schema timestamp to see if the database schema has been updated. If the schema has changed, all the caches are flushed before the new SQL command is executed.
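As a sketch of what these online operations look like, the statements below could all be issued against a running 10.1B database without an exclusive schema lock (the table, synonym, view, and user names here are hypothetical examples, not from the presentation):

```sql
-- Create an alias for a table. Other connected users will not see it
-- immediately, since synonyms are not held in the schema cache.
CREATE SYNONYM cust FOR PUB.customer;

-- Drop the synonym online. Because the schema itself is read to resolve
-- a synonym on each use, other users lose access right away.
DROP SYNONYM cust;

-- Drop a view online; the defining query is likewise resolved from the
-- schema, not from per-user caches.
DROP VIEW active_customers;

-- Revoke a privilege online. This bumps the database schema timestamp,
-- so servers flush and rebuild their caches before the next statement.
REVOKE SELECT ON PUB.customer FROM joe;
```

Each statement takes a write serial lock rather than an exclusive schema lock, so other users can keep working from their cached schema while the change is applied.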

    54. DB-10: What’s New in the OpenEdge® RDBMS In Summary Tremendous amount of stuff in OpenEdge 10.1B The OpenEdge Database can support all your data Availability and scalability improvements Many new SQL features

    55. DB-10: What’s New in the OpenEdge® RDBMS Questions?

    56. DB-10: What’s New in the OpenEdge® RDBMS For More Information, go to… PSDN Progress eLearning Community Database Administration Advanced Database Administration Using OpenEdge SQL Documentation

    57. DB-10: What’s New in the OpenEdge® RDBMS Relevant Exchange Sessions DB-12: Moving to OpenEdge DB-13: Progress VLDB DB-14: OpenEdge Database Run-time Security Revealed DB-15: Inside the Recovery Subsystem DB-21: RDBMS Roadmap and Info Exchange DB-22: Zero to 30,154 in Twenty Days

    58. DB-10: What’s New in the OpenEdge® RDBMS
