
ESRI User Conference 2004


Presentation Transcript


  1. ESRI User Conference 2004 ArcSDE

  2. Some Nuggets • Setup • Performance • Distribution • Geodatabase History

  3. Setup • Initial Database • Create as large as possible to avoid fragmenting indexes • Set the growth rate to a large value for the same reason • Fillfactor • Set very high for read-only environments • 95-100% • Set midrange for editing environments • 75% default • Users • 9.0 supports Windows groups, but they cannot own data • Windows groups are more efficient than individual logins • ArcGIS checks the sysprotects table for permissions
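
As a rough illustration of the sizing and fillfactor advice above, a minimal T-SQL sketch (the database name, file paths, sizes, and the Parcels index are placeholders, not values from the presentation):

    -- Pre-allocate a large database with a large growth increment so the
    -- files, and the indexes inside them, fragment less as data is loaded.
    CREATE DATABASE gisdata
    ON PRIMARY
        (NAME = gisdata_data, FILENAME = 'D:\Data\gisdata.mdf',
         SIZE = 20480MB, FILEGROWTH = 1024MB)
    LOG ON
        (NAME = gisdata_log, FILENAME = 'E:\Logs\gisdata.ldf',
         SIZE = 2048MB, FILEGROWTH = 512MB);
    GO

    -- High fillfactor (95-100%) for read-only data; stay nearer the 75%
    -- default where rows will be edited and pages need room to grow.
    CREATE INDEX ix_parcels_apn ON dbo.Parcels (APN)
        WITH FILLFACTOR = 95;
    GO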

  4. Setup (cont’d) • Permissions • Data Loader • Create Table • Create View • Create Procedure • Data Editor • Select • Insert • Update • Delete • Execute • Use ArcCatalog to assign permissions • Associated tables may be missed if using DBMS to assign permissions
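
The equivalent DBMS-level grants would look roughly like this (gis_loader, gis_editor, dbo.Parcels, and dbo.usp_calc_area are placeholder names); note the slide's caveat that granting directly in the DBMS can miss associated tables, which is why ArcCatalog is recommended:

    -- Data Loader: statement permissions
    GRANT CREATE TABLE, CREATE VIEW, CREATE PROCEDURE TO gis_loader;
    GO

    -- Data Editor: object permissions on each business table,
    -- plus Execute on any stored procedures the editors call
    GRANT SELECT, INSERT, UPDATE, DELETE ON dbo.Parcels TO gis_editor;
    GRANT EXECUTE ON dbo.usp_calc_area TO gis_editor;
    GO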

  5. Setup (cont’d) • Logfiles • User logfiles • An individual logfile table is created once for each user • Session logfiles (recommended for SQL Server) • An individual logfile table is created and destroyed for each login session • Maxlog = 0 • Allowsessionlog = true • Logfilepoolsize = 0 • Pooled logfiles • A series of logfile tables are created once • Each login uses one of the pooled logfiles • Need to provide enough tables for all logins
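
These settings live in the ArcSDE SERVER_CONFIG repository table; one hedged way to see what a server is currently using is a query along these lines (the sde owner, the SDE_server_config table, and the exact parameter names are typical of SQL Server geodatabases but should be verified against your release):

    -- List the logfile-related server configuration parameters.
    SELECT prop_name, num_prop_value, char_prop_value
    FROM   sde.SDE_server_config
    WHERE  prop_name LIKE '%LOG%';
    GO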

  6. Setup (cont’d) • Upgrading • Backup • Uninstall 8.3 • Install 9.0 • Run checkschema.exe – reports a maximum of 25 errors at a time • Run the custom post-installation – don’t create a new database or SDE user if upgrading • Run the migration sample utility – converts each spatial database to a series of single standalone spatial databases • Create a new service for each single spatial database • Storing rasters in a separate database is recommended • Only store them separately if they are completely separate (no cross-database queries supported) • Follow up the migration with sp_change_users_login to synchronize IDs (see the example below)
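
A typical sp_change_users_login sequence for that last step, run in each migrated spatial database (the sde user/login pair is only an example):

    -- List database users whose SIDs no longer match a server login
    EXEC sp_change_users_login 'Report';
    GO

    -- Re-link an orphaned database user to its server login
    EXEC sp_change_users_login 'Update_One', 'sde', 'sde';
    GO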

  7. Performance • Memory • Initially set SQL Server memory to 50% of RAM and move up from there (see the example below) • Setting SQL Server memory too high will starve the gsrvr.exe processes • 1 giomgr.exe is spawned at runtime • 1 gsrvr.exe is spawned for each connection • Make sure dbinit.sde is empty unless debugging • It stores detailed initialization information • Set TCPKeepAlive to ON to close orphaned gsrvr.exe processes
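
For the 50% starting point on, say, a 4 GB server, the cap might be set like this (2048 MB is only an example value to be raised gradually while watching the gsrvr.exe processes):

    -- Cap the SQL Server buffer pool so the ArcSDE gsrvr.exe
    -- processes keep enough memory for themselves.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 2048;
    RECONFIGURE;
    GO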

  8. Performance (cont’d) • Reference Classes • Only use during editing • All reference classes must be loaded when used, which consumes memory • An excessive number of privileges will slow things down • Servers start queuing SQL statements when they hit 65% CPU usage • Index Tuning Wizard • Use it to check frequently used queries for performance

  9. Performance (cont’d) • Loading • Set the recovery model to simple (see the example below) • Avoids tracking everything when you don’t need to roll back • Keep transaction logging to a minimum when loading • Bottlenecks • Server processor usage should never exceed 80-90% capacity • Memory buffer cache hit rate should always stay above 90% • Average disk queue length should not exceed 2
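
The recovery-model change for a bulk load is a one-liner (gisdata and the backup path are placeholders); switching back to full recovery and taking a backup afterwards restores normal point-in-time protection:

    -- Minimal log tracking while loading; no point-in-time recovery
    -- is possible during this window, so plan the load accordingly.
    ALTER DATABASE gisdata SET RECOVERY SIMPLE;
    GO
    -- ... bulk load the data ...
    ALTER DATABASE gisdata SET RECOVERY FULL;
    BACKUP DATABASE gisdata TO DISK = 'E:\Backup\gisdata.bak';
    GO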

  10. Performance (cont’d) • Logfiles (check these) • Giomgr.log – client connection information • Sde.log – detailed error logs • Sdedc.log – direct connect information • Upgrade.log – patch information • Compress the database at least once a week • The single most important thing that will affect the performance of your geodatabase is the number of states in a version’s lineage
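
One rough way to watch the lineage growth that the last bullet warns about is to count rows in the repository's state tables before and after a compress (the sde owner and the SDE_states / SDE_state_lineages names are typical of SQL Server geodatabases; verify them for your configuration):

    -- A regularly compressed geodatabase trends toward very few states;
    -- a steadily growing count means compress (and reconcile/post) more often.
    SELECT COUNT(*) AS state_count  FROM sde.SDE_states;
    SELECT COUNT(*) AS lineage_rows FROM sde.SDE_state_lineages;
    GO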

  11. Distribution • Geodatabase Replication • Single Generation (9.0) (a.k.a. Disconnected Editing) • Check out from one versioned GDB to another • Check changes back into the original • Multi Generation (9.1) • 1-way replication • Used to replicate read-only data (mirror) • Only pushes changes to the child database • 2-way replication • Changes can be sent both ways • Allows two independent sites to edit data and exchange edits • Uses a Global ID (GUID) on each feature • Works on specific versions • Synchronization Process (Parent wins, Child wins, Manually Reconcile)
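
The per-feature Global ID is simply a GUID column on the feature class; conceptually it amounts to something like the statement below (ArcGIS adds and manages the real column itself, and dbo.Parcels is a placeholder):

    -- Conceptual only: replication-enabled classes carry a GUID per row
    -- so edits can be matched between parent and child geodatabases.
    ALTER TABLE dbo.Parcels
        ADD GlobalID uniqueidentifier NOT NULL DEFAULT NEWID();
    GO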

  12. Distribution (cont’d) • Replica Agent (9.1) • Allows setting up a scheduled task to determine when changes move between the databases • ESRI is developing a web-service-based way of automating change movement • Enables disconnected synchronization

  13. Geodatabase History • Store changes over time using versions (9.0) • History of the DEFAULT version of the database • Developer sample for inspecting the history of an individual feature (“a feature’s lineage”) • (-) Historical versions cannot be edited • (-) Significant performance hit if fine-grained history is required (i.e., OK for snapshot archiving, not OK for transaction-level archiving) • (-) If the schema changes, history is lost.

  14. Geodatabase History (cont’d) • A Transaction-Level History Solution (9.1) • Each feature class (FC) has an associated history table (e.g., Parcel_history), identical to the business table except that it includes “in date” and “out date” columns (see the sketch below) • (+) History is not lost with a schema change • (?) Performance • (-) Does not capture changes in related tables
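
A minimal sketch of what such a history table might look like for a Parcel feature class (the column names are illustrative, not the actual 9.1 schema, and geometry columns are omitted because their storage depends on the configured keyword):

    -- Same columns as the business table, plus validity dates.
    CREATE TABLE dbo.Parcel_history (
        OBJECTID  int          NOT NULL,
        APN       varchar(20)  NULL,      -- ...plus the remaining business columns
        in_date   datetime     NOT NULL,  -- when this row version became current
        out_date  datetime     NULL       -- when superseded; NULL while still current
    );
    GO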

  15. Geodatabase History (cont’d) • Version Log (9.1) • Compares two versions and shows the differences • Useful for situations where the workflow dictates that another person review the edits before committing them to the database
