
Sorting with SAS: Long, Very Long and Large, Very Large Data


Presentation Transcript


  1. Sorting with SAS: Long, Very Long and Large, Very Large Data. Aldi Kraja, Division of Statistical Genomics. SAS seminar series, June 02, 2008

  2. Sort and merge example
     data a;
       input id m1 $ m2 $ m3 $ DNAreserve;
       datalines;
     1 1/1 1/2 1/1 12
     2 1/2 1/1 2/2 14
     3 2/2 1/1 1/1 15
     4 1/2 1/2 1/2 16
     5 1/1 2/2 1/1 15
     ;
     run;

     proc sort data=a;
       by id;
     run;

  3. Sort and merge example (cont.)
     data b;
       input id age sex SBP DBP;
       datalines;
     1 23 1 128 95
     2 25 2 115 84
     3 30 1 120 85
     4 27 1 130 90
     5 35 2 122 82
     ;
     run;

     proc sort data=b;
       by id;
     run;

  4. Sort and merge example (cont.)
     data ab;
       merge a (in=in1) b (in=in2);
       by id;
       if in1 and in2;
     run;

     proc print data=ab;
       title "A and B merged";
     run;

     A and B merged                          Monday, June 2, 2008

     Obs  id  m1   m2   m3   DNAreserve  age  sex  SBP  DBP
       1   1  1/1  1/2  1/1      12       23   1   128   95
       2   2  1/2  1/1  2/2      14       25   2   115   84
       3   3  2/2  1/1  1/1      15       30   1   120   85
       4   4  1/2  1/2  1/2      16       27   1   130   90
       5   5  1/1  2/2  1/1      15       35   2   122   82

  5. Example 2: Join tables with SQL
     proc sql;
       create table sqlab as
       select *
       from a, b
       where a.id=b.id;
     quit;

     proc print data=sqlab;
       title "SQL joined tables";
     run;

  6. Time
     • Merge path:
       sorting a:  real time 0.01 seconds, cpu time 0.01 seconds
       sorting b:  real time 0.01 seconds, cpu time 0.01 seconds
       merge:      real time 0.01 seconds, cpu time 0.01 seconds
     • SQL path:
       NOTE: PROCEDURE SQL used (Total process time):
             real time 0.01 seconds
             cpu time  0.01 seconds
     • Test with large and long data to see whether PROC SQL offers any advantage; a sketch of such a comparison follows below.
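     A minimal sketch of such a test, not part of the original slides: it generates two larger data sets (the one-million-row size is arbitrary) and runs both the sort+merge path and the SQL join with FULLSTIMER on, so the log shows detailed resource usage for comparison.

     /* Hedged sketch: compare sort+merge against an SQL join on larger data. */
     options fullstimer;

     data big_a;
       do id = 1 to 1000000;
         DNAreserve = ceil(20 * ranuni(1));
         output;
       end;
     run;

     data big_b;
       do id = 1 to 1000000;
         age = 20 + ceil(40 * ranuni(2));
         output;
       end;
     run;

     /* Sort + merge path */
     proc sort data=big_a; by id; run;
     proc sort data=big_b; by id; run;
     data big_ab;
       merge big_a (in=in1) big_b (in=in2);
       by id;
       if in1 and in2;
     run;

     /* SQL join path (no explicit sort step) */
     proc sql;
       create table big_sqlab as
       select a.*, b.age
       from big_a as a, big_b as b
       where a.id = b.id;
     quit;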

  7. Example 3: Sort flags (in the descriptor portion of a data set)
     The CONTENTS Procedure

     Data Set Name   WORK.A    Observations   5
     Member Type     DATA      Variables      5

     Sort Information
       Sortedby     id
       Validated    YES

  8. Example 3: Sort flags (cont.)
     data one (sortedby=id);
       input id;
       datalines;
     1
     4
     3
     5
     2
     ;
     run;

     proc contents data=one;
       title "data one with option sortedby=id";
     run;

  9. Example 3: Sort flags (cont.)
     proc sort data=one;
       by id;
     run;

     data two;
       set one;
       by id;
     run;

     proc sql;
       create index id on one(id);
     quit;

     proc datasets nolist;
       modify one;
       index create id;
     run;

  10. Sorting large data on many keys
     Problems:
     • Disk space or temporary space may be inadequate
     • The time needed may be quite long
     • The software or the operating system may not work correctly while sorting large data
     • The WORK directory is normally located under /tmp on a server. If the data to be sorted are 3 GB and /tmp is limited to 1 GB, can SAS do the SORT? (A sketch of the relevant system options follows below.)
     • What if 8 jobs run in parallel on the same 8-processor server, each trying to SORT different very large and long data sets for different purposes?
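     A minimal sketch, not from the slides, of system options that help when /tmp is too small: point the WORK library and the sort utility files at a larger volume and bound the memory the sort may use. The /bigdisk paths and the sizes are hypothetical.

     /* Hedged sketch: invocation with relocated WORK and utility space. */
     sas -work /bigdisk/saswork -utilloc /bigdisk/sasutil -sortsize 2G -memsize 4G pgm.sas &

     /* SORTSIZE can also be adjusted inside a program (value illustrative): */
     options sortsize=2G;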

  11. Example 4: tagsort option
     data a;
       input pedid id m1 $ m2 $ m3 $ DNAreserve;
       datalines;
     1 1 1/1 1/2 1/1 12
     1 2 1/2 1/1 2/2 14
     1 3 2/2 1/1 1/1 15
     2 6 1/2 1/2 1/2 16
     2 5 1/1 2/2 1/1 15
     2 4 2/2 2/2 1/2 12
     ;
     run;

     proc sort tagsort data=a nodupkey out=sorted_a;
       by pedid id;
     run;

  12. Tagsort
     • Introduced in version 6.07
     • Can produce important improvements in clock time, but increases CPU time
     • Internally, the sort stores only the sort keys and the observation numbers in the temporary files
     • These sort keys and observation numbers are the "tags" of TAGSORT
     • At the end of the sort, the tags are used to retrieve the entire records from the original data set, now in sorted order
     • Potential gains when the data set is very large

  13. Example 5: Genestar project problem
     • 8 large text files
     • Read into SAS as 8 SAS data sets
     • The data are very large: S1-S400 by 1,044,977; S1-S400 by 1,044,977; S1-S687 by 1,044,977; S1-S149 by 1,044,977 (a sketch of reading one such file follows below)
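     A minimal sketch of reading one such marker-by-subject text file; the path, delimiter, and layout (a marker name followed by genotype columns S1-S400) are assumptions, since the slides do not show the raw file format.

     /* Hedged sketch: read a tab-delimited genotype file into a SAS data set. */
     data raw1;
       infile '/path/to/genestar_file1.txt' dlm='09'x dsd truncover firstobs=2;
       length marker $20 S1-S400 $8;
       input marker S1-S400;
     run;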

  14. Genestar project problem
     • A. Split the data so each subject becomes a new data set: d1-d3236
     • B. Split each subject's data into 25 chromosomes: d1c1-d1c25 ... d3236c1-d3236c25
     • Transpose markers, in batches of 200 markers at a time, and place the data together for a chromosome
     • Finally, with PROC APPEND, place together the subjects of the same chromosome (a sketch follows below)

     Started with (long format, one row per subject and marker):
       Subject  marker  geno  genocall
       1        m1      1/1   0.7560
       1        m2      1/2   0.76899
       ...

     Ended with (wide format, one row per subject):
       Subject  m1   m2   ...        Subject  m1      m2       ...
       1        1/1  1/3  ...        1        0.7560  0.76899  ...
       2        1/2  3/3  ...        2        0.9999  0.98999  ...
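     A minimal sketch of the transpose-and-append idea, not the project's actual code: it assumes a per-subject, per-chromosome data set in long form (the names d1c1, chrom1, subject, marker, and geno are hypothetical).

     /* Hedged sketch: long (one row per marker) -> wide (one row per subject), then append. */
     proc transpose data=d1c1 out=d1c1_wide (drop=_name_);
       by subject;
       id marker;        /* marker names become the new variable names */
       var geno;
     run;

     proc append base=chrom1 data=d1c1_wide force;
     run;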

  15. Sort in the Genestar project
     sas -memsize 16G pgm.sas &

     MPRINT(SORTIT):  proc sort data=in1.rawdataf8 nodupkey out=a (keep=barcoden);
     SYMBOLGEN:       Macro variable BYL resolves to barcoden
     MPRINT(SORTIT):  by barcoden;
     MPRINT(SORTIT):  run;

     NOTE: There were 718126154 observations read from the data set IN1.MYDATA.
     ERROR: Insufficient memory.
     NOTE: The SAS System stopped processing this step because of errors.
     NOTE: SAS set option OBS=0 and will continue to check statements.

  16. Sort on large data: is it necessary?
     • I resolved the problem in the following way: a) I removed every other variable from the data set and kept only the BY variable; b) only after a) did the sort with NODUPKEY work (a sketch follows below).
     • In addition, where I had another similar sort, I removed it and used steps that do the same thing without sorting.
     • Only now does the program not run out of memory. This means SAS did not have a limit on the number of observations; the limit was on memory use on our server (it needed more than 16 GB of memory) ???  (32/64-bit issues and -memsize 0)
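     A minimal sketch of fix a): the data set and BY variable are the ones from slide 15, and the point is to apply KEEP= to the input data set so that only the narrow BY column flows through the sort and its utility files.

     /* Hedged sketch: sort only the BY variable, then deduplicate. */
     proc sort data=in1.rawdataf8 (keep=barcoden) nodupkey out=a;
       by barcoden;
     run;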

  17. Example 6: sort with SQL
     proc sql;
       create table sql_a as
       select *
       from a
       order by pedid, id;
     quit;

  18. Example 7: merge with index, without sorting the data
     proc contents data=a;
       title "a is not sorted";
     run;
     proc contents data=b;
       title "b is not sorted";
     run;

     data a_index (index=(id));
       set a;
     run;
     data b_index (index=(id));
       set b;
     run;

     data final;
       set b_index;
       set a_index key=id;
     run;

     proc print data=final;
       title "Merged data based on index= id";
     run;
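     The step above assumes every id in b_index also exists in a_index; when one does not, SET with KEY= sets _ERROR_ and retains the values from the previous match. A minimal sketch, not from the slides, of guarding against that with the automatic variable _IORC_ (variable names follow slides 2 and 18):

     /* Hedged sketch: index lookup with explicit handling of non-matches. */
     data final;
       set b_index;
       set a_index key=id;
       if _iorc_ ne 0 then do;                   /* no matching id found in a_index */
         _error_ = 0;                            /* clear the error flag */
         call missing(m1, m2, m3, DNAreserve);   /* blank out retained values */
         /* or: delete;  to keep only the matches */
       end;
     run;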

  19. Problems with indexing
     • Indexing can be faster than sorting
     • The difference can be significant on large data
     • SAS will create an extra file for the index, and this will be a large file; for example, for a 1.2 GB data set SAS may create an index file of ~340 MB
     • Advantage: a data set indexed on many variables can be used as if it were sorted by one of those variables
     • PROC DATASETS can create indexes, and so can PROC SQL; for example (a PROC SQL version follows below):
       proc datasets library=work;
         modify a;
         index create idlist=(pedid id);
       run;
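     Since the slide notes that SQL can index as well, a minimal sketch of the equivalent composite index created with PROC SQL (not shown in the original slides):

     /* Hedged sketch: composite index on (pedid, id) via PROC SQL. */
     proc sql;
       create index idlist on work.a (pedid, id);
     quit;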

  20. Readings
     • Paul M. Dorfman. QuickSorting an Array. SUGI Paper 96-26.
     • Paul M. Dorfman. Table Look-Up by Direct Addressing: Key Indexing, Bitmapping, Hashing. SUGI Paper 8-26.
     • Robert Patten (Lands' End, Dodgeville, WI). Randomly Selecting Observations. SUGI 29, Paper 075-29. http://www2.sas.com/proceedings/sugi29/075-29.pdf
