

  1. Supplemental material to support: Domain Agnostic Online Semantic Segmentation (title is provisional) • This file contains additional materials to support our paper.

  2. We made two claims in our paper that we did not have space to show. Here we repair that omission.
  • Claim 1: We can reproduce the results of Autoplait, on their own data.
  • Claim 2: We can do multidimensional segmentation. See also the next slide.
  Here is the code:
     subplot(211); hold on;
     VarName1 = VarName1-min(VarName1); % zero-one normalize
     VarName1 = VarName1/max(VarName1); % so it matches the
     VarName2 = VarName2-min(VarName2); % Autoplait plots
     VarName2 = VarName2/max(VarName2);
     VarName3 = VarName3-min(VarName3);
     VarName3 = VarName3/max(VarName3);
     VarName4 = VarName4-min(VarName4);
     VarName4 = VarName4/max(VarName4);
     plot(VarName1(1:1:end),'b'); % use same colors as
     plot(VarName2(1:1:end),'g'); % Autoplait.
     plot(VarName3(1:1:end),'r');
     plot(VarName4(1:1:end),'c');
     [c1] = RunSegmentation(VarName1(1:1:end), 90); % CAC for each dimension,
     [c2] = RunSegmentation(VarName2(1:1:end), 90); % subsequence length 90
     [c3] = RunSegmentation(VarName3(1:1:end), 90);
     [c4] = RunSegmentation(VarName4(1:1:end), 90);
     subplot(212);
     Multi_D_CAC = (c1+c2+c3+c4)/4;  % combine the 4 dimensions
     Multi_D_CAC(1:5*90) = 1;        % mask the unreliable start and
     Multi_D_CAC(end-5*90:end) = 1;  % end of the CAC
     plot(Multi_D_CAC);
  Averaging works because each dimension's CAC dips at a regime boundary, so dips shared across dimensions reinforce each other.
  [Figure: the four zero-one normalized dimensions of the data from the Autoplait paper (top) and the resulting multidimensional CAC (bottom); x-axes 0 to 8000]

  3. Dear Reviewer:
  • We made two claims in our paper that we did not have space to show. Here we repair that omission.
  • Claim 1: We can reproduce the results of Autoplait.
  • Claim 2: We can do multidimensional segmentation.
  Reading the segmentation off the multidimensional CAC, from its lowest values:
  • The minimum value (red) divides run|jump
  • The second smallest value (green) divides jump|wave
  • The third smallest value (blue) divides walk|kick
  • The fourth smallest value (orange) divides kick|jump
  • The fifth smallest value (maroon) divides jumpL|jumpR
  • The sixth smallest value (pink) divides wave|walk
  • The seventh smallest value (lightgreen) divides jump|walk
  • The eighth smallest value (cyan) divides walk|run
  [Figure: the four dimensions (top) and the multidimensional CAC (bottom), with the eight lowest values marked in the colors listed above; x-axes 0 to 8000]
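  Note: the slides do not show how these dividing points are extracted from the CAC. Below is a minimal sketch of one way to do it, not the paper's code: take the k lowest CAC values, masking an exclusion zone around each pick so the same dip is not reported twice. Multi_D_CAC and the subsequence length of 90 come from the previous slide; extractBoundaries, k and exclusionZone are names we introduce here.
     function boundaries = extractBoundaries(cac, k, exclusionZone)
         % Return the positions of the k lowest CAC values, masking an
         % exclusion zone around each pick to avoid duplicate reports.
         cac = cac(:)';                % work on a row-vector copy
         boundaries = zeros(1, k);
         for i = 1:k
             [~, idx] = min(cac);      % next-lowest remaining CAC value
             boundaries(i) = idx;
             lo = max(1, idx - exclusionZone);
             hi = min(length(cac), idx + exclusionZone);
             cac(lo:hi) = inf;         % mask out this neighborhood
         end
     end
     % For example: boundaries = extractBoundaries(Multi_D_CAC, 8, 5*90);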

  4. The performance of four rivals, and of choosing the best subsequence length, compared to FLUSS. This is Table 5 of the paper with a Best Subsequence row added. For the paper, we guessed the best setting of the single parameter before doing any experiments; we picked about one period (one heartbeat, etc.). The added row shows how well we could have done had we picked the best parameter.

  5. The results of AutoPlait. The rows without any scores mean that AutoPlait could not find any segments. Key: Red means AutoPlait loses to FLUSS, Green means AutoPlait draws with FLUSS, Black means FLUSS loses to AutoPlait. See the paper for the definition of “draw”.

  6. The results of HOG-1D. Key: Red means HOG-1D loses to FLUSS, Green means HOG-1D draws with FLUSS, Black means FLUSS loses to HOG-1D. See the paper for the definition of “draw”.

  7. Random. Key: Red means Random loses to FLUSS, Green means Random draws with FLUSS, Black means FLUSS loses to Random.
  The code used:
     randScore = 0;
     randFunctionRepeat = 100;
     for i = 1:1:randFunctionRepeat
         randPredictedResult = floor(dataLength*rand(1,numSegms)); % random boundary positions
         randScore = calcScore(groundTruthSegPos, randPredictedResult, dataLength) + randScore;
     end
     randScore = randScore / randFunctionRepeat; % average over 100 random trials
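  The calcScore function above is not shown in these slides. Below is a minimal sketch of one plausible implementation, assuming the paper's scoring function: for each ground-truth boundary, take the distance to the nearest predicted boundary, then sum and normalize by the series length (smaller is better). Treat this as an illustration, not the authors' code.
     function score = calcScore(gtPos, predPos, dataLength)
         % Sum, over ground-truth boundaries, of the distance to the
         % nearest predicted boundary, normalized by the series length.
         total = 0;
         for i = 1:length(gtPos)
             total = total + min(abs(predPos - gtPos(i)));
         end
         score = total / dataLength;   % smaller is better
     end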

  8. BestSubsequence. Key: Red means BestSubsequence loses to FLUSS, Green means BestSubsequence draws with FLUSS, Black means FLUSS loses to BestSubsequence.

  9. On setting the only parameter, the subsequence length. We set the parameter by ‘eyeball’: looking at the most periodic part of the data, we chose about one period. As the results in the paper show, this worked very well. However, in a post-hoc test of all lengths, it seems that shorter lengths work a bit better in most cases. Our most important finding, though, is that the parameter does not matter much. Consider the DutchFactory_24_2184 dataset. Our parameter was 24, and that worked very well. However, the plot below shows that if we had doubled that parameter, or halved it, we would have gotten essentially identical results. In the following slides we show this curve for all datasets, with brief annotations where warranted.
  [Figure: our scoring function (smaller is better, y-axis 0 to 0.02) versus subsequence length (x-axis 0 to 140) for DutchFactory_24_2184]
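  A minimal sketch of how such a post-hoc sweep can be run, reusing RunSegmentation (slide 2), extractBoundaries (the sketch after slide 3) and calcScore (the sketch after slide 7). The variables data, gtPos and numSegms are hypothetical: the time series, its ground-truth boundary positions, and the number of boundaries to report; the grid of lengths is ours.
     lengths = 4:4:140;                 % candidate subsequence lengths
     scores = zeros(size(lengths));
     for i = 1:length(lengths)
         subLen = lengths(i);
         cac = RunSegmentation(data, subLen);
         pred = extractBoundaries(cac, numSegms, 5*subLen);
         scores(i) = calcScore(gtPos, pred, length(data));
     end
     plot(lengths, scores);             % reproduces the curve on this slide
     xlabel('Subsequence length'); ylabel('Score (smaller is better)');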

  10. For all these datasets, we would have done even better with a shorter subsequence length. RoboticDogActivityY_64_4000 RoboticDogActivityY_60_10699 RoboticDogActivityX_60_8699

  11. For all these datasets, we would have done even better with a shorter subsequence length. PulsusParadoxusSP02_30_10000 PulsusParadoxusECG2_30_10000 PulsusParadoxusECG1_30_10000

  12. For power demand we would have done even better with a shorter subsequence length. Powerdemand_12_4500. For Cane, we could have made the subsequence length a lot longer with no change, but a shorter length would have been unstable. It is important to note that even the high regions here are still superhuman. Cane_100_2345

  13. PigInternalBleedingDatasetCVP_100_7501 PigInternalBleedingDatasetAirwayPressure_400_7501 PigInternalBleedingDatasetArtPressureFluidFilled_100_7501

  14. GrandMalSeizures_10_8200 GrandMalSeizures2_10_4550

  15. NogunGun_150_3000 DutchFactory_24_2184

  16. EEGRat_10_1000 EEGRat2_10_1000 Fetal2013_70_6000_12000

  17. For all these datasets, we would have done even better with a shorter subsequence length. GreatBarbet1_50_1900_3700 GreatBarbet2_50_1900_3700

  18. InsectEPG1_50_3802 InsectEPG2_50_1800 InsectEPG3_50_1710 InsectEPG4_50_3160

  19. SuddenCardiacDeath1_25_6200_7600 For the datasets below, we would have done even better with a shorter subsequence length. SuddenCardiacDeath2_25_3250 SuddenCardiacDeath3_25_3250

  20. For all these datasets, we would have done even better with a shorter subsequence length. TiltECG_200_25000 TiltABP_210_25000 SimpleSynthetic_125_3000_5000

  21. For all these datasets, we would have done even better with a shorter subsequence length. WalkJogRun1_80_3800_6800 WalkJogRun2_80_3800_6800

  22. Note to Reviewers
  • In the paper we claimed: “Egress: When a point is ejected, we must update all subsequences (if any) in the sliding window that currently point to that departing subsequence. This is a problem because, while pathologically unlikely, almost all subsequences could point to the disappearing subsequence. This would force us to do O(n^2) work.”
  • While this situation is pathologically unlikely, it can happen in the real world. Below we show such an example.
  • If the green bracket is our sliding window, then almost all subsequences will point to the last subsequence.
  • Why is this true? One need only know two facts (for z-normalized data):
  • Any two random noisy vectors are about the same distance apart, and that distance is close to the diameter of your data space (in other words, maximally far apart).
  • Any random noisy vector is relatively close to any much smoother vector.
  • If you understand that, then it is clear that all the noisy vectors will have their nearest neighbor in the last subsequence, which is much smoother.
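  This claim is easy to check empirically. Below is a minimal, self-contained sketch (our illustration, not the paper's data or code): it builds a noisy series that ends in an idle, flat section, computes z-normalized nearest neighbors by brute force, and counts how many noisy subsequences point into that smooth tail. We assume the common matrix profile convention that a (near) constant subsequence z-normalizes to the zero vector.
     rng(0);
     m = 100;                                  % subsequence length
     ts = [randn(1,1000), zeros(1,300)];       % noise, then an idle flat tail
     n = length(ts) - m + 1;
     subs = zeros(n, m);
     for i = 1:n
         s = ts(i:i+m-1);
         if std(s) < 1e-12
             subs(i,:) = 0;                    % constant subsequences map to zeros
         else
             subs(i,:) = (s - mean(s)) / std(s); % z-normalize
         end
     end
     flatStart = 1001;                         % first index of the flat tail
     noisyIdx = 1:(flatStart - m);             % fully noisy subsequences
     hits = 0;
     for i = noisyIdx
         d = sum((subs - subs(i,:)).^2, 2);    % squared distances to all others
         d(max(1,i-m):min(n,i+m)) = inf;       % exclude trivial (overlapping) matches
         [~, nn] = min(d);
         hits = hits + (nn >= flatStart);      % nearest neighbor in the flat tail?
     end
     fprintf('%d of %d noisy subsequences point into the smooth tail\n', hits, numel(noisyIdx));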

  23. Below are high-resolution versions of plots from the paper.

  24. [Figure: mean value of the CAC on negative training data samples versus positive ones, with the decision threshold that separates them marked; x-axis 0 to 1]
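  A minimal sketch of the idea the figure above summarizes, with hypothetical variable names: negCACs and posCACs are cell arrays of CAC vectors computed on labeled negative/positive training snippets. Since a low CAC value indicates a likely boundary, positive samples should have the lower means.
     negMeans = cellfun(@mean, negCACs);        % no boundary: CAC stays high
     posMeans = cellfun(@mean, posCACs);        % boundary present: CAC dips low
     threshold = (mean(negMeans) + mean(posMeans)) / 2;  % midpoint threshold
     hasBoundary = @(cac) mean(cac) < threshold;         % classify a new snippet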

  25. [Figures: high-resolution versions of plots from the paper: a time series with matrix profile index arcs, e.g. Mpindex(1270) = 1892, Mpindex(1892) = 1270, Mpindex(3450) = 4039, Mpindex(4039) = 3844; the idealized arc curve (IAC), the number of arcs that cross a given index if the links are assigned randomly, in theoretical and empirical variants (bi-directional IAC, one-directional IAC, sampled IAC1D, empirical IAC1D); the corrected arc curve with the indices 1270, 1892, 3450 and 4039 marked, over a 0 to 45 minute axis; an activity sequence labeled WS NM ND RU CY RU NM SO RJ; and ground truth segmentations (GT 1 to GT 4) compared to experimental ones (E 1 to E 4); x-axes run 0 to 5,000]
  The code used to plot the TiltABP data, clean and with 20 dB of noise added:
     a = load('tiltABP_210_2500.txt');
     na = load('tiltABP20dBNoise_210_2500.txt');
     subplot(2,1,1); plot(a);
     subplot(2,1,2); plot(na);
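  Since this slide references the theoretical IAC, here is a minimal sketch, consistent with the paper's definition, of turning an arc count into the corrected arc curve (CAC): with n arcs assigned at random, the expected number crossing index i is 2*i*(n-i)/n, a parabola of height n/2. The vector ac, holding the empirical arc counts, is hypothetical.
     ac = ac(:);                       % work on a column vector
     n = length(ac);
     i = (1:n)';
     iac = 2 .* i .* (n - i) ./ n;     % theoretical IAC (parabola)
     cac = ones(n, 1);                 % default 1 = no evidence of a boundary
     ok = iac > 0;                     % avoid dividing by zero at the ends
     cac(ok) = min(ac(ok) ./ iac(ok), 1);  % normalize arc counts, clip to 1
     plot(cac);                        % low values suggest segment boundaries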

  26. PigInternalBleedingDatasetArtPressureFluidFilled PigInternalBleedingDatasetCVP PulsusParadoxusECG1 PulsusParadoxusSP02 GreatBarbet1 Knees (alternating) bending forward Left Upper Arm Accelerometer X SuddenCardiacDeath2 Knees bending (crouching) RoboticDogActivityX TiltECG EEGRat2
