Photogrammetry and Remote Sensing


1. Photogrammetry and Remote Sensing

2. A measurement process of collecting spatial information remotely. Our initial focus is on the use of aerial photography.

3. An aerial photo is not a map. A map has one scale. A photo’s scale changes as the distance from the exposure to the ground changes

4. Example Two football fields at two elevations will not have the same length on an aerial photo. Similarly, in a ground-based exposure of two six-foot-tall people at different distances from the camera, the two people will have different image lengths.

5. An aerial photo is not a map A map is a projection of points to a defined elevation (often sea level). A photo is a “view” from a single vantage point (the exposure). A single vantage point does not allow projection of points to a defined elevation.

6. An aerial photo is not a map A map is an orthographic view. You are always looking straight down at a feature. A photo is a perspective view. It is what a part of the earth looks like from a unique position (the exposure).

7. A photograph is not a map. A map cannot be “tilted” as it is in a defined projection. An aerial photo can contain tip, tilt, or crab which can cause distortion in an image.

8. A photograph is not a map. A map contains a finite amount of detail – points, lines, and text. A photo contains an almost infinite amount of detail (to the pixel level)

9. A photograph is not a map. Symbology is used to define what points and lines are on a map. A human being defines what objects are on a photo.

10. A photo is not a map. A map has a defined projection – usually state plane or Universal Transverse Mercator (UTM) A user must define a coordinate system for a photo.

11. A photo is not a map A map does not contain relief displacement as all points are projected to a defined elevation. A photo contains relief displacement.

12. What is evidence of relief displacement? On a map a vertical object (power pole, building corner, etc.) has only one position. On an aerial photo it is often possible to see both the bottom and top of vertical objects, as if different points on a vertical object had different horizontal coordinates.

13. But regarding relief displacement It exists at all points relative to a defined elevation, but it is most readily apparent on vertical objects, unless the object is directly below the center of the photo.

14. Typical metric film based aerial camera 6 in. focal length (longer than a hand-held camera); sometimes 3.5 or 12 in. 9 x 9 in. format (a hand-held camera has a 35 mm format) – note a 1:1 contact print is 9 in. x 9 in. (no enlargement). Large film magazine (storage). Vacuum film flattening – minimal distortion due to film unflatness. All photogrammetric equations assume the negative is flat!

15. Typical metric film based aerial camera Fiducial marks – artificial marks in the sides or corners of the negative frame that appear in every exposure. Fiducial marks are used to measure film shrinkage or expansion. Fiducial marks are used to define the x,y photocoordinate axes.

16. Typical metric film based aerial camera The intersection of lines connecting opposite fiducial marks estimates the location of the principal point. The principal point is where a line from the rear nodal point of the lens intersects the negative plane at a perpendicular angle. The nodal point is where all light rays intersect in the lens.

17. Typical metric film based aerial camera Forward motion image compensation (FMC) Since the airplane is moving quickly when exposures are being made, the negative can move slightly forward during exposure to account for the movement.

18. Focusing an aerial camera Not important, as at 100 ft. distance to an object a hand-held camera is focused at infinity. The lens law is 1/(image distance) + 1/(object distance) = 1/(focal length). Note the image distance is the actual “focusing” distance from negative to lens nodal point (where all light rays come together). At 100 ft. object distance and a 6 in. (0.5 ft.) aerial camera focal length, the lens law gives an image distance of 6.03 in. (an insignificant change).
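
As a quick sanity check, the lens-law arithmetic above can be reproduced in a few lines of Python (a sketch; the variable names are ours, not the slides'):

```python
# Lens law check: 1/image_dist + 1/object_dist = 1/focal_length
# Solve for the image distance at a 100 ft object distance with a 6 in. lens.

focal_length_in = 6.0            # aerial camera focal length (in.)
object_dist_in = 100.0 * 12.0    # 100 ft object distance converted to inches

# Rearranged lens law: 1/i = 1/f - 1/o
image_dist_in = 1.0 / (1.0 / focal_length_in - 1.0 / object_dist_in)

print(f"image distance = {image_dist_in:.2f} in.")  # ~6.03 in., per the slide
```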

19. Image products Contact Print – a 1:1 positive on paper Diapositive – a 1:1 positive on glass (old) or plastic for precise photogrammetric measurement Scanned image – A typical digital image where the negative or diapositive was processed with a precise scanner capable of preserving photogrammetric accuracy

20. Aerial camera calibration Usually performed at a federal government facility Calibrated focal length (6 in. is not a perfect value) Lens distortion – radial and tangential Film unflatness Image resolution Shutter efficiency

21. Aerial camera calibration Fiducial point coordinates Principal point coordinates 2 types of principal points (1) autocollimation – perpendicular from rear nodal point of lens (2) symmetry – the point radial lens distortion is symmetric about

22. Simple calculations If an aerial photo can be assumed to be vertical (no tip or tilt) similar triangles can be used to solve for several important types of information Note these equations are thus only approximations, but have important uses.

23. The scale equation ab/AB = f/H’

24. The scale equation S = ab/AB = f/H' where S = scale, ab = photo distance, AB = ground distance (horizontal), f = focal length (6 inches for most film based aerial cameras), H' = flying height above line AB

25. Note H’ = H – h, where H = flying height above datum (usually sea level) and h = elevation above datum

26. The scale equation can be rewritten many ways. ab/AB = f/H’ can be AB = H’ * ab / f, or ab = AB * f / H’, or H’ = AB * f / ab. Assume f = 6 in. unless explicitly stated otherwise. (6 in. = 152.4 mm.)

27. Let's assume a relatively flat area is on a photo. Measuring the photographic distance of a known ground horizontal distance allows us to solve for flying height: H' = f * AB / ab

28. Once a flying height is determined, other photo distances can be measured and one can solve for a ground distance: AB = H' * ab / f

29. Example A football field (goal line to goal line) measures 0.6 in. on an aerial photo. What is the flying height? H’ = AB * f / ab = 300 ft. * 6 in. / 0.6 in. H’ = 3000 ft.

30. Example At a flying height of 1200 ft., a building edge measures 0.15 in. on an aerial photo. What is its ground length? AB = H’ * ab /f = 1200 ft. * 0.15 in. / 6 in. AB = 30 ft.

32. Modern photogrammetric measurement has a resolution/least count of 0.001 mm (1 micron) and a measuring ability of 10 microns on distinct features. Change the previous problem to this measuring ability: AB = H’ * ab / f = 1200 ft. * 0.01 mm / 152.4 mm = 0.08 ft., which approaches the accuracy achievable with ground-based techniques!

33. Flying height limits Fixed wing aircraft are limited to 1200 ft. in urban areas and 1000 ft. in rural areas. A helicopter can be used to obtain lower flying heights for higher accuracy. A helicopter is more expensive to use than a fixed wing aircraft. Lower flying height means less coverage on one 9 x 9 in. format exposure.

34. Example How many acres (1 acre = 43560 sq. ft.) are on a photo at a flying height of 10000 ft.? AB = H’ * ab /f = 10000 ft. * 9 in. / 6 in. AB = 15000 ft. # acres = 15000 ft. * 15000 ft. / 43560 # acres = 5200 acres
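
The scale-equation examples from slides 29, 30, 32, and 34 can be verified with a short Python sketch (unit handling is inlined for brevity; the names are ours):

```python
# Scale equation rearrangements: S = ab/AB = f/H'
f_in = 6.0  # focal length (in.)

# Slide 29: football field (300 ft) measures 0.6 in. -> flying height
print(300.0 * f_in / 0.6)        # 3000.0 ft

# Slide 30: at H' = 1200 ft, a 0.15 in. photo distance -> ground length
print(1200.0 * 0.15 / f_in)      # 30.0 ft

# Slide 32: ground resolution for a 0.01 mm (10 micron) measuring ability
print(1200.0 * 0.01 / 152.4)     # ~0.08 ft

# Slide 34: ground coverage of a 9 in. format at H' = 10000 ft
side_ft = 10000.0 * 9.0 / f_in   # 15000 ft on a side
print(side_ft**2 / 43560.0)      # ~5165 acres (the slide rounds to 5200)
```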

35. Measuring heights of objects by relief displacement Assuming vertical photography. Since both the top and bottom of a vertical object appear on an aerial photo, the height of the object can be estimated. Note relief displacement is the photographic distance from the bottom to the top of a vertical object. It grows as the object is displaced further from the center of the photo.

36. Relief displacement – side view h = height of vertical object

37. Relief displacement on photo r = radial dist. from principal point to top of object; d = relief displacement = photo dist. from bottom to top of object. Note relief displacement is along a radial line.

38. Relief displacement By similar triangles d/h = r/H’ Or h = d*H’/r h = actual height of object H’ = flying height above bottom of object Note d = 0 at principal point so relief disp. cannot be measured unless offset from it.

39. Relief displacement h = d*H’/r Given flying height and the ability to measure d and r on a vertical photo An object’s height can be determined Note the equation can also be written d= h*r/H’ or r=d*H’/h or H’=h*r/d

40. A building edge’s top is 3.5 in. from the center of a photo and its vertical edge measures 0.05 in. on the photo. If the flying height of the photo is 3000 ft. what is the height of the building? h = dH’/r = 0.05 in. * 3000 ft. / 3.5 in. h= 43 ft. Note if the same dimensions existed except the top of the building was half the distance from the photo center, h would double in magnitude!

41. The flying height is 1200 ft. If one’s measuring ability on a photo is 0.01 in., and one desires to measure vertical objects to a resolution of 5 ft., how far does the top of the object need to be displaced from the center of the photo? r = d*H’/h = 0.01 in. * 1200 ft. / 5 ft. r = 2.4 in.

42. Error in horizontal location due to relief displacement Remember points on a photo need to be projected to a map projection’s datum, which is usually sea level. This creates the orthographic projection. To perform the projection one needs to know the elevation of the point. Error in elevation results in error in the projected horizontal location.

43. Example A point with a 20 ft. error in elevation is located 4 inches from the center of a photo whose flying height is 1200 ft. The 20 ft. error can be considered relief displacement: d = (4 in. * 20 ft.) / 1200 ft. = 0.07 in. The 0.07 inches represents a horizontal distance error. Using the scale equation, its horizontal ground distance would be AB = (1200 ft. * 0.07 in.) / 6 in. = 14 ft. In other words, a 14 ft. difference in ground position is attributable to a 20 ft. elevation difference! This shows why elevation difference must be accounted for.
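
A Python sketch verifying the relief-displacement examples on slides 40, 41, and 43 (names are illustrative):

```python
# Relief displacement: d/h = r/H'  ->  h = d * H' / r
# Slide 40: building height from relief displacement
print(0.05 * 3000.0 / 3.5)       # ~42.9 ft (43 ft on the slide)

# Slide 41: radial distance needed to resolve 5 ft at H' = 1200 ft
print(0.01 * 1200.0 / 5.0)       # 2.4 in.

# Slide 43: horizontal ground error from a 20 ft elevation error
d = 4.0 * 20.0 / 1200.0          # relief displacement on the photo (in.)
AB = 1200.0 * d / 6.0            # scale equation converts it to ground distance
print(d, AB)   # ~0.067 in. and ~13.3 ft (the slide rounds d to 0.07 in., giving 14 ft)
```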

44. Thoughts on the horizontal error due to elevation error example Points nearer the principal point have less error, as “r” (radial distance to top of object) is small. Higher flying heights can relate to greater map positional error, as the same measuring error relates to a larger ground distance. Error in the elevation model being used is very difficult to determine, and will change value for different points.

45. Measurement of elevation difference by parallax Photogrammetry is capable of measuring elevation differences through the use of parallax. Parallax is defined as the apparent displacement of a point due to a change in view of the point.

46. Parallax human eye example Hold your finger out in front of you and look at where it is relative to a wall in the background with your right eye. Then look at it with your left eye and its appearance relative to the wall has changed. The relative change in appearance is due to parallax.

47. Parallax in photogrammetry is the change in position of the same point on two overlapping photos due to the change in position of the exposures. The change is along the flight line between the exposures of the overlapping photos which is roughly defined as the X axis for photocoordinate measurement. Parallax = x left photo coor. – x right photo coor. (assuming x axis parallel to flight line)

48. Epipolar line – any line on a photo parallel with the flight line axis – parallax occurs along epipolar lines Defining the flight line requires locating a “conjugate principal point” which is a principal point image transferred to its location on an overlapping photo. The line between the principal point and a conjugate principal point is the flight line axis

49. Now think of a point close to the overlapping exposures and a point far away. The closer point will "shift" more than the point further away. In aerial photography, points of higher elevation will have larger parallaxes than points of lower elevation (further from the exposures).

50. Assuming vertical photography and exposures from the same flying height, elevation difference is determined by dh = ha - hc = (dp * H') / pc where dh = change in elevation between two points a and c, dp = parallax of point a - parallax of point c, H' = flying height, and pc = parallax of point c, where the parallax of a point is the change in x coordinates with the x axis parallel to the flight line.

51. Note any point being measured has to appear on both photos that overlap!!! If point c has a known elevation (benchmark) and its parallax can be measured: Any point a whose parallax can be measured can have an elevation difference from c to a computed for it thus: Any point a’s elevation can be computed relative to point c’s known elevation Conclusion – parallax measurement enables computation of an elevation model

52. Example Benchmark c has an elevation of 1545.32 ft., an x coor. on the left photo of +74.12 mm and on the right photo of -18.41 mm. Unknown point a has an x coor. on the left photo of +65.78 mm and on the right photo of -24.38 mm. If the flying height above average ground is 3000 ft., what is the elevation of point a? Parallax is the change in x coordinates defined as parallel to the flight line. Parallax c = 74.12 – (-18.41) = 92.53 mm. Parallax a = 65.78 – (-24.38) = 90.16 mm.

53. dh = ha - hc = (dp * H') / pc; dp = pa – pc = 90.16 – 92.53 = -2.37 mm; dh = (-2.37 mm) * (3000 ft.) / 92.53 mm = -76.84 ft. Elev a = elev c + dh = 1545.32 + (-76.84) = 1468.48 ft. Note point a has less parallax, so it is at a lower elevation than point c.
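
The parallax computation above, as a runnable check:

```python
# Elevation difference by parallax: dh = dp * H' / pc
p_c = 74.12 - (-18.41)      # parallax of benchmark c (mm)
p_a = 65.78 - (-24.38)      # parallax of unknown point a (mm)
dp = p_a - p_c              # -2.37 mm
dh = dp * 3000.0 / p_c      # flying height H' = 3000 ft
elev_a = 1545.32 + dh
print(p_c, p_a, round(dp, 2))   # 92.53, 90.16, -2.37
print(round(elev_a, 2))         # 1468.48 ft
```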

54. The standard format of an aerial photo is 9 in. by 9 in. Standard overlap between successive photos in a flight line is 60%, which means an advance of 40% of the format. 40% of 9 in. is 3.6 in., which can be assumed to be the "average" parallax for points on a photo. The ground distance related to the advance between photos is known as the air base (3.6 in. * H’ / focal length).

55. Let's use this parallax of 3.6 in. as point c and assume a 1200 ft. flying height. Assuming the accomplished stereoplotter operator can measure parallax differences pessimistically to 0.010 mm: dh = (0.010 mm * 1200 ft) / (3.6 in * 25.4 mm/in) = 0.13 ft.

56. This same type of computation at the same flying height was performed for horizontal ground measuring resolution and resulted in 0.08 ft. Thus photogrammetry is more capable of producing higher horizontal than vertical accuracies. This is offset in production by utilizing more vertical control points when compared to required horizontal control points.

57. Ground X,Y coordinates from parallax (0,0) is the left exposure and +X is in the flight line direction X = (B/p) * x and Y = (B/p) * y Where X,Y = ground coor. x,y = photo coor. on left photo based on flight line axis being the x axis p is the measured parallax of the point B is the air base

58. Air Base calculation Find a conjugate principal point as previously discussed. Measure the photo distance from the principal point to the conjugate principal point (o-o’). Multiply o-o’ by H’/f as in the scale equation. Note now you have ground X,Y,Z coordinates from parallax!!
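
Putting the parallax and air-base pieces together, a hedged sketch of ground X,Y for one measured point (the photo coordinates below are invented for illustration, not from the slides):

```python
# Ground X,Y from parallax, with the origin at the left exposure and +X
# along the flight line: X = (B/p)*x, Y = (B/p)*y.
f_mm = 152.4                   # 6 in. focal length in mm
H = 3000.0                     # flying height (ft)
o_oprime_mm = 3.6 * 25.4       # principal point to conjugate p.p. distance (mm)
B = o_oprime_mm * H / f_mm     # air base by the scale equation (ft) -> 1800 ft

x_left, y_left = 42.10, -17.35 # left-photo coords of a point (mm), made up
x_right = -49.25               # same point's x on the right photo (mm), made up
p = x_left - x_right           # parallax = 91.35 mm

X, Y = (B / p) * x_left, (B / p) * y_left
print(round(B, 1), round(X, 2), round(Y, 2))   # 1800.0, 829.56, -341.87
```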

59. Note it is very easy to enter different flying heights and measuring resolutions in the scale, relief displacement, and parallax equations. Also realize since these equations make several assumptions they are only useful for rough computations, and not final map production.

60. The elevation difference accuracy of 0.13 ft. from 1200 ft. flying height illustrates the limit of fixed wing aircraft photogrammetry If elevation accuracies of 0.10 ft. or less are required for a project, one has to consider helicopter photogrammetry as it allows lower flying heights But lower flying height means more photos per project and thus higher costs

61. Elevation difference by parallax vs. relief displacement Relief displacement can only measure vertical objects Parallax measures elevation difference between any two points in same overlap region between two photos Relief displacement uses only one photo but needs the vertical object displaced from the center of the photo. Both simplified equations assume vertical photographs

62. Scale, relief displacement and parallax equations All simple equations based on vertical photography and similar triangles Have excellent use for “rough” measurements with a scale Have excellent use in estimating accuracy of product by placing measuring error in photo distance, relief displacement photo distance, or parallax difference unknowns.

63. Stereo Viewing 3-D movies – the latest! 2 cameras offset in position film the set Both images are displayed on the movie screen or TV at different “frequencies” Your glasses force the left eye to only see the left image, and the right eye only see the right image This creates a stereo image (3-D effect)

64. Stereo Viewing – Aerial Photography Camera axes are near vertical Exposure stations are taken so two successive photos overlap approximately by 60% Forcing your left eye to view the left image and right eye view the same portion of the photo on the right image creates stereo viewing

65. Possible ways to view in stereo Forced viewing in oculars 1 eye:1 image-stereoscope Color – red vs. blue/green image & glasses Shutter Image Alternators – SIA – shutters move very quickly in sync so left open when right closed, and vice versa Polarized viewing – same polarized in left image & left viewing and vice versa for right

66. Is stereo vision required to measure parallax? If a point is monoscopically identifiable on both photos (manhole, end of paint stripe, sidewalk intersection, driveway corner, etc.) one can measure without stereo viewing A monoscopically identifiable image can have coordinates measured for it on any photo, enabling a parallax computation

67. Is stereo vision required to measure parallax? If a point is not monoscopically identifiable (point in grass or dirt, random point on pavement, etc.) its parallax measurement requires stereoviewing The big question – how can the same exact point on a grass lawn be identified on multiple photos?

68. General parallax measurement requires superimposition! Superimposition – if an artificial mark is placed on one photo (an “X” in the grass) and that photo is viewed in stereo with an overlapping photo not containing the “X”: In stereo the X will “appear” to be in both photos. Therefore it can be located on the photo where it does not “appear”. An example would be placing a mark with a pencil in that location on the photo where it does not appear.

69. General parallax measurement requires superimposition! Superimposition “tricks” our stereo perception ability into seeing the single mark on both photos. This allows the same undefined image location to be “transferred” to overlapping photos If the mark on the overlapping photos is erroneously located, in stereo one will see both marks instead of the superimposition of both marks

70. Floating mark/half mark Artificial marks are superimposed on a left and right image on an epipolar line Holding the left mark fixed in position, the right mark is allowed to move along the epipolar line (or vice versa) When the two marks are close to the same image location they will appear to merge and rise and fall relative to the stereo image

71. Floating mark/half mark When the “merged” mark appears to be “on the ground” in the stereo image you are measuring the same image point in both photos. This enables a parallax measurement. Measuring using the floating mark is very difficult at first as we are not used to stereo measurement. An accomplished photogrammetrist performs this task with routine ease due to hours of practice.

72. Modern Photogrammetric Parallax Measuring Ability Modern photogrammetric techniques allow photocoordinates, and therefore parallax, to be measured to 0.006 – 0.030 mm (or 6 – 30 microns as 1 micron = 0.001 mm) Measuring ability is affected by operator ability, image quality, and whether the image is distinctly monoscopically identifiable or measured through stereo viewing

73. Coordinate Transformations Various parts of photogrammetry involve both 2-D and 3-D coordinate transformations. Many measurements, and some interim calculations, are performed in assumed (arbitrary) coordinate systems. The assumed coordinates need to be scaled, rotated, and translated into coordinate systems with defined references. If the same point has coordinates in both systems, it can be used to determine the coefficients for the transformation.

74. Photocoordinate measuring Precise x,y coordinate measurement was performed on mechanical devices called comparators with detailed visual magnification for a user. Monocomparators measured one photo; stereocomparators allowed measurement of photocoordinates on two overlapping photos. Comparators are digitizing tablets on steroids, as their least count resolution was usually 1 micron with actual coordinate measuring ability in the 3-30 micron range.

75. Comparator evolution Comparators initially had dial type coordinate readouts Encoding the dials allowed the coordinate measurements to be stored on a computer Stereoplotters combined coordinate measuring, orientation of stereo photos, stereo vision, and map compilation Today an image (or images in stereo) on the computer screen is the comparator Pixels on a computer screen are an arbitrary coordinate system

76. What does comparator mean? Comparators, including digitizing tablets, actually measure coordinate differences. The origin of the comparator system is very arbitrary as its location is not important to the ensuing measurement process. It is similar to assumed coordinates in plane surveying. The location of the origin is not important.

77. 2-D photocoordinate measurement Fiducial marks have camera calibration derived photo coordinates relative to the principal point. Inner orientation is the process of measuring fiducial marks in the comparator arbitrary coordinate system These arbitrary coordinates are associated with the calibration fiducial coordinates.

78. 2-D photocoordinate measurement Film can shrink or expand. Example: monocomparator measurement of two fiducials (in mm) gave x = -0.246, y = +114.921 and x = -114.303, y = +3.034, and the equivalent calibration fiducial coordinates were x = +0.028, y = +113.029 and x = -112.976, y = -0.013.

79. 2-D photocoordinate measurement A scale coefficient can be computed by Scale = fiducial dist. / comparator dist. By the Pythagorean theorem: fiducial dist. = 159.839, comparator dist. = 159.774. Scale = 159.839 / 159.774 = 1.00041. Greater than one means the photo has shrunk, so it needs mathematical enlargement.
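
These fiducial numbers can be checked directly in Python (math.dist is the plain Euclidean distance):

```python
import math

# Film shrinkage check from the two fiducials on the slides (mm)
comp_a, comp_b = (-0.246, 114.921), (-114.303, 3.034)   # comparator coords
cal_a, cal_b = (0.028, 113.029), (-112.976, -0.013)     # calibration coords

comp_dist = math.dist(comp_a, comp_b)
cal_dist = math.dist(cal_a, cal_b)
print(round(cal_dist, 3), round(comp_dist, 3))  # 159.839, 159.774
print(round(cal_dist / comp_dist, 5))           # 1.00041 -> the film shrank
```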

80. 2-D conformal transformation 4 unknowns (1) scale – accounts for film shrinkage/expansion (2) rotation (3) x translation (4) y translation Conformal means a horizontal angle stays the same value before and after transformation as only one scale exists

81. 2-D conformal transformation xp = s*x*cos(t)-s*y*sin(t)+Tx yp = s*x*sin(t)+s*y*cos(t)+Ty Where xp, yp = photocoordinates x,y = comparator (assumed) coor. s = scale, t = rotation angle, Tx = x translation, and Ty = y translation

82. 2-D conformal transformation Solving for the rotation angle directly makes the equations non-linear, which is harder to solve. Fortunately we can substitute a = s*cos(t) and b = s*sin(t) to turn the equations into the linear form xp = a*x - b*y + Tx, yp = b*x + a*y + Ty, which is much easier to solve.
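
A minimal least-squares sketch of this linearized conformal fit, using numpy only; the fiducial values are made-up placeholders, not calibration data:

```python
import numpy as np

# 2-D conformal transformation in linear form:
#   xp = a*x - b*y + Tx
#   yp = b*x + a*y + Ty     with a = s*cos(t), b = s*sin(t)
# Each measured fiducial contributes two rows to the design matrix.

def fit_conformal(comp_xy, photo_xy):
    """Solve for (a, b, Tx, Ty) from matched comparator/photo points."""
    rows, rhs = [], []
    for (x, y), (xp, yp) in zip(comp_xy, photo_xy):
        rows.append([x, -y, 1.0, 0.0]); rhs.append(xp)
        rows.append([y,  x, 0.0, 1.0]); rhs.append(yp)
    A, L = np.array(rows), np.array(rhs)
    params, *_ = np.linalg.lstsq(A, L, rcond=None)
    residuals = L - A @ params          # misfit at each measured coordinate
    return params, residuals

# Illustrative (made-up) fiducial measurements; 3 points give redundancy
comp = [(0.1, 110.0), (-110.2, 0.3), (0.2, -109.8)]
photo = [(0.0, 110.05), (-110.15, 0.0), (0.0, -109.9)]
(a, b, Tx, Ty), v = fit_conformal(comp, photo)
scale = np.hypot(a, b)                  # s = sqrt(a^2 + b^2)
rotation = np.arctan2(b, a)             # t = atan2(b, a)
print(scale, np.degrees(rotation), Tx, Ty)
```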

83. 2-D conformal transformation Each coordinate is a measurement. To solve the transformation we need the number of measurements to be greater than or equal to the number of unknowns – 4 in this case. Thus two measured fiducial marks are 4 measured coordinates so the transformation can be solved for

84. 2-D conformal transformation One measured fiducial mark can only solve for an estimate of x and y translation. No scale or rotation can be resolved, as those quantities require distance and direction – both require two points. Two measured fiducial marks generate 4 equations that can be uniquely solved for the 4 unknowns. Once scale, rotation, and the two translations are solved for, any measured comparator coordinates can be converted to photo coordinates.

85. 2-D conformal transformation BUT!!!! Two measured fiducial marks afford no check – a blunder would be undetected mathematically Three or more measured fiducial marks (note 4 or 8 fiducial marks exist on metric film cameras) afford a redundant solution Redundant solutions are generally solved by least squares, which minimizes the “sum of the squares of the weighted residuals”.

86. 2-D conformal transformation In our case all measurements are treated as equally weighted as measured with the same equipment on a unique photo. Each fiducial coordinate will have a residual computed for it which is how it misfits the least squares best fit results. Residuals are estimates of data quality. With only 2 measured points residuals are zero as no redundancy exists.

87. Magnitude of residuals Reasonable magnitudes become logical through many measurements. Based on current measurement technologies, photo coordinate residuals should be in the 3 – 15 micron range. Residuals larger than normal are a product of measurement error, incorrect entry of fiducial calibration coordinates, or a problem with film processing or scanning (example – film flattening mechanism was not functioning)

88. Unique properties of film Film grain actually has a proven tendency to shrink/expand different amounts in the x and y directions. This means the assumption of one scale in the 2-D conformal transformation can be considered invalid for film. Therefore the 2-D affine transformation can be considered more valid for inner orientation of a film derived image (scanned data is derived from film)

89. 2-D affine transformation 6 unknown transformation parameters 2 scales 2 rotations 2 translations Thus measurement of two fiducial marks does not afford a solution as 4 measurements cannot solve for 6 unknowns.

90. 2-D affine transformation 3 measured fiducial points results in 6 measured coordinates thus a unique solution (no redundancy) 4 or more fiducial points results in a redundant solution, a least squares best fit solution, and analysis of residuals A horizontal angle may not be preserved before/after transformation as two scales are being utilized in the transformation
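
For comparison with the conformal sketch earlier, a minimal affine fit (again numpy-only and illustrative):

```python
import numpy as np

# 2-D affine: xp = a0 + a1*x + a2*y ; yp = b0 + b1*x + b2*y
# Six unknowns, so at least 3 fiducials (6 measured coordinates) are needed.
def fit_affine(comp_xy, photo_xy):
    A = np.array([[1.0, x, y] for x, y in comp_xy])   # shared design matrix
    coef, *_ = np.linalg.lstsq(A, np.array(photo_xy), rcond=None)
    return coef   # column 0 = (a0, a1, a2) for xp; column 1 = (b0, b1, b2) for yp

# Made-up fiducials; 4 points give a redundant solution with residuals
comp = [(0.1, 110.0), (-110.2, 0.3), (0.2, -109.8), (110.1, 0.1)]
photo = [(0.0, 110.1), (-110.0, 0.0), (0.0, -109.95), (110.05, 0.0)]
print(fit_affine(comp, photo))
```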

91. 3-D coordinate transformation Uses (1) convert assumed ground coordinates (such as derived from the parallax equations) into ground survey based coordinates (usually in a defined map projection system) (2) merge 2 arbitrary coordinate systems (2 overlapping stereomodels) into one system. Arbitrary 3-D coordinates derived from a stereomodel are usually called “model” coordinates.

92. 3-D coordinate transformation A 3-D coordinate system is defined by (1) an origin (fixes 3 coordinates or 3 coordinate translations) (2) the direction of the 3 coordinate axes (fixes 3 rotations, one about each axis) (3) scale – photogrammetry does not use an EDM or tape, which defines scale in ground surveying. 3 translations + 3 rotations + 1 scale = the 7 unknowns of the 3-D coordinate transformation.

93. 3-D coordinate transformation 7 unknown transformation parameters require 7 coordinate measurements to be made. 2 common 3-D points only yield 6 measured coordinates – one measurement short of a unique solution. Two 3-D points do not define a vertical datum, so the third point for a unique solution only requires an elevation.

94. 3-D coordinate transformation 3 or more 3-D points measured in both systems enables a redundant solution. Therefore a least squares solution yields residuals for each measured point with coordinates in both systems Blunders in image identification, assigning coordinates to the wrong control point identifier, and errors in the field survey can all lead to residuals which indicate blunders
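
A hedged sketch of the 7-parameter solution using an off-the-shelf non-linear least-squares solver; the points, the rotation-order convention, and the starting approximations are our assumptions, not the slides':

```python
import numpy as np
from scipy.optimize import least_squares

# 7-parameter (3-D conformal) transformation: scale s, rotations (w, p, k),
# translations (Tx, Ty, Tz). The model is non-linear, so it is solved
# iteratively from approximations, as described for collinearity as well.

def rotation_matrix(w, p, k):
    """Sequential rotations about the x, y, z axes (one common convention)."""
    cw, sw, cp, sp, ck, sk = np.cos(w), np.sin(w), np.cos(p), np.sin(p), np.cos(k), np.sin(k)
    Rx = np.array([[1, 0, 0], [0, cw, -sw], [0, sw, cw]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def residuals(params, model_pts, ground_pts):
    s, w, p, k = params[:4]
    T = params[4:]
    pred = s * (rotation_matrix(w, p, k) @ model_pts.T).T + T
    return (pred - ground_pts).ravel()

# Illustrative model points (made up); 3+ full 3-D points give redundancy
model = np.array([[0.0, 0.0, 0.0], [100.0, 0.0, 1.0],
                  [0.0, 100.0, -1.0], [100.0, 100.0, 0.5]])
truth = rotation_matrix(0.01, -0.02, 0.3)
ground = 2.5 * (truth @ model.T).T + np.array([5000.0, 3000.0, 100.0])

x0 = np.array([1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])  # crude approximations
sol = least_squares(residuals, x0, args=(model, ground))
print(np.round(sol.x, 4))   # recovers s = 2.5, the angles, and the translations
```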

95. 3-D coordinate transformation Could a 3-D affine transformation exist? Mathematically – yes. Logically no as 3-D coordinate transformations are not used for measurements of photo coordinates The unique scale property of film thus does not apply to 3-D coordinate transformations in photogrammetry

96. Collinearity – the reality of analytical photogrammetry Collinearity – the ground point, the nodal point of the lens, and the image point all lie in a straight line The ground point is defined by 3 unknowns – X, Y, Z The image point is defined by 2 measurements – x,y photocoordinates The nodal point of the lens is called the exposure station. It has six unknowns – 3 coordinates – X,Y,Z (in the same system as the ground point) and 3 rotations defining the direction of the camera axis relative to the ground coordinate system. The rotations are historically Greek letters omega phi kappa about X, Y, and Z respectively. We will let W, P, K represent omega, phi, kappa respectively. Note kappa (about Z) relates mostly to the direction of flight relative to the ground coordinate system.

97. Collinearity For a point a on a photo derived from exposure L xa,ya = a’s photocoordinates XA,YA,ZA = A’s ground coordinates WL, PL, KL, XL, YL, ZL = exposure L’s unknown camera angles and coordinates

98. The Collinearity condition xa = f (WL, PL, KL, XL, YL, ZL, XA,YA,ZA) ya = g (WL, PL, KL, XL, YL, ZL, XA,YA,ZA) Where f and g represent mathematical functions The measured photo coordinates are a function of the exposure station unknowns and the ground station unknowns. This is an observation equation – the measurement is described in terms of the unknowns.
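
One common explicit form of the collinearity equations, as a Python sketch (rotation-order and sign conventions vary between texts, so treat this as illustrative rather than the slides' own derivation):

```python
import numpy as np

# M rotates ground-axis coordinate differences into the photo system;
# f is the focal length.

def m_matrix(W, P, K):
    Rw = np.array([[1, 0, 0], [0, np.cos(W), -np.sin(W)], [0, np.sin(W), np.cos(W)]])
    Rp = np.array([[np.cos(P), 0, np.sin(P)], [0, 1, 0], [-np.sin(P), 0, np.cos(P)]])
    Rk = np.array([[np.cos(K), -np.sin(K), 0], [np.sin(K), np.cos(K), 0], [0, 0, 1]])
    return Rk @ Rp @ Rw

def collinearity(f, W, P, K, XL, YL, ZL, XA, YA, ZA):
    m = m_matrix(W, P, K)
    d = np.array([XA - XL, YA - YL, ZA - ZL])   # ground point minus exposure
    xa = -f * (m[0] @ d) / (m[2] @ d)
    ya = -f * (m[1] @ d) / (m[2] @ d)
    return xa, ya

# Sanity check: a perfectly vertical photo (W = P = K = 0) at H' = 3000 ft
# reduces to the scale equation: photo dist = ground dist * f / H'.
print(collinearity(152.4, 0, 0, 0, 0.0, 0.0, 3000.0, 100.0, 200.0, 0.0))
# -> (5.08, 10.16) mm, i.e. 100 ft and 200 ft at scale 152.4 mm / 3000 ft
```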

99. Ground control and collinearity Ground control is either (1) targeted (a plastic, fabric, or painted cross or “T”) in an open location such as a road intersection or parking lot. This requires marking the control point before the flight. (2) Photo id’s or “Picture points” – monoscopically identifiable points are located after the flight has occurred – manholes, sidewalk intersections, end of paint stripes, etc.

100. Exposures and collinearity Historically the exposure station unknowns had to be solved based on measured ground control coordinates. Today airborne GPS and IMU (Inertial Measuring Unit, i.e. gyroscopes) are used to measure the exposure station position and orientation. GPS-IMU has accuracy limitations and thus cannot be relied on for precise engineering product.

101. Collinearity – solving it Collinearity is what is called a non-linear system, which is difficult to solve. An example of two linear equations: X + 2Y = 5 and X - Y = -1. An example of two non-linear equations: sqrt(X) + sin(Y) = 0.12 and cos(X) - 0.2*Y = -0.17.

102. Collinearity – solving it Linear equations have no powers or trig functions, and thus can be solved by adding multiples of equations to each other to eliminate terms Non-linear equations due to powers and/or trig functions cannot be solved by adding multiples of equations to each other – the unknown terms cannot be eliminated

103. Collinearity – solving it Non-linear equations are solved through a calculus process called linearization. To use this process (1) approximations must exist for all unknown ground coordinates and exposure unknowns (2) the solution solves for updates to the approximations, and iterates until the updates to the approximations become insignificant

104. Collinearity applications (1) Relative orientation – solving for the relative relationship of a right photo to an arbitrarily held fixed left photo (2) Bundle adjustment – simultaneously solving for all ground coordinates of all points with measured photocoordinates, along with all exposure unknowns

105. Historical Aerotriangulation (AT) Control densification by photogrammetry to minimize ground control requirements and validate harmony of ground control and photocoordinate measurements Today measured exposure station unknowns by GPS-IMU are included Requiring multiple control points in every stereomodel would be cost and time ineffective if AT can provide “bridged” control between a sparser ground control network

106. Historical Aerotriangulation (AT) Consisted of 3 primary mathematical steps after all photocoordinates were measured: (1) relative orientation – create unique assumed 3-D model coordinates for each stereomodel; (2)(a) strip/block adjustment – combine all unique model coordinate systems into one combined coordinate system using common points between stereomodels and flight lines (strip combines models along a flight line; block combines flight lines after they are joined by strip); (2)(b) strip/block adjustment – convert the common assumed system into the ground coordinate system using measured ground control points.

107. Historical Aerotriangulation (AT) step 3 Bundle adjustment – the simultaneous least squares best fit of all photo coordinates, measured ground control coordinates, and measured exposure station coordinates, solving for any remaining unknown parameters. The bundle adjustment was too mathematically intense until mainframe computers existed, and is of course routine on today’s personal computers. Relative orientation and the strip/block adjustment provide the initial approximations for all unknown parameters in the bundle adjustment.

108. Historical Aerotriangulation (AT) Three unknown points are selected down the center of a photo – one each near the top, middle, and bottom – called pass points as they pass control. These three points could be distinct images, or could be artificially marked by a small drill into the photo called a “pug”, as the instrument is called a pug machine. Due to 60% overlap, these three points will appear in the photo immediately left or right along the flight line (an end photo only overlaps in one direction). With multiple flight lines, the top and bottom points usually appear in the overlap between flight lines (20% overlap is common across flight lines) – these pass points are also tie points as they “tie” across flight lines.

109. Historical Aerotriangulation (AT) A point in the overlap region across flight lines could potentially be on 6 photos, 3 photos on each line, due to 60% endlap along the flight lines and 20% sidelap across flight lines. Along a flight line, points in the middle of a photo can be viewed in 2 stereomodels, as the photo is a right photo in one stereomodel and a left photo in the second stereomodel. It is imperative that these points in the middle of the photo are not placed too far left or right, or they will not appear in both consecutive stereomodels.

110. Historical Aerotriangulation (AT) A pug (artificial) mark takes advantage of superimposition When viewed in stereo, the pug mark “appears” to be in the photo in which it is not marked, and thus in stereo can be measured on the un-marked photo If the point is monoscopically identifiable measurement does not require stereo viewing In addition to pass points, ground control points are also measured when viewable

111. Historical Aerotriangulation (AT) Thus a stereomodel has six pass points in it, plus possibly some control points. The left three points overlap into the next stereomodel to the left. The right three points overlap into the next stereomodel to the right. A unique point is assigned the same point name/identifier no matter what photo it is measured on. Usually the point name is derived from what photo’s center the point is on. Control points usually use the point name assigned by the surveyor who put ground coordinates on it.

112. Relative Orientation Involves only one stereomodel (2 photos) The six exposure unknowns of the left photo are all held “fixed” at zero. In addition the X coordinate of the right photo is held fixed at an arbitrary distance to define scale. Note we have fixed 7 parameters – six on the left exposure and one on the right exposure – 7 parameters uniquely describe a 3-D coordinate system.

113. Relative Orientation Each measured point has three unknowns in collinearity – ground X,Y,Z – though in relative orientation it is called model X,Y,Z as the ground control coordinates are not used Each measured point generates 4 collinearity measurements – photo x, photo y on the left photo; and photo x, photo y on the right photo

114. Relative Orientation n = # unknowns = 5 unknowns for the right exposure + 3 * # of measured points; m = # measurements = 4 * # points. For # points = 1: m = 4 * 1 = 4 and n = 5 + 3 * 1 = 8. 8 unknowns and 4 measurements cannot be solved, as # measurements must be equal to or greater than # unknowns.

115. Relative Orientation 2 measured points: m = 8, n = 11; 3 measured points: m = 12, n = 14; 4 measured points: m = 16, n = 17; 5 measured points: m = 20, n = 20, so uniquely solved (not redundant); 6 measured points: m = 24, n = 23, redundant so residuals can be calculated! Note 6+ points are measured in each stereomodel.

116. Relative Orientation Thus after relative orientation is completed each stereomodel has X,Y,Z model coordinates for each measured point Unfortunately each stereomodel is its own unique arbitrary coordinate system Fortunately overlapping stereomodels have 3 common points between them (the 3 points down the center of the common photo)

117. Strip Adjustment A series of 3-D conformal coordinate transformations to produce one common arbitrary coordinate system Three common points to overlapping stereomodels create 9 measurements (3X, 3Y, 3Z coordinates). The 3-D conformal transformation has 7 unknowns so a redundant solution enables the transformation to occur and residuals calculated to assess data quality

118. Strip Adjustment Sequentially all stereomodels are converted into one arbitrary coordinate system by one 3-D conformal transformation after another. The block adjustment uses tie points across flight lines and the 3-D conformal transformation to put all flight lines/all stereomodels into one common arbitrary coordinate system. Redundancy allows calculation of residuals in every transformation.

119. Strip/Block Adjustment to Ground Note control was also measured during the relative orientation/aerotriangulation collection process so these points have model X,Y,Z coordinates for them. These control points also have ground X,Y,Z coordinates. They thus allow a 3-D coordinate transformation of the arbitrary coordinate system into the ground coordinate system. Hopefully redundant control stations (3 or more) enable calculation of residuals that test the consistency of the ground control. At this point all measured AT points have ground X,Y,Z coordinates and exposure stations can all have values calculated for their unknown parameters. The initial approximations for the bundle adjustment have thus been calculated!

120. Strip/Block Adjustment to Ground Historically, polynomial corrections were often added during the adjustment of model coordinates to ground to attempt to correct the sequential error build-up of multiple 3-D conformal transformations. The polynomial corrections were used because the bundle adjustment was too computationally intense before modern computers. Today the bundle adjustment occurs next, eliminating any sequential error build-up of the multiple coordinate transformations. Thus polynomials are not required to simply generate initial approximations for the bundle adjustment.

121. Bundle adjustment The relative orientation and strip/block adjustment help find blunders and set up the bundle adjustment with initial approximations Possible input to bundle adjustment Photo coordinates – collinearity Ground Control coordinates Exposure station coor. and angles (if airborne GPS-IMU is used)

122. Bundle adjustment Output of bundle adjustment Ground X,Y,Z of any points with measured photo coordinates W,P,K,X,Y,Z of all exposure stations It is a simultaneous least squares solution so does not contain the problem of systematic error buildup as in rel. or., strip, block adjustment process

123. Bundle adjustment Least squares is enhanced by proper user-defined error estimation. Photo coordinate error estimates of 3–20 microns, depending on the quality of imagery and the abilities of the photogrammetrist. Ground control error estimates are desired to be fixed (0.001 ft. is mathematically the same as fixed), but in reality no survey product is perfect, and no photogrammetrist can measure exactly the same point the surveyor measured, as images have to be visually “interpreted”. Exposure station coordinates and angles are derived from the GPS-IMU processing, which is measurement based and thus is not perfect.

124. Bundle adjustment Ground control error estimation Larger error estimates could be placed on picture points than targeted control, as a target is better defined both in a ground survey and in measuring an image on a photograph. Larger error estimates would be placed on traverse-derived coordinates vs. fixed-ambiguity GPS coordinates. Larger error estimates would be placed on trig-leveling-derived elevations vs. those derived from differential leveling.

125. Bundle adjustment Least squares minimizes the sum of the squares of (weight * residual) Where weight = 1 / error estimate Note residual / error estimate is unitless. This allows the proper mixing of different types of measurements (photo coordinates, ground control coor., exposure measurements) properly in one simultaneous analysis

126. Bundle adjustment control requirements - Historical Pre airborne GPS-IMU Mathematically 2 X,Y,Z control points and 1 Z-only control point define a 3-D coordinate system and will allow the bundle adjustment to function. Unfortunately no check on the quality of the control coordinates would exist. Errors would propagate significantly on a large job where the control is separated by large numbers of photos.

127. Bundle adjustment control requirements - Historical Pre airborne GPS/IMU Mathematically the bundle adjustment is weaker in vertical than horizontal because the triangles created by collinearity are longer and skinnier in the vertical dimension. Thus a realistic rule of thumb to prevent significant error propagation is horizontal control every 4-6 photos and vertical control every 3-4 photos. More vertical control attempts to tighten up the weaker vertical geometry of AT/collinearity. Realistically, using GPS all control is 3-D, so the concept of a 2-D or 1-D only control point goes away.

128. Bundle adjustment control requirements – using airborne GPS-IMU Mathematically no ground control is required as the exposure unknowns have been determined Realistically control should be placed in the 4 corners of the job to provide a realistic check on the GPS-IMU solutions Note GPS-IMU is not yet accurate enough to serve as control for precise engineering design photogrammetric projects

129. Bundle adjustment control requirements – using airborne GPS-IMU Note one of the reasons for relative orientation and strip adjustment was to generate approximations for exposure station unknowns If GPS-IMU is used, the exposure unknowns become measured. They can be used to solve for any ground coordinate approximations using collinearity, and thus the need for relative or. and strip adjustment prior to the bundle adjustment is eliminated.

130. Using error estimation in the bundle adjustment to find blunders Robust estimation is this procedure: (1) the adjustment runs with user-defined error estimation; (2) new error estimate = (old error estimate + abs. value(residual)) / 2. The old error estimate and the residual are averaged to create the new error estimate. Strong measurements are held tighter, weak measurements are held weaker, and the poorer measurements are filtered out as potential blunders. Larger error estimates allow larger residuals to exist on the weaker measurements.
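
A toy sketch of this reweighting rule on a one-unknown problem (the observations are invented; the update rule is the one stated above):

```python
import numpy as np

# Repeated measurements of one distance, with one blunder. Minimizing
# sum((residual / error_est)^2), per the weight = 1/error estimate rule,
# for a single unknown is just a weighted mean with weights 1/error_est^2.
obs = np.array([100.02, 99.98, 100.01, 99.99, 100.75])  # last one is a blunder
err = np.full(obs.shape, 0.03)                          # initial error estimates

for _ in range(5):
    w = 1.0 / err**2
    estimate = (w * obs).sum() / w.sum()      # weighted least squares solution
    resid = obs - estimate
    err = (err + np.abs(resid)) / 2.0         # new est. = (old est. + |residual|)/2

print(round(estimate, 3))    # settles near 100.00; the blunder is down-weighted
print(np.round(err, 3))      # the blunder's error estimate grows, flagging it
```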

131. Bundle adjustment Residual rule of thumb (all adjustments) If a residual is more than three times its corresponding error estimate, you are 95% confident something is suspect. Residuals will be larger than their corresponding error estimates approximately 33% of the time – in other words, at one standard deviation, residuals will be less than their error estimates 67% of the time. So do not worry if a residual is larger than an error estimate – start worrying when it gets near three times the size of the error estimate.

132. Bundle adjustment But be careful – Example Unusually large photo coordinate residuals exist on a ground control point. Re-measuring the photo coordinates on that point produced no different results. If the control point’s ground coordinates were held fixed, it is very possible that is the source of the problem. By fixing the control it cannot adjust – its residuals are zero – so the misfit is converted into the photo coordinates, as those were assigned reasonable error estimates.

133. Absolute orientation Before map compilation one additional mathematical step is required, called absolute orientation. It is a one-stereomodel coordinate conversion to ground – thus a 3-D conformal transformation. All bundle adjustment points will have model coordinates from relative orientation and ground coordinates from the bundle adjustment. Thus at least six points from the bundle adjustment will exist in each stereomodel, so absolute orientation is very redundant.

134. Absolute orientation Absolute orientation validates how the model coordinates fit the final bundle adjusted ground coordinates with a 3-D conformal transformation without polynomials Residuals are an indicator of how good mapped positions will be both horizontally and vertically More importantly it shows how mapped positions in successive stereomodel overlap regions, or in flight line overlap regions, fit each other.

135. Absolute orientation Fit across models example Vertical Point 121 in model 1-2 has a vertical residual of -0.20 ft. Point 121 in model 2-3 has a vertical residual of +0.31 ft. Rule of thumb is control should fit to better than ½ the desired contour interval

136. Absolute orientation -0.20 and +0.31 residuals are less than ½ the contour interval BUT!! The difference in residuals on the same point (note one is negative and one is positive) indicate a misfit across stereomodels of 0.51 ft. in the region around that point! The ½ contour interval rule has been exceeded!

137. Absolute orientation Horizontal equivalent of overlap model misfit In model 1-2 point 121 has ground X and Y residuals of -0.23 and +0.27 ft. In model 2-3 point 121 has ground X and Y residuals of -0.02 and -0.31 ft. Horizontal misfit across stereomodels is the distance based on the Pythagorean theorem: Sqrt[(-0.02 - (-0.23))² + (-0.31 - (+0.27))²] = 0.62 ft. Thus the difference in position between models is significantly more than the individual residuals.

138. Absolute Orientation If at least 2 X,Y,Z and 1 Z-only control points exist in a stereomodel, absolute orientation can be performed without prior aerotriangulation (the control in the model has to satisfy the 3-D conformal transformation). This is called a “full fielded” stereomodel.

139. Stereoplotter Orientation without Aerotriangulation (1) Inner Orientation – measure fiducials to convert comparator coordinates to photo x,y coor. (2) Relative Orientation – measure at least 6 common identifiable points across the entire stereomodel – produces model x,y,z coordinates (3) Absolute Orientation – measure control points – 3-D conformal of model x,y,z to ground X,Y,Z Control points can be measured as part of (2) combining (2) and (3) into what is called Exterior Orientation

140. What happens in map collection A point on the right photo can only be measured along an epipolar line given a left photo x,y (the original parallax concept) Comparator coordinates convert to photo coordinates. Left and right photo coordinates convert to model coordinates. Model coordinates convert to ground coordinates via the resolved orientations.

141. Vector (Map) compilation Vector data (points, lines, and text) is one form of map product derived from photogrammetry. A user digitizes vector information while viewing in stereo, recognizing the feature code/attribute of each image. Points (manholes, trees, light poles, power poles, hydrants, etc.) are point feature codes represented by a user-defined symbology at a user-defined scale. In computer-aided drafting, points normally are stored in a layer/level associated with the feature code name, and the symbol is a defined block/cell.

142. Vector (Map) compilation Lines (centerlines, pavement edges, curbs, sidewalks, power lines, etc.) are line feature codes that are associated with symbology (color, line width, line style, straight/curve, etc.). Lines are segregated in computer-aided drafting by a feature code being assigned to a layer/level.

143. Vector (Map) compilation Tricks in software to enhance mapping (1) close a line – the obvious use is buildings – after digitizing the last point, automatically close the building (back to the first point) (2) make angles 90 degrees – for certain features (buildings are a great example), if a corner is within a user-defined angle of 90 degrees, make it a 90 degree angle (3) parallel line offset – great on roadways for centerline, pavement edge, curbs, sidewalks, ditches, etc. that are parallel – should include an option for a vertical offset (such as on curbs). This can include how certain features (sidewalks) intersect other features (driveways).

144. Vector (Map) compilation Tricks in software to enhance mapping (4) Extend undershoots and trim overshoots – This is also usually associated with how certain feature codes interact with other feature codes Example – a driveway should intersect the edge of a building but when digitizing the driveway will be short or past it. Automated software can fix these while collecting the data

145. Vector (Map) compilation Tricks in software to enhance mapping (5) Connect end points near each other - snap (near defined by user defined distance input) Example – Two sidewalk edges were digitized but connect at a common point collected on two different lines – If these two endpoints are within the user defined tolerance it should be snapped together Many line joins occur when features continue across distinct stereomodels or flight lines

146. The other superimposition The display of vector information superimposed on the raster stereomodel is the perfect way to see if all information has been digitized Superimposition is also used in map updating – a new flight is viewed with an old digital map superimposed on it. Changes due to construction, etc. can be seen and updates made to the existing digital map information Prior to computerization the display of vector information overlaid on raster information was very impractical.

147. The traditional work flow (1) large format calibrated aerial cameras with a specific mount in an airplane Today this could be a digital camera. Today a film-based or digital camera could also have GPS-IMU in the airplane to solve for exposure station position and orientation.

148. The traditional work flow (2) processing of the film into one-to-one images on film or glass diapositives specifically designed to minimize distortion during production and due to temperature, pressure, and humidity changes Today film imagery uses a high precision scanner to convert to a digital format. Images from a digital camera are already in a computer format (usually MrSID, JPEG, TIFF, etc.)

149. The traditional work flow (3) a realistic amount of ground control which is either targeted or photo identifiable Control accuracy requirements are a function of flying height and desired product accuracy Ground control requirements can be minimized in higher altitude lower accuracy jobs by airborne GPS-IMU

150. The traditional work flow (4) a measurement and ensuing least squares analysis process called aerotriangulation which validates the ground control and densifies it to a suitable point for use in stereoplotter orientation and map compilation, Small jobs may be “full fielded” with control and aerotriangulation can be by-passed GPS-IMU may eliminate the need for aerotriangulation in lower accuracy jobs

151. The traditional work flow (5) stereoplotter orientation based on the densified ground control which resolves the relation of the photos to each other and the ground at the time of exposures and provides a check on the quality of the aerotriangulation This is completely automated if aerotriangulation and/or GPS-IMU exists. This can be used to estimate horizontal and vertical misclosures between adjacent stereomodels and across flight lines.

152. The traditional work flow (6) compilation of the desired features which could include planimetric features (line and point symbology), contours, cross sections, profiles, break lines, spot elevations, and text information It is simply a feature code based collection approach similar to topographic survey collection Many map clean-up functions (connecting edges of pavement across stereomodels, driveways intersecting buildings, etc.) are automated during the collection process

153. The traditional work flow (7) clean-up of compiled information to make it topologically pleasing (an edge of driveway should not extend past the outline of a house), The automatic process obviously will not resolve all map editing Text, feature code tables, etc. can also be added during this process

154. The traditional work flow (8) addition of field survey data where the data could not be collected photogrammetrically due to obstructions, cover, shadow, etc., A photogrammetrist can only map what one can see. If underground utilities are required they are added in this process.

155. The traditional work flow (9) production of final products which could include translation to other digital map formats and/or hard copy output Typical output formats are AutoCAD drawings (.dwg, .dxf), MicroStation drawings (.dgn), and ESRI shape files (.shp). Today digital image products such as orthophotos are another standard product (to be discussed in soft copy photogrammetry).

156. The traditional work flow In translation to the final output format, software usually allows user definition of product feature code, color, symbology, layer/level, line style, line type, etc. Example – during collection buildings may be collected with feature code BLDG, color red, standard line type, and be converted to feature code BDG, layer BUILDING, color green, and dashed lines.

157. Evolution of stereoplotters An instrument designed to measure on photos, view in stereo, orient photos, and create maps (plot) (1) Analog stereoplotters – example Kelsh – dials and large twist mechanisms for angular orientation, scale and elevation by raising/lowering the bar with photos or the table, no photocoordinate measurement, map compilation direct with the image shining on paper laid on a large table – 100% manual

158. Evolution of stereoplotters (2) 2nd generation analog – Kern PG-2 as example – all dials had numerical readouts so orientation parameters could be viewed and manually recorded enabling quicker repeat of orientation; defined oculars with magnification while viewing in stereo; manual photocoordinate readout; map compilation offset from viewing system connected with bars and cables

159. Evolution of stereoplotters (3) semi-analytical – Kern PG-2 with encoding. Encoding means the dials were connected with wires to computers that could read the information. First digital map compilation; all orientation was still manual dialing, but the computer could provide suggestions for dial readouts.

160. Evolution of stereoplotters (4) analytical – Kern DSR-1 is an example No dials, all orientation is mathematical Two precise monocomparators control photocoordinate measurement Once oriented, only inner orientation is required when a photo is placed on the comparator again (in a slightly different place) Digital mapping only, superimposition in the oculars the image is viewed through, limited zoom/magnification Still all manual measurement

161. Evolution of stereoplotters (5) Digital stereoplotters. No film, only scanned image viewing on a computer screen. Infinite zoom (to the pixel level). Computer screen superimposition. A computer and viewing glasses are the only hardware, so no dedicated unique hardware. Image enhancement tools. Computer software can attempt to make some measurements via pixel matching/image identification. No need to ever re-orient a stereomodel, as all parameters are digitally stored for a scanned image.

162. Historical Elevation Collection in Photogrammetry Before modern computerization the elevation product was in the form of contours, profiles, and cross sections. In some cases volume determination was made by a grid of collected elevations, but this is really no different than a series of cross sections. Profiles and cross sections were simply measurements of elevation (viewing in stereo and measuring with the floating mark). If a desired point could not be collected due to trees or shadows, you either skipped that point or collected a point as close to it as possible and assumed it was at the same elevation. One would assume contours would be interpolated from spot elevation collection, but instead…

163. Historical Elevation Collection in Photogrammetry Stereoplotter operators attempted to “trace” a contour by visualizing the floating mark touching the ground at a series of points of the same elevation. This procedure works great along a bank, but not in a flat area. In a flat area, though, it does not matter where the contour is shown, as it is within a reasonable tolerance anywhere on that flat surface. Drafters would trace over the stereoplotter-drawn contour, which was usually jagged (from trying to follow a line of elevation visually), to smooth it out to cartographic standards.

164. Modern Elevation Collection in Photogrammetry Manual elevation collection in photogrammetry today involves collection of points and break lines to build a 3-D model from which any contours, profiles, cross sections, volumes, etc. can be extracted. Certain feature codes in mapping lend themselves to being elevation points (ground shot, manhole, drain, etc.) and break lines (curbs, pavement edges, sidewalk edges, ditches, etc.). Certain feature codes in mapping are generally not elevation points (power poles, light poles, etc.) and not break lines (fences, buildings, power lines, etc.). Thus elevation collection can be mostly defined by the feature coding process of topographic mapping.

165. Digital Terrain Models Based on the concept of Delaunay triangulation: connect nearest points to each other to create triangles that never cross each other and are as close to equilateral as possible. On the edges of data sets it is impossible to create near-equilateral triangles.

166. Digital Terrain Models A triangle is the smallest form of 3-D surface. Along the triangle edges, and across the inside, elevation change is linear between end points. This creates a plane in 3-D (not flat in elevation) for each triangle. Connecting triangles together builds a 3-D surface.

167. Digital Terrain Models Contours can be interpolated linearly along triangle edges; if a contour enters a triangle it must exit on one of the other two sides. Profiles and cross sections are calculated, again by linear interpolation, wherever the line crosses a triangle edge. Volumes can be calculated when two 3-D surfaces exist, as the space between them is a volume. A short sketch follows.
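A minimal sketch of these ideas, assuming SciPy is available; the spot elevations are made up. Delaunay builds the triangles, and linear interpolation inside each triangle is exactly the planar-facet model described above:

    import numpy as np
    from scipy.spatial import Delaunay
    from scipy.interpolate import LinearNDInterpolator

    # Made-up spot elevations: (X, Y) in feet and Z in feet.
    xy = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 50]], dtype=float)
    z = np.array([10.0, 12.0, 11.0, 15.0, 13.0])

    tin = Delaunay(xy)                     # triangles as near equilateral as the data allow
    surface = LinearNDInterpolator(xy, z)  # planar (linear) interpolation within each triangle

    print(tin.simplices)                   # vertex indices of each triangle
    print(surface(25.0, 25.0))             # elevation interpolated on the containing triangle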

168. Digital Terrain Models Break lines represent an abrupt change in elevation such as the top and bottom of curbs or ditches. The break line is the chain along the top or bottom of these features. Delaunay triangles cannot cross a break line, creating the change in grade perpendicular to the break line. Note break lines prevent near-equilateral triangles from being created, but they represent human alteration of a smooth ground surface. All other described principles still apply when break lines exist. Collection of break lines often minimizes the need for significant amounts of spot elevations.

169. Airborne GPS-IMU Obviously static GPS will not satisfy the needs of positioning a camera in an aircraft. Kinematic GPS means each epoch is its own unique solution; kinematic GPS utilizes a base and rover to produce vectors from the base to the rover. If the base has known coordinates, the GPS vector to the rover enables calculation of each rover position.

170. Airborne GPS-IMU Historically (early 1990s) the rover unit had to be set up on a known point before the flight mission started. The mid-1990s saw the development of OTF – On-The-Fly ambiguity resolution – the vector could be resolved to survey accuracy without rover occupation of a known point, and in theory the ambiguities can be resolved without the rover ever being static.

171. Airborne GPS-IMU Could RTK be used instead of post-processed kinematic? Radio-based RTK (claims of 6 miles) would never support the base-to-rover distances. Cell coverage for RTK corrections might work, but remember on commercial airlines your phone has to be off while in flight. RTK data collection is really not set up for fast collection rates of massive amounts of points, as that is not how a ground survey is done.

172. Airborne GPS-IMU The goal in all precise GPS is to reach a fixed ambiguity solution – which means the number of wavelengths from the satellites to both base and rover has been exactly resolved. If the plane does not bank hard, and assuming there are no obstructions near the runway, the rover should never lose satellite lock and therefore maintains a fixed ambiguity solution. In reality, the airplane usually sits static prior to flight, and can return to static after a flight, making ambiguity resolution easier.

173. Airborne GPS-IMU The airborne GPS receiver and the camera are time synchronized – thus the GPS knows when an exposure is occurring. The GPS receiver is not at the exposure station; thus when an airplane is fitted with a GPS receiver and camera for the first time, the 3-D offset from the phase center of the GPS antenna to the nodal point of the lens must be surveyed.

174. Airborne GPS-IMU The exposure does not go off at an exact GPS epoch. Thus the historical approach was to interpolate between epochs to determine where the antenna was at the exact time of exposure, as sketched below. Today an IMU can measure thousands of coordinate differences between GPS epochs, and it can then be used to better calculate the antenna position at the time of exposure. The IMU (very precise gyroscopes) is also used to very precisely measure the camera angles at the times of exposure.
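A minimal sketch of the historical epoch interpolation in Python; the epoch times and coordinates are illustrative only:

    import numpy as np

    # Bracketing 1-second GPS epochs (seconds of week) and antenna positions (ft).
    t0, t1 = 405600.0, 405601.0
    p0 = np.array([1200.5, 2450.2, 1580.7])
    p1 = np.array([1275.1, 2452.9, 1580.4])

    t_exp = 405600.37                      # exposure time from camera/GPS synchronization
    frac = (t_exp - t0) / (t1 - t0)
    antenna_at_exposure = p0 + frac * (p1 - p0)
    print(antenna_at_exposure)             # position the IMU would further refine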

175. Airborne GPS-IMU Today CORS stations are routinely and successfully used as base stations in airborne GPS processing. This eliminates the need to set a base near the job site in an open area. The major improvement in kinematic GPS processing in recent years has been the ability to process longer base-to-rover baselines, enabling use of CORS stations as bases.

176. Airborne GPS-IMU Single frequency receivers have been successfully used on the aircraft as long as satellite lock is not lost. Single frequency takes longer to gain or re-gain an ambiguity solution, but waiting for clearance on a runway is usually long enough (10 minutes) for a single frequency receiver to resolve ambiguities.

177. Airborne GPS-IMU The future in airborne GPS processing is Precise Point Positioning (PPP). No base stations, thus no vectors are calculated. Instead a precise satellite ephemeris is used, along with new point positioning mathematics, to resolve ambiguities. This processing is free and currently available from the Canadian equivalent of the National Geodetic Survey (Natural Resources Canada’s CSRS-PPP service).

178. Airborne GPS-IMU calibration Called a boresight calibration – it is simply a dense series of precisely coordinated targets in a local area. A traditional aerotriangulation solves for the exposure unknowns, which can be compared to what is derived from airborne GPS-IMU. The IMU can drift systematically over long periods of time; the boresight resolves this systematic error in the calibration process.

179. Coordinate systems and systematic error correction The most common forms of “ground” coordinate system are Universal Transverse Mercator and state plane projection systems. Elevation is not perpendicular to a map projection; therefore a correction for earth curvature exists in all photogrammetric computations. This works because the size of the earth is well defined by an ellipsoid model. Since light bends as it passes through the atmosphere, a correction for atmospheric refraction also needs to exist in all photogrammetric computations. While the atmosphere varies with the weather, this correction tends to be very stable for modeling the bending of light.

180. Coordinate systems and systematic error correction For many purposes the corrections for earth curvature and atmospheric refraction can be considered insignificant, but since we are using computers, correcting for them is very simple. Over long distances a better vertical datum in photogrammetry would be ellipsoid height, as it relates directly to GPS; the conversion to elevation/orthometric height would use a geoid model.

181. Coordinate systems and systematic error correction Mathematically, photogrammetry is a true Cartesian (right angles in all three directions) coordinate system. In surveying, elevation is not perpendicular to the map projection, but we correct for that with an earth curvature equation. The “best” coordinate system would be a local Cartesian X,Y,Z. This could be calculated directly from GPS-derived geocentric X,Y,Z coordinates (GPS’ native coordinate system) using a 3-D conformal transformation with scale fixed at 1.00. Usually the local Cartesian system is not justified, as the systematic error corrections for earth curvature and atmospheric refraction do not degrade the desired accuracies.

182. LIDAR An alternative to traditional photogrammetric elevation collection. LIDAR stands for LIght Detection And Ranging and is often called laser scanning. LIDAR can be static collection on the ground, mobile on a ground vehicle, or collected from an airplane. In airborne LIDAR applications, GPS-IMU is used to determine the location and orientation of the LIDAR unit. LIDAR can produce the same accuracies as photogrammetry.

183. LIDAR With the GPS-IMU determining sensor position and orientation, the LIDAR measures a distance via reflection, which enables computation of the 3-D position of the point the pulse reflected from (see the sketch below). Modern LIDAR can measure tens of thousands of points per second. LIDAR also measures the intensity of the returned light, which enables creation of an image in addition to spot elevations.
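A minimal sketch of that computation; the attitude matrix, pointing vector, and numbers are illustrative (a level sensor firing straight down), not a full sensor model:

    import numpy as np

    sensor_pos = np.array([500000.0, 4000000.0, 1500.0])  # GPS-IMU derived X, Y, Z
    R = np.eye(3)                          # sensor-to-ground rotation from IMU (identity = level)
    pointing = np.array([0.0, 0.0, -1.0])  # unit vector of the laser pulse (straight down)
    laser_range = 1450.0                   # measured distance to the reflection

    ground_point = sensor_pos + laser_range * (R @ pointing)
    print(ground_point)                    # -> [500000, 4000000, 50] for this geometry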

184. LIDAR DEM means digital elevation model, which is a DTM usually assumed to be in a regular grid. Modern LIDAR can receive multiple elevations derived from multiple “returns”. First-return DEMs: these are the first returned values, which measure the elevation of the tops of buildings, canopy, and other obstructing surfaces. Bare-earth DEMs: these are derived from later returns at lower elevations. The last returned value is assumed to be the ground, though in some cases this is not true, and such data needs to be removed from the ground information.

185. LIDAR Comparison of nearest-neighbor data can also eliminate non-ground points, as in the sketch below. Example – two points are neighbors (hundredths of a foot apart horizontally) yet differ in elevation by 10 ft. The higher elevation is definitely canopy, a building, etc.
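A minimal sketch of that neighbor test, assuming SciPy is available; the points and the 10 ft threshold follow the example above:

    import numpy as np
    from scipy.spatial import cKDTree

    # X, Y, Z returns; the second point is ~0.02 ft from the first but 10 ft higher.
    pts = np.array([[10.00, 10.00, 101.2],
                    [10.02, 10.01, 111.5],
                    [25.00, 25.00, 101.4]])

    tree = cKDTree(pts[:, :2])             # index on horizontal position only
    non_ground = set()
    for i, j in tree.query_pairs(r=0.1):   # neighbor pairs within 0.1 ft horizontally
        if abs(pts[i, 2] - pts[j, 2]) > 10.0:
            non_ground.add(i if pts[i, 2] > pts[j, 2] else j)  # flag the higher return
    print(non_ground)                      # -> {1}: canopy/building, not ground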

186. LIDAR LIDAR does not collect break lines, but with data this dense it can be argued that break lines are not necessary. LIDAR does not feature code; a user has to select features (as with stereoviewing of photographs) and assign feature codes to the information.

187. Soft copy (digital) photogrammetry measurement enhancements Infinite zoom does make it possible to make more precise manual measurements in some cases, but a digital photo is a series of pixels whose gray tone or color characteristics permit some automated measurement. (1) Find a known image pattern – the classic example is a fiducial mark. It has a consistent appearance in a very defined region of the photo. Thus pattern recognition enables automated measurement of fiducial marks, hence inner orientation can be automated (see the sketch below).
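A minimal sketch of pattern recognition by normalized cross-correlation, in plain NumPy; the tiny cross-shaped “fiducial” and search window are fabricated for illustration:

    import numpy as np

    def ncc_match(window, template):
        """Return (row, col) where the template best correlates with the window."""
        th, tw = template.shape
        t = (template - template.mean()) / template.std()
        best, best_rc = -2.0, (0, 0)
        for r in range(window.shape[0] - th + 1):
            for c in range(window.shape[1] - tw + 1):
                patch = window[r:r+th, c:c+tw]
                p = (patch - patch.mean()) / (patch.std() + 1e-12)
                score = (p * t).mean()     # 1.0 = perfect normalized match
                if score > best:
                    best, best_rc = score, (r, c)
        return best_rc

    template = np.array([[0, 255, 0],
                         [255, 255, 255],
                         [0, 255, 0]], dtype=float)   # cross-shaped mark
    window = np.zeros((20, 20))
    window[12:15, 8:11] = template         # plant the mark at row 12, col 8
    print(ncc_match(window, template))     # -> (12, 8)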

188. Soft copy (digital) photogrammetry measurement enhancements (2) Image matching in overlapping photos. Through gray tone or color matching of a series of pixels on two overlapping photographs, it is possible for computer software to “match” pixels in the same way an operator matches them in stereo by placing the floating mark on the ground. If a point is in multiple stereomodels or flight lines, this matching process can continue. Example – software could identify, by image matching, the same ground point as six individual pixels on six photos (overlapping stereomodels and flight lines).

189. Soft copy (digital) photogrammetry measurement enhancements At first it appears matching common imagery in two overlapping photos requires comparison of many pixels. But using the properties of a stereomodel: (1) Once relative orientation is complete, or GPS-IMU is used for measurement of orientation, the corresponding epipolar line defines a line of pixels, parallel to the flight line, that an image match must lie on. This eliminates all but a line of pixels for a match. Resampling is a process where the pixels are reoriented so that a viewer in stereo sees parallax change along an epipolar line parallel with the viewing direction. This keeps the pixels from having to shift up or down to adjust for the epipolar line not being exactly parallel with the photocoordinate x axis.

190. Soft copy (digital) photogrammetry measurement enhancements Continuing with properties of a stereomodel: (2) The matching pixel can only be within a fraction of the pixels along that epipolar line, because one knows the approximate overlap, and the parallax change is limited in magnitude because changes in elevation are limited relative to the flying height. Example – for a 1200 ft flying height one would not expect a 600 ft elevation change in a stereomodel. Unless very large buildings or very mountainous terrain exist, the possible elevation change across a stereomodel is probably less than 100 ft.

191. Soft copy (digital) photogrammetry measurement enhancements Thus limited parallax change limits how many pixels need to be searched along an epipolar line for an image match (see the sketch below). (3) Once a match is made, that pixel cannot be used for matching to that overlapping photo anymore. Thus the more successful matches that have been made, the smaller the remaining search range, and pixels already matched can be removed from the search.
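A minimal sketch of how small that search gets, using the standard parallax relation p = B*f/(H - h) with the slide's numbers (9 in. format, 6 in. focal length, 1200 ft flying height, 60% endlap) and an assumed 30 micron pixel:

    scale_ft_per_in = 1200.0 / 6.0             # photo scale H/f: 200 ft per in.
    air_base_ft = 0.4 * 9.0 * scale_ft_per_in  # 40% advance of a 9 in. photo = 720 ft

    def parallax_in(h_ft, B=air_base_ft, f=6.0, H=1200.0):
        return B * f / (H - h_ft)              # x-parallax in photo inches

    dp = parallax_in(100.0) - parallax_in(0.0) # parallax change over 100 ft of relief
    pixels = dp * 25400.0 / 30.0               # inches -> 30-micron pixels
    print(round(pixels))                       # ~277: only a few hundred pixels to search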

192. Soft copy (digital) photogrammetry measurement enhancements When does image matching fail? (1) When shadows or a changing sun angle change the gray tone or color of the same point on overlapping photos. (2) When all pixels in a given area have the same gray tone or color – a newly paved road parallel to the flight line will have lots of pixels that look alike to a computer program.

193. Soft copy (digital) photogrammetry measurement enhancements So where is automated image matching used? (1) Aerotriangulation – pass points can now be totally computer measured. The algorithm can be extremely picky in matching because traditional AT used only 6 points per stereomodel. Since image matching is usually quite successful even with picky analysis, literally hundreds of AT pass points can now exist in overlapping photos because the measurement is automated. Note ground control points are still manually measured, as they require human identification. Since the automated measurement process creates so many pass points beyond what is needed, it is possible to automatically run the bundle adjustment multiple times, thinning out the highest photocoordinate residuals.

194. Soft copy (digital) photogrammetry measurement enhancements So where is automated image matching used? (2) “Auto” DTM collection – a little less picky in image matching than AT – every image match generates an X,Y,Z based on collinearity and/or absolute orientation. Instead of an operator manually collecting a DTM in stereo, the computer attempts this process automatically.

195. Soft copy (digital) photogrammetry measurement enhancements Limits to “Auto” DTM: no break lines are collected, and, like LIDAR, points not on the ground are collected, as the computer program cannot discriminate between buildings, trees, and ground. Engineering design DTMs are thus usually collected manually, as they include only ground points and break lines. “Auto” DTM is better for the elevation correction in orthophotography, discussed next.

196. Soft copy (digital) photogrammetry measurement enhancements How accurately can a digital image be measured? Current automated image measuring/processing can measure to ¼ of a pixel. A pixel is the smallest image unit of the scanned image, or the smallest image unit resulting from a digital camera. Example – a 30 micron pixel size will usually result in 30/4 = 7.5 micron automated measuring ability.

197. Image based products pre-digital photogrammetry (1) Rectified photo – a diapositive was projected onto a plot of control points; the diapositive was moved inward and outward and tilted relative to the plot of control points until a visual best fit was achieved. This is an eyeball approach to scale and tip, tilt, and azimuth correction. The projected (corrected) image was then photographed.

198. Image based products pre-digital photogrammetry (2) Orthophotography – note: looking down at all points instead of a perspective view. A stereoplotter is fitted with a camera that takes only a small, narrow picture straight down of the image. An operator manually corrects for elevation differences using the floating mark. A narrow picture is taken straight down, and the series of narrow pictures is seamed together during the photography process (multiple exposures) to create an orthophoto.

199. Image based products pre-digital photogrammetry (3) Mosaics – the seaming together of multiple raw photos, rectified photos, and/or orthophotos. Prints that overlap are sliced with a razor blade along a desired common image line and then pasted together. Touch-up paint along the pasted seams is used to cover up the paste and to make images from multiple photos match up better in gray tone. Thus an entire job, consisting of many photos, can be turned into one mosaic.

200. Image based digital products Delivering an image (raster) product instead of a vector product. (1) Raw scanned photo, or photo directly from an aerial digital camera – similar to a photo from a hand held camera except usually with far more pixels/storage – this has use for pictorial purposes only.

201. Image based digital products (2) Scaled, rotated, and translated image to fit ground control – a 2-D conformal or 2-D projective transformation (8 unknowns relating 2-D systems that are not parallel to each other). This approach ignores elevation information, and thus any elevation differences in the image will produce error due to unaccounted-for relief displacement.

202. Image based digital products (3) Rectified photo – one average elevation is used in projecting the image to a defined projection such as state plane or UTM. Departures from the defined average elevation will create error in the image due to relief displacement.

203. Image based digital products (4) Orthophotography – an elevation model (DEM, DTM, etc.) provides an independent elevation for each pixel in projecting it to the defined map projection. The quality, or lack of quality, of the elevation model defines the quality of the correction of each pixel to its datum position.

204. Image based digital products How are elevations used to correct pixels to their projection location? Recall collinearity: x_a = f(omega_L, phi_L, kappa_L, X_L, Y_L, Z_L, X_A, Y_A, Z_A) and y_a = g(omega_L, phi_L, kappa_L, X_L, Y_L, Z_L, X_A, Y_A, Z_A). The exposure unknowns (attitude angles omega, phi, kappa and position X_L, Y_L, Z_L) are derived from airborne GPS-IMU or aerotriangulation; the ground X_A, Y_A, Z_A is what the DEM/DTM is composed of.

205. Correcting for elevation The right-hand side of collinearity is thus defined, and the corresponding photocoordinates for a given ground X,Y,Z can be calculated. If a ground X,Y,Z produces a photo x,y that is not within the photo's image format, that point is not on this particular photo. A sketch of this projection follows.
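A minimal sketch of that projection, using the standard collinearity equations with a sequential omega-phi-kappa rotation; the geometry (a vertical photo, 6 in. focal length, a point 200 ft from nadir at a 1200 ft flying height) is illustrative:

    import numpy as np

    def rotation(omega, phi, kappa):
        """Sequential rotation matrix M from omega, phi, kappa (radians)."""
        co, so = np.cos(omega), np.sin(omega)
        cp, sp = np.cos(phi), np.sin(phi)
        ck, sk = np.cos(kappa), np.sin(kappa)
        Mo = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])
        Mp = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
        Mk = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
        return Mk @ Mp @ Mo

    def collinearity(ground, exposure, omega, phi, kappa, f):
        """Photo x, y of a ground point, given the exposure station and attitude."""
        d = rotation(omega, phi, kappa) @ (np.asarray(ground) - np.asarray(exposure))
        return -f * d[0] / d[2], -f * d[1] / d[2]

    # Vertical photo: point 200 ft east of nadir, flying height 1200 ft, f = 6 in.
    print(collinearity([200.0, 0.0, 0.0], [0.0, 0.0, 1200.0], 0, 0, 0, 6.0))  # x = 1.0 in., y = 0.0

Substituting a ground Z of zero into the same function, as the next slides describe, recomputes the orthographic position.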

206. Correcting for elevation Where is that point’s correct projection position in terms of photo x,y? A projection is at a defined elevation or ellipsoid height (the latter is more correct but not as well understood by the general population). Most projections are defined at an elevation of zero! Thus a ground Z of zero is substituted for the point’s actual ground Z and the photocoordinates are recalculated by collinearity.

207. Correcting for elevation The point’s correct orthographic projection location has thus been determined based on the ground Z of zero. The image point has moved from a photocoordinate based on the DEM/DTM elevation (the perspective location) to where it would be if it could be viewed at an elevation of zero (on the projection – the orthographic location).

208. Correcting for elevation Vertical objects are a small problem in orthophotography. Example – a power pole or building edge occupies 5 pixels on a raw photo. All 5 pixels need to be projected to the same location, as all are at the same projection X,Y (vertical pole/building edge assumed). 5 pixels projecting to one pixel leaves no image for 4 pixels in the resulting orthophoto. This is because the image behind the vertical object could not be viewed; it was blocked by the perspective nature of the vertical object.

209. Correcting for elevation For better or worse, a standard DEM used for orthophoto production is the free USGS DEM available for download on the Internet. Most of these were converted by interpolating from contours on the original USGS quadrangle maps, which were mostly compiled in the 1940s-1950s from flying heights of 25,000 ft using Kelsh stereoplotters, including areas with lots of tree coverage. USGS DEMs thus can have significant error in them that can only be validated through ground truthing. Note error in ground X,Y also degrades orthophoto accuracy, as the DEM will associate an incorrect X,Y with a raw pixel.

210. Correcting for elevation The best DEM for orthophoto production is the “Auto” DTM produced from automated image matching of the actual photography being converted to orthographic. But even that will have image matching errors due to the problems discussed previously (shadows, no change in gray tone, etc.). Note an engineering design DTM is ground only, while an image contains buildings, trees, etc. that need proper elevation correction; the “Auto” DTM has this information while a ground-only engineering design DTM does not.

211. Mosaic Since each rectified photo or orthophoto now has a ground X,Y,Z associated with each photo x,y, it is simple, by coordinate comparison, to merge individual photos into one composite mosaic. Software attempts to use as much of the centers of photos as possible, as this minimizes relief displacement. One mosaic thus can often be called one orthophoto for the entire job. In very large jobs multiple mosaics are created and called orthophotos, simply because one file could become unmanageable due to file size. The MrSID file format has become a standard in photogrammetry for digital images because it is smaller in file size than TIFFs or bitmaps. Free software is available on the Internet to view MrSID images.

212. Why are aerial digital cameras so great? Scanners of photogrammetric accuracy can cost from $50,000 to $200,000. Time is involved in collecting film based photography, processing the film, and then scanning the diapositives. Aerial digital cameras produce digital images and remove all film processing and scanning. No film distortions, including shrinkage and expansion, exist in an image from a digital camera.

213. Automation via digital photogrammetry (1) A digital camera collects the data, and GPS-IMU is used in conjunction with the digital camera. (2) The digital files are downloaded. GPS and IMU can be processed relative to raw data at existing CORS stations, so no ground based GPS needs to exist. (3) Automated image matching occurs for both AT and Auto DTM. (4) AT can occur if the GPS-IMU results are deemed not accurate enough. (5) Auto DTM enables DTMs to be built automatically for the coverage area. Automated orthophoto production can occur, and the produced orthophotos can be automatically seamed together to produce one overall orthophoto mosaic, which is copied to a DVD and delivered to the client.

214. Mosaic The best seams for merging photos are in grassy, water, or similarly paved areas where the same gray tone exists. Software can easily change gray tone or color hues along a seam line to make images from multiple photos match up better – it is almost impossible to find seams in modern digital mosaics!

215. The future A digital aerial camera solves all film based problems. A fast computer in the airplane will process the GPS-IMU in near real time. The GPS-IMU will satisfy accuracy requirements without aerotriangulation. The fast computer in the airplane will perform Auto DTM, create orthophotos, and mosaic them, creating seamless data. The complete orthophoto will be copied to a DVD and be deliverable when the plane lands. Vendors are saying the future is close to now!

216. Example aerial digital cameras Note the design of digital aerial cameras creates a different format dimension in x (along the flight line) vs. y. Most metric aerial digital cameras until recently had shorter focal lengths than the standard 6 in. of film based metric cameras.

217. Example aerial digital cameras RolleiMetric AIC – resolution 16-39 megapixels; image size 36x36 mm or 38x48 mm; focal lengths 40, 50, 80, 120, or 150 mm; shutter speed up to 1/1000 sec; from calibration: 5440 pixels in x, 4080 pixels in y.

218. Example aerial digital cameras Intergraph/Zeiss DMC II – 6846x6096 pixels, 42 megapixels, 7.2 um pixel, 45 mm focal length. Pan camera is 17216x14656 pixels, 250 megapixels, 5.6 um pixel, 112 mm focal length.

219. Example aerial digital cameras Intergraph/Zeiss RMK D – 45 mm focal length; 7.2 um pixel; sensor size 5760x6400 pixels (41.47x46.08 mm); 42 megapixels.

220. Example aerial digital cameras NexVue (Speck Systems) – 4906x3678 pixels, 18 megapixels, 9 um, 50 mm focal length; 6496x4872 pixels, 31.6 megapixels, 6.8 um, 50 mm focal length; 7216x5412 pixels, 39 megapixels, 6.8 um, 50 mm focal length.

221. Example aerial digital cameras UltraCamXp by Vexcel – 11310 pixels in x, 17310 pixels in y; 195 megapixels; 6 um CCD sensor; 100 mm focal length; advertises 1 um rms AT residuals. UltraCamL is 9735x6588 pixels. Panchromatic and multi-spectral are separate (9 pan and 4 color arrays).

222. Example aerial digital cameras Leica ADS40 2nd generation – 12000 pixel wide swath; simultaneous panchromatic, color, and color-infrared – all multi-spectral bands.

223. Example aerial digital cameras Z/I Imaging DMC (Digital Mapping Camera) – frame sensors rigidly mounted; 4 high resolution panchromatic camera heads and 4 multi-spectral heads; can handle diverse light and has Forward Motion Compensation (FMC).

224. Example Digital Metric Cameras DiMAC – large format frame; 10,500 pixel wide swath; 4 simultaneous images possible, true color and infrared; has FMC.

225. Example aerial digital cameras Line scan cameras have to be used in combination with direct sensor orientation (a combination of GPS with an inertial system) to enable correct geo-referencing. They image the flown area continuously. The sampling rate determines the possible object pixel size in the flight direction. The ADS40 and the JAS150S have a maximal sampling rate of 800 lines/sec, the ADS80 is in the range of 1000 lines/sec, while the 3DAS-1 ranges from 250 up to 750 lines/sec. This limits the smallest object pixel size in the flight direction to approximately 8 cm at the low flight speed of 250 km/h (Table 2).

226. Example aerial digital cameras Table 2 – line scan camera sampling rates and object pixel sizes (table image not reproduced in this transcript).

227. # photos/ flight line One part of price estimation is calculating the number of photos in a project. Let's assume a 9x9 in. format, 6 in. focal length, and 60% endlap (overlap along the flight line). For stereo coverage, the 2nd photo's edge must extend back to the start of the project; the 2nd exposure must thus start within 50% of 9 in., converted to ground distance, of the project edge.

228. # photos/ flight line Since 60% overlap means a 40% advance, the 1st exposure would thus be 10% inward from the project edge. Flying is not an exact science, so it makes more sense to take the first exposure at the project edge. Usually the pilot takes 1-4 photos before and after the project edge just to be sure of coverage; photos not needed are simply not processed.

229. # photos/ flight line Starting the 1st exposure on the project edge, how many photos are required on a 3 mile flight line flown at a 1200 ft flying height? (3 mi * 5280 ft/mi) / (0.4 * 9 in * 1200 ft / 6 in) + 1 = 15840 ft / 720 ft + 1 = 22 + 1. One is added because the division computes the number of intervals between exposures, so adding one includes the 1st exposure. The answer is 23 photos.

230. # photos/ flight line If the computation results in a decimal number such as 27.863 photos, one has to round up to 28 photos, because 27 photos would not provide the desired coverage (see the sketch below). Note it was assumed the 1st and last exposures will, in a worst case, be on the project exterior rather than 10% inward.
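A minimal sketch of this computation; the defaults match the slides' assumptions (9 in. format, 6 in. focal length, 60% endlap, first exposure on the project edge):

    import math

    def photos_per_line(line_ft, H_ft, f_in=6.0, fmt_in=9.0, endlap=0.60):
        scale = H_ft / f_in                           # ft of ground per in. of photo
        advance_ft = (1.0 - endlap) * fmt_in * scale  # ground gained per exposure
        # Round intervals up for coverage, then add 1 to include the 1st exposure.
        return math.ceil(line_ft / advance_ft) + 1

    print(photos_per_line(3 * 5280, 1200))            # -> 23, matching the slide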

231. # flight lines/ project Typical sidelap (overlap across flight lines) is 15-30%; we will assume 20% for our computations. 20% sidelap means an 80% advance between flight lines. A typical “safety” factor is to fly the first flight line inside the project exterior by the sidelap percentage. Note that mathematically the first flight line could be flown 50% inside the project exterior.

232. # flight lines / project Assume the previous project is 2.5 miles wide (usually the flight lines run in the longer direction of the project) and at least 20% sidelap is desired. Assume the first and last flight lines must be within 20% of the format of the project exterior. Effective width = 5280*2.5 - 2*(0.2)*(9)*(1200/6) = 13200 - 720 = 12480 ft.

233. # flight lines / project Note the distance inside the project exterior is subtracted twice, for both the first and last flight lines. # flight lines = 12480 ft / (0.8 * 9 in * 1200 ft / 6 in) + 1 = 12480/1440 + 1 = 9.67. So ten flight lines are required, unless you want to “move” the first and last flight lines inward more than 20%.

234. # flight lines / project It is up to the photogrammetrist, due to rounding up, whether to stay at 20% sidelap and move the exterior flight lines outward, or to fly at more sidelap. Try changing the flying height and recomputing the # of photos/flight line and # of flight lines: fewer photos means less cost, but a higher flying height means less accuracy. The flight-line computation is sketched below.
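A minimal sketch of the flight-line computation under the same assumptions (20% sidelap, first and last lines inset 20% of the format from the project edges):

    import math

    def flight_lines(width_ft, H_ft, f_in=6.0, fmt_in=9.0, sidelap=0.20):
        scale = H_ft / f_in
        inset_ft = sidelap * fmt_in * scale           # first/last line offset from each edge
        advance_ft = (1.0 - sidelap) * fmt_in * scale
        return math.ceil((width_ft - 2 * inset_ft) / advance_ft) + 1

    print(flight_lines(2.5 * 5280, 1200))             # -> 10, matching the slides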

235. Remote Sensing Any methodology that studies the characteristics of objects from a remote point. More specifically, it is the extraction of information from imagery obtained by various sensors in aircraft and satellites. Satellite imagery allows one to monitor the entire earth on a regular basis.

236. Remote Sensing Remote sensing systems “see” over a broader range than the human eye, but that range can be converted to images we can analyze and see. (1) Different forms of film and filters (2) Multispectral scanners (MSS) (3) Radiometers (4) Side looking airborne radar (SLAR) (5) Passive microwave (among others)

237. Remote Sensing X-rays, visible light rays, and radio waves are energy variations of the electromagnetic spectrum. Energy is classified by wavelength. Visible light has wavelengths of 0.4-0.7 microns. Gamma rays, X-rays, and ultraviolet have smaller wavelengths than visible light; infrared, microwaves, and radio waves have larger wavelengths.

238. Remote Sensing Photographic emulsions can be sensitive to non-visible light. As an example, infrared can be captured and viewed on a photo. Infrared is used to study crop stress or to identify different flora types.

239. Remote Sensing Multispectral scanning (MSS) operates in the 0.3-14 micron range. MSS is categorized by bands, which are converted into electronic signals represented by digits. Each pixel receives an appropriate digit, and the digits can be used to identify imaged objects.

240. Remote Sensing Beware: most non-photographic images are not perspective photos. Advanced remote sensing texts describe the geometry of non-photographic images and their subsequent mathematical processing. LIDAR, as previously described, is a form of remote sensing.

241. Remote Sensing Evolution of the resolution of satellite remote sensing devices. Note resolution is the size of a ground pixel; positional accuracy will be larger than the resolution. First generation Landsat – 80 meters. Next – Landsat Thematic Mapper – 30 meters. Landsat 7 – Enhanced Thematic Mapper Plus (1999) – 15 meters. SPOT (France) – 10 meters.

242. Remote Sensing IKONOS from Space Imaging (1999) – the 1st commercial system – 1 m resolution – can view in stereo – individual houses, cars, and boats can be identified. GeoEye-1 – 0.5 m resolution; proposed GeoEye-2 – 0.25 m resolution. WorldView-2 – 0.5 m resolution. Proposed CartoSat-3 – 0.35 m resolution.

243. Remote Sensing Surveyors are responsible for the integration of remote sensing with other data types. Policies in resource management, land use, and land development integrate remote sensing with GIS activities.
