Most satellite images are shipped without locational information, or are located only by sensor position, which is usually not precise enough to combine them with other images or map data. To "geolocate" an image, we "rectify" it using one of several processes.
- Registration - colocating two images that may or may not be in proper coordinates.
- Georeferencing - puts the image data into geographic coordinates associated with a projection and datum. "Ground control points" (GCPs) are used to locate the same feature on both the map and the image (a code sketch follows this list).
- Orthorectification - both georeferences the data and corrects for distortion due to terrain elevation (layover). It requires a DEM and information about the camera or sensor, and the ground control points are 3D in this case.
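As an illustration of the georeferencing workflow, here is a minimal sketch using GDAL's Python bindings. The file names, the GCP coordinates, and the choice of EPSG:4326 are placeholders rather than anything from the text above: each GCP ties a pixel/line position in the image to a map coordinate, and the warp then produces a rectified copy.

```python
# Minimal georeferencing sketch with GDAL's Python bindings.
# File names, pixel/line positions, and map coordinates are placeholders.
from osgeo import gdal, osr

gdal.UseExceptions()

# Ground control points: map x, map y, elevation, image column (pixel), image row (line)
gcps = [
    gdal.GCP(-122.45, 37.80, 0, 100.0, 200.0),
    gdal.GCP(-122.40, 37.80, 0, 900.0, 210.0),
    gdal.GCP(-122.40, 37.75, 0, 910.0, 980.0),
    gdal.GCP(-122.45, 37.75, 0, 110.0, 990.0),
]

srs = osr.SpatialReference()
srs.ImportFromEPSG(4326)  # WGS84 geographic coordinates (projection + datum)

# Attach the GCPs to a copy of the raw image ...
gdal.Translate("with_gcps.tif", "raw_image.tif",
               GCPs=gcps, outputSRS=srs.ExportToWkt())

# ... then warp (rectify) it into the target coordinate system.
gdal.Warp("georeferenced.tif", "with_gcps.tif",
          dstSRS="EPSG:4326",
          polynomialOrder=1,   # first-order (affine) warping model
          resampleAlg="near")  # nearest neighbor resampling
```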
Rectification should be done AFTER the classification or other image analysis, because the process usually degrades the quality of the original image by resampling or otherwise merging values in cells.
The process has three basic steps.
- Finding control points
- warping the image using a polynomial transformation; the polynomial "order" determines how much distortion can be admitted into the warping model
- a "linear" model allows only shifting in x and y, rotation, or skew in scale in x and y
- a "polynomial" or "2nd order" transformation allows the scale to change with distance in x and /or y, plus the above
- a "cubic" transformation allows the above plus localized distortion due to the lens (not common in satellite image)
The Root Mean Square Error (RMSE) of the warping model is

$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\lVert x_i - y_i\rVert^2}$$

where N is the number of points, y_i is the location you picked, and x_i is the location predicted for that point using the other points you picked. The RMSE expresses the degree of error across all of the points. In general it is best to use the LOWEST order transformation that you can get away with that produces an RMSE < 1 pixel (a worked RMSE example and a resampling sketch follow this list).
- resampling the image to create new pixels in the geolocated reference space
- nearest neighbor selects the closest cell to the correct geographic location (computationally easy, but can result in shifts in linear features and edges)
- bilinear interpolates the value for the DN from the 4 cells closest to the output cell in x and y (extreme values are lost, but the image is "smoothed")
- cubic convolution uses an array of 4 cells in both x and y (a 4 × 4 neighborhood) fitted to a polynomial (tougher on your CPU)
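To make the RMSE definition above concrete, here is a small NumPy sketch with made-up coordinates. It fits a first-order (affine) model by least squares and computes a leave-one-out RMSE, matching the definition above in which each point is predicted from a model fit to the other points you picked.

```python
# Fit a first-order (affine) warping model to control points and compute the
# leave-one-out RMSE described above. All coordinates are made-up examples.
import numpy as np

# Map coordinates (x, y) you picked for each control feature ...
world = np.array([[-122.45, 37.80], [-122.40, 37.80], [-122.40, 37.75],
                  [-122.45, 37.75], [-122.425, 37.773]], float)
# ... and the image coordinates (column, row) of the same features.
img = np.array([[100, 200], [900, 210], [910, 980], [110, 990], [500, 600]], float)

def fit_affine(src, dst):
    """Least-squares fit of dst = a*src_x + b*src_y + c (separately per output axis)."""
    A = np.column_stack([src, np.ones(len(src))])     # design matrix [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)  # shape (3, 2)
    return coeffs

def predict(coeffs, src):
    return np.column_stack([src, np.ones(len(src))]) @ coeffs

# Leave-one-out residuals: predict each point from a model fit to the other points.
# Predicting in image space puts the residuals in pixels, so the "< 1 pixel"
# rule of thumb applies directly.
residuals = []
for i in range(len(world)):
    keep = np.arange(len(world)) != i
    coeffs = fit_affine(world[keep], img[keep])
    predicted = predict(coeffs, world[i:i + 1])[0]
    residuals.append(np.linalg.norm(predicted - img[i]))

rmse = np.sqrt(np.mean(np.square(residuals)))
print(f"leave-one-out RMSE: {rmse:.3f} pixels")
```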
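For the resampling step itself, the algorithm is usually just a parameter of the warping tool. The sketch below continues the placeholder file names from the earlier GDAL example and writes one output per method; "near" is the option that keeps the original DNs intact, which matters if classification is still to come.

```python
# Warp the same GCP-tagged image three times, once per resampling algorithm
# from the list above. File names are the placeholders used earlier.
from osgeo import gdal

gdal.UseExceptions()

for alg in ("near", "bilinear", "cubic"):  # nearest neighbor, bilinear, cubic convolution
    gdal.Warp(f"georeferenced_{alg}.tif", "with_gcps.tif",
              dstSRS="EPSG:4326",
              polynomialOrder=1,
              resampleAlg=alg)
```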