Digital sensors (T. Schenk § 7), AA 2008-2009

DIGITAL PHOTOGRAMMETRIC SYSTEM
• DATA CAPTURE UNIT: yields the digital image so that the correct 2D fiducial coordinates are associated with each discretized radiometric value, passing through pixel coordinates corrected for systematic errors (optical distortions, calibration, ...).
• RESTITUTION UNIT: allows the operator to handle all the digital images, performing all photogrammetric operations (inner orientation, relative and absolute orientation, stereoplotting, ...).

DATA CAPTURE UNIT
The digital images used in a photogrammetric process can be obtained in TWO WAYS:
• DIRECT acquisition: images taken directly in digital form with digital cameras;
• INDIRECT acquisition: traditional photograms, digitized through suitable devices.

Summary of photogram characteristics:
1. format: 23 cm x 23 cm (in satellites also 12" x 18"); the whole film roll is often scanned;
2. very high geometric resolution and accuracy: ~10 µm pixel size and 2 µm positioning accuracy for aerial triangulation (A.T.);
3. high radiometric resolution: 10 or 12 bits (internal; output 8 bit).

DENSITOMETER
It transforms images from analogue to digital form. It is an instrument built for INDIRECT image data capture, made of:
• a calibrated light source;
• a plane surface on which the photogram is placed, with a small slit where the light source sits;
• a sensor receiving the light after it has passed through the film;
• filters to separate the three colour layers (for colour or I.R. films);
• an analogue/digital converter producing digital output data;
• a display showing the measurement result (i.e. the local density);
• a recording system.

The light, of known and constant intensity, is attenuated by the film: the darker the film, the stronger the attenuation. The light that reaches the sensor produces a current proportional to the light intensity.
The unit area of a pixel is a square with a variable side, according to the desired geometric resolution and to the capability of the densitometer (over 5000 dpi).

SCANNER
Two construction choices, with different moving parts:
• Flatbed scanner: the sensors move over the image area;
• Drum scanner (rotating drum): the image moves as well, mounted on a cylindrical support.
Flatbed scanners use solid state sensors; drum scanners use photomultipliers (photodiodes).
Common characteristic: a sensor detects the light, coming from a calibrated source, that passes through a small region of the film.
Drum scanners are difficult to calibrate because of the cylindrical support; flatbed scanners are more widely used.

FLATBED SCANNER
According to resolution and accuracy, scanners can be classified as:

Scanner          Format cm x cm (UNI)   Geom. resolution (dpi)   Radiom. resolution (bit)   Usage
DTP              21x30, 42x30           300-1200                 8-36                       desktop publishing
Photogrammetric  26x26                  1200-4096                8-24                       photogram scanning

Photogrammetric scanners have very high geometric resolution and accuracy (typically 2-5 µm); there is often software for inner orientation and fiducial mark collection. The scanning time is usually long (from 15' to 45' for a geometric resolution of 15 µm). Disadvantages: high cost and complex software.
Costs: 500-5000 € for DTP scanners, up to 150,000-200,000 € for photogrammetric ones.
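The densitometer described above reports the local film density from the transmitted light, and the dpi figures used for scanners translate directly into pixel sizes on the film. A minimal sketch, assuming the standard definition of optical density D = log10(I0/I); the function names and sample intensities are illustrative, not taken from the slides:

```python
import math

def optical_density(incident_intensity: float, transmitted_intensity: float) -> float:
    """Optical density D = log10(I0 / I): the darker the film, the higher D."""
    return math.log10(incident_intensity / transmitted_intensity)

def pixel_size_from_dpi(dpi: float) -> float:
    """Side of the square sampling area in micrometres for a given scan resolution."""
    return 25400.0 / dpi  # 1 inch = 25 400 um

# Example: a calibrated source of 100 units, 12.5 units reaching the sensor
print(optical_density(100.0, 12.5))   # ~0.90, a mid-grey area of the film
print(pixel_size_from_dpi(5000))      # ~5.1 um per pixel at 5000 dpi
```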
LH-SYSTEM DSW300
Basic technology: fixed CCD and moving photogram
Geometric resolution: 0.5 µm
Geometric accuracy: < 2 µm
Scanning format: 260 x 260 mm²
CCD sensor: 1024x1536, 2029x2044, 2056x3072
Acquiring pixel dimension: 9 µm
Optical resolution: 33 lp/mm, 40 lp/mm, 100 lp/mm
Dimensions: 1238 mm x 1003 mm x 1175 mm
Weight: 150 kg
Operating system: PC Windows
Software: geometric and radiometric calibration, automatic inner orientation
Storage formats: TIFF, JPEG
Acquiring times: 4 minutes in grey tones (12.5 µm), 9.5 minutes in RGB
Another photogrammetric scanner: INTERGRAPH PhotoScan TD.

DTP SCANNERS
DeskTop Publishing scanners have not been developed for photogrammetric applications. They have lower costs (500-5000 €) and widespread diffusion, and their capabilities keep increasing.
DTP scanners are usually flatbed scanners and use a CCD line sensor (see later). Their geometric and radiometric resolutions keep increasing (nowadays at least 1200 dpi, up to 24 bit).
The main drawback is the poor geometric accuracy, due mainly to the mechanics, lens distortions, and the lack of calibration software.

DeskTop Publishing scanners can be used for photogram scanning only after a careful calibration procedure, both geometric and radiometric.
This is done using a precision grid, as in traditional stereoplotters. The grid is scanned at the desired resolution, and the digital positions of the crosses are compared with the calibrated ones. This comparison gives a calibration model to be applied to all scanned photograms (a numerical sketch of this comparison is given a few paragraphs below).

Sample grey values from a scan (rows 1-10, columns 1-20):
 1: 255 236 184 147 116 91 72 56 45 37 30 23 19 15 12 10 8 7 6 5
 2: 255 234 182 146 114 90 71 55 44 35 28 22 18 13 10  9 7 5 4 3
 3: 255 233 182 146 115 90 71 55 44 35 28 21 17 13 10  9 6 5 4 3
 4: 255 235 185 148 117 92 73 57 46 37 30 24 19 15 12 11 8 7 6 5
 5: 255 236 185 148 117 92 73 57 45 37 30 23 19 15 12 10 8 7 6 5
 6: 255 235 184 146 115 90 72 55 44 35 28 22 17 13 10  9 7 5 4 4
 7: 255 234 183 147 116 90 72 56 44 35 28 22 17 13 10  9 6 5 4 4
 8: 255 234 183 147 116 91 72 56 44 36 28 22 17 13 10  9 7 5 4 4
 9: 255 234 183 146 115 91 72 56 45 35 28 22 17 13 10  9 7 5 4 4
10: 255 235 183 147 116 91 73 56 44 37 28 22 18 14 11  9 7 5 4 4

Three different patterns for the assembly of the photosensitive sensors:
• single sensor: acquires the image pixel by pixel, line by line (often placed on a rotating drum);
• line array of sensors: acquires the image line after line. Example: the PhotoScan PSi (Zeiss/Intergraph) has a line sensor with 2048 single sensors, with a pixel size of 7.5 µm;
• matrix array of sensors (frame): acquires the image divided into regions; the single regions are numerically connected using a grid of crosses with known coordinates as reference points.
Linear sensors are the most widely used because they show higher sensitivity, up to five times that of frame sensors.

Every scanner presents radiometric anomalies that produce a local lack of uniformity, sometimes corrected with local low-pass filtering.
Drum scanners have more geometric accuracy problems, due to the various moving parts; but for the highest geometric resolutions drum scanners are used.
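The comparison between scanned and calibrated cross positions mentioned above amounts to estimating a geometric correction model. A minimal sketch, assuming a plain affine transformation fitted by least squares; the function names and the sample coordinates are illustrative, not taken from the slides:

```python
import numpy as np

def fit_affine(scanned_xy: np.ndarray, calibrated_xy: np.ndarray) -> np.ndarray:
    """Least-squares affine transform mapping scanned cross positions (N x 2)
    onto their calibrated grid coordinates (N x 2). Returns a 2 x 3 coefficient matrix."""
    n = scanned_xy.shape[0]
    design = np.hstack([scanned_xy, np.ones((n, 1))])        # rows: [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(design, calibrated_xy, rcond=None)
    return coeffs.T

def apply_affine(params: np.ndarray, xy: np.ndarray) -> np.ndarray:
    """Apply the calibration model to scanned pixel coordinates (N x 2)."""
    n = xy.shape[0]
    return np.hstack([xy, np.ones((n, 1))]) @ params.T

# Four made-up crosses of a 10 mm calibration grid, measured in scanner pixels
scanned    = np.array([[12.0, 9.0], [1012.3, 10.1], [11.5, 1009.8], [1013.0, 1011.2]])
calibrated = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
model = fit_affine(scanned, calibrated)
print(apply_affine(model, scanned))   # should reproduce the calibrated coordinates closely
```

In practice many more crosses and a richer model (for example bilinear or polynomial) would be used, so that the residuals also reveal local distortions.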
PHOTO-SENSORS
The role of the sensor is to 'measure' the film density. Two types of sensors are mainly used: photomultipliers and solid state sensors.
Photomultipliers are based on the external photoelectric effect: when light hits the cathode, electrons are emitted and can be attracted and captured by the anode, so that a current can be measured. To increase sensitivity, the electron flux is amplified by auxiliary cathodes, hence the name photomultiplier. They have a very quick time response and give the highest geometric resolutions, but they cannot be arranged in linear or frame arrays.
Semiconductors work with the internal photoelectric effect: light photons hitting a semiconductor produce pairs of negative charges (free electrons) and positive charges ('holes') that can travel inside the semiconductor if a potential difference is applied, thus producing a detectable current. The photosensitive elements are coupled with electronic devices for transferring and measuring the charge: CCDs (Charge Coupled Devices).
CCD sensors have different sensitivities in the three colour bands R, G and B (lower sensitivity to blue light), therefore there are problems in the alignment of the three colour matrices.

LIGHT SYSTEM IN A SCANNER
Two options: direct light or diffuse light.
Direct light: a light amplifier projects the light over the whole image surface.
Advantages:
• energetically cheap (lamps with low heat radiation are used, also with optical fibres);
• high depth of field (direct light has a small aperture angle);
• low sensitivity to small blurring effects.
BUT: some diffraction can be seen.
Diffuse light: obtained by interposing a diffusing ground glass.
The best photogrammetric scanners make use of diffuse light, because it gives the best image quality: possible dust particles or scratches are not projected onto the image. Furthermore, diffuse light produces less noise than direct light.

OTHER COMPONENTS
Photo carrier. It is usually a modified version of a stereoplotter photo carrier, therefore it has the same geometric accuracy: about 2 µm. The maximum scanning speed is around 10-20 mm/s and must be constant to obtain a uniform pixel size.
Images are acquired in portions, called swaths, whose amplitude is given by the sensor dimension (multiplied by the optical enlargement). To avoid overlaps or shifts between subsequent swaths, the positioning must be very accurate: adjacent swaths that are not in line, or that overlap or separate, cause errors in the digital images.

DIFFERENT PIXEL SIZES
Pixel size along the sensor line depends on the sensor size and the optical enlargement. Pixel size in the scanning direction depends on the scanning speed and the integration interval. Furthermore, the output pixel size (usually square) is the result of some resampling steps.
• Pixel size of the internal sensor: between 10 and 15 µm.
• Scanning pixel size: projection of the pixel on the film; it is the direct (raw) output of the scanner, and depends on the optical system, the speed and the integration time.
• 'Refined' pixel size: final result of scanning and postprocessing; sometimes it is the same as the raw scanning pixel size.

GEOMETRIC RESOLUTION, SAMPLING AND CCD
The sampling interval must not be confused with the pixel size. In an ideal sampling process they are identical: pixels are clearly separated and never overlap over the same image portion (see figure). In some instruments the sampling interval is much smaller than the sensor size; in this case pixels overlap, and blurring and smoothing effects appear.
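The quantities in the pixel-size list above can be related numerically. A minimal sketch with illustrative values (the enlargement factor, carriage speed and integration time are assumptions, not figures from the slides); the across-track relation follows the slide's "sensor dimension multiplied by the optical enlargement":

```python
def scan_pixel_across_um(sensor_pixel_um: float, optical_enlargement: float) -> float:
    """Raw pixel size on the film along the sensor line:
    internal sensor pixel multiplied by the optical enlargement."""
    return sensor_pixel_um * optical_enlargement

def scan_pixel_along_um(speed_mm_s: float, integration_time_s: float) -> float:
    """Raw pixel size on the film in the scanning direction:
    carriage speed times integration interval (result in micrometres)."""
    return speed_mm_s * integration_time_s * 1000.0

# Illustrative values: 12 um internal pixels, 1.25x optics, 15 mm/s speed, 1 ms integration
across = scan_pixel_across_um(12.0, 1.25)     # 15 um along the sensor line
along  = scan_pixel_along_um(15.0, 0.001)     # 15 um in the scanning direction -> square raw pixel
print(across, along)
```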
DIRECT ACQUISITION: DIGITAL CAMERAS
The image plane contains a two-dimensional field of sensors (frame array). The charge state of the sensors (an analogue signal) is read at given times, and the analogue signals are converted into digital signals.
The most used digital cameras employ solid state sensors, usually CCDs. There are also cameras based on 'photoconductor tubes' ("vidicon" is the name of the most used device).
The first CCDs were invented in 1970; the first sensor had 96 pixels. In the '80s sensors reached 500x500 pixel elements. Nowadays there are sensors with more than 5000x5000 elements (a 4096x4096 chip built by Ford Aerospace is on board the Hubble space telescope).

SOLID STATE CAMERAS
They have spectral flexibility and an ever-increasing geometric resolution. One of their main defects was the geometric resolution; the other is the reduced field of view.
• Terrestrial digital cameras: frame sensors.
• Aerial digital cameras: linear sensors (not only).
The main advantage (compared with film cameras) is the prompt availability of the images for processing and analysis; this also allows the development of 'real time photogrammetry', for instance in mobile mapping vehicles or machine vision (time delay between image capture and analysis: 20 ms).

BASIC WORKING PRINCIPLE
The semiconductor chip is made of a large number of small light-sensitive elements: each element (Sensor ELement, SEL), usually rectangular, is only a few µm wide and records the light intensity hitting its surface. The sensor is attached to the image plane, with the elements (SELs) in a linear or areal pattern. It is usually protected by glass or micro-lenses, and can be fixed (conventional cameras) or moved in the focal plane to increase resolution.
Schematic diagram of the basic element of a CCD: a capacitor and a semiconductor (usually silicon); the metal electrodes are separated by an insulating substance (an oxide).

When Electro-Magnetic Radiation (EMR) hits the element, photons with energy higher than the semiconductor energy gap can be absorbed in the 'depletion region', creating electron-hole pairs. By applying a positive voltage to the electrode, the electrons are forced to stay in the depletion region, while the holes (positive charges) move toward the electric ground. The result is a charge collected at the opposite sides of the insulating region, which can be drained and measured.
The accumulated charge is proportional to the number of absorbed photons, therefore the total charge in each cell is proportional to the radiation intensity and to the exposure time.
The band gap energy of silicon corresponds to photons with a wavelength of 1.1 µm. Higher energy photons (shorter wavelengths) can be absorbed outside the depletion region; lower energy photons do not produce electron-hole pairs. Some photons do not produce any current: the efficiency is less than 1.

One frame = one pixel matrix = a monochromatic image. How are colour (RGB) images obtained?

An alternative solution is represented by the new FOVEON sensors: three different layers of sensors, which also act as filters, detect the three primary colours independently. The final image is generated with information coming from all three layers.
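The 1.1 µm figure quoted above follows from the relation between photon energy and wavelength. A small numerical check, assuming the textbook values of Planck's constant, the speed of light and a silicon band gap of about 1.12 eV (the band-gap value itself is not stated in the slides):

```python
# Cutoff wavelength of a silicon CCD: lambda_max = h * c / E_gap
h = 6.626e-34        # Planck constant, J*s
c = 2.998e8          # speed of light, m/s
eV = 1.602e-19       # one electronvolt in joules
E_gap = 1.12 * eV    # band gap of silicon (assumed textbook value)

lambda_max = h * c / E_gap
print(lambda_max * 1e6)   # ~1.11 micrometres: longer wavelengths are not detected
```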
CHARGE TRANSFER
A CCD is structured as a charge transfer device (CTD). A sequence of voltage pulses moves the charges from one sensor to the adjacent one, maintaining the ordering. When the charge under one sensor reaches the end of the line, it is drained, i.e. collected and measured.
The original position of the accumulated charge depends on the number of pulses used, and can be stored together with the charge information: this gives the pixel position. Furthermore, depending on the pulse phases (2, 3 or 4), the number of charged elements drained together can be changed: one charge measurement is then the sum (integration) of the charge of several elements. Therefore the single image element, the PIXEL, consists of the area covered by the number of sensors drained together.

Different techniques have been developed for a fast and precise transfer of the charge. According to the organization of the drainage for signal transfer, CCDs are classified into:
• Interline Transfer (IT)
• Frame Transfer (FT)
• Linear arrays with bilinear readout
• Time Delay and Integration transfer (TDI)

CMOS: Complementary Metal Oxide Semiconductor.

TIME DELAY AND INTEGRATION TRANSFER (TDI)
This kind of charge transfer substitutes the FMC (forward motion compensation) device. Instead of applying a displacement to the image plane (as in aerial film cameras), the image motion is compensated by a charge transfer: during the integration time, the charge is moved from one pixel (or column) to the adjacent one, where the integration continues. This is equivalent to moving the sensor during the exposure time.

DIGITAL AIRBORNE CAMERAS
The ground projection of an array pixel is called IFOV (Instantaneous Field Of View). IFOV, integration time and charge transfer time define the acquisition geometry in terms of effective ground pixel dimensions: if the charge transfer interval is very short, the acquisition can be considered continuous. BUT if the speed or direction of flight changes, the images can be distorted.
With LINEAR ARRAYS the main problem is the determination of the Exterior Orientation: each line has to be oriented, which can mean, for instance, 300 E.O. sets per second!

It is possible to mount several sensors for the simultaneous acquisition of images at different inclinations. In this way different overlapping independent strips are obtained, giving a sort of stereoscopic pair (the images have to be 'registered', i.e. relatively oriented).

DIGITAL AIRBORNE CAMERAS
• LH-System ADS40 camera (line): 3 linear sensors (similar to satellite ones); each image line is oriented by an integrated GPS/INS (IMU) system; simultaneous capture of stereoscopic images.
• Z/I Imaging DMC camera (multi-cone): 4 CCD frame sensors in central perspective; the four images are mosaicked into a single image.

AIRBORNE DIGITAL SCANNER (ADS40)
• DLR and LH Systems
• 1 lens, 3 directions of view
• along-track stereo
• airborne GPS/INS
• different line-CCD configurations in the focal plane

In the focal plane, three panchromatic lines and further multispectral lines are mounted behind one single optics. The ADS40 uses 8 parallel lines of sensors. The three colour lines (with 12000 pixels each) are optically overlapped during the flight by using mirrors (the near-IR line is displaced). Stereo pairs are obtained by acquiring strips with three different angles of view: nadir, backward, forward.

[Slide figure: mono-CCD vs. multi-camera configurations.]
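The effective ground pixel of a linear array, mentioned in the IFOV discussion above, can be estimated from flying height, focal length, pixel size and ground speed, and it also fixes the line (and hence exterior orientation) rate. A minimal sketch with illustrative numbers; none of the values below are taken from the slides:

```python
def across_track_gsd_m(pixel_size_um: float, focal_mm: float, height_m: float) -> float:
    """Ground projection of one SEL across track: pixel size scaled by height / focal length."""
    return (pixel_size_um * 1e-6) * height_m / (focal_mm * 1e-3)

def along_track_gsd_m(ground_speed_ms: float, integration_time_s: float) -> float:
    """Ground distance swept by one line during the integration time."""
    return ground_speed_ms * integration_time_s

# Illustrative values: 6.5 um pixels, 62.5 mm focal length, 2000 m flying height, 70 m/s ground speed
gsd = across_track_gsd_m(6.5, 62.5, 2000.0)   # ~0.21 m on the ground
line_rate = 70.0 / gsd                        # ~340 lines/s, i.e. ~340 E.O. sets per second
print(gsd, line_rate)
```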
[Slide figures: world-wide market distribution of airborne mapping cameras, status November 2005 and forecast 2010.]

SATELLITE PHOTOGRAMMETRY
Differences with respect to aerial photogrammetry:
• different sensors
• different processing
• different resolutions
NOW aerial and satellite images can be used for similar purposes -> similar sensors and geometry -> high resolution.

IKONOS
Agency: Space Imaging (USA)
Launch: December 1999
Orbit: sun-synchronous, quasi-polar, height 680 km
Resolution: 1 m PAN, 4 m MS
Strip width: 11 km
Revisit time: 2-3 days

QUICKBIRD
Agency: EarthWatch / DigitalGlobe (USA)
Launch: October 2001
Orbit: sun-synchronous, quasi-polar, height 600 km
Resolution: 0.8 m PAN, 3.2 m MS
Strip width: 22 km
Revisit time: 1-5 days

EROS
Agency: ImageSat (Israel)
Launch: 5 December 2000
Orbit: sun-synchronous, quasi-polar, height 480 km
Resolution: 1.8 m PAN (0.5-0.9 µm band)
Strip width: 12.5 km
Revisit time: 1 day

HOW TO OBTAIN STEREO PAIRS?
Two solutions: CROSS TRACK and ALONG TRACK.

CROSS TRACK - 1
A single line of CCDs (one or more segments) combined with a rotating mirror.
[Figure labels: CCD, mirror, filter, focusing system, flight direction.]

CROSS TRACK - 2
• A single track is acquired.
• Stereo pairs come from strips acquired in two different orbits.
[Figure labels: flight direction, orbit 1, orbit 2, nadir.]

ALONG TRACK - 1
• Stereo images are acquired in the same orbit, with a small time delay (seconds).
• Same geometry as AIRBORNE digital cameras.
• Optical system: ONE or MORE lenses.
[Figure: forward, nadir and backward viewing of the same point P at times t1, t2, t3 along the orbit.]

ALONG TRACK - 2
• Multi-line sensors.
• CCD lines simultaneously acquire images in various directions (nadir, off-nadir).
• CCD lines are usually parallel to each other and orthogonal to the flight direction.
[Figure: flight direction with nadir, forward (αF) and backward (αB) viewing angles.]

ALONG TRACK - 3
• Possibility of rotating the view: synchronous or asynchronous acquisition.
[Figure: constant angle of view during image acquisition vs. variable angle of view during image acquisition.]

ORIENTATION OF LINEAR CCDs
• The image is formed by a sequence of lines acquired independently along the flight direction.
• FOR EACH LINE the 6 Exterior Orientation parameters are UNKNOWN: traditional orientation is not feasible.

ORIENTATION = geometric correction of high resolution satellite imagery.
• Rigorous models: reconstruction of the acquisition geometry through collinearity equations, using ephemerides, angular attitude, focal length and FOV, atmospheric refraction.
• Non-rigorous ("black box") models: generalized models, not dependent on the acquisition geometry: polynomial functions, DLT, Rational Function Model (RFM), neural networks.
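Among the non-rigorous models listed above, the Rational Function Model expresses image coordinates as ratios of polynomials in the normalized object coordinates. A minimal sketch of how such a model is evaluated, assuming third-order polynomials and RPC-style normalization offsets and scales; the term ordering and the dictionary keys are illustrative assumptions, not a specific vendor format:

```python
import numpy as np

def rfm_terms(P: float, L: float, H: float) -> np.ndarray:
    """Cubic polynomial terms of normalized latitude (P), longitude (L) and height (H).
    A simplified ordering of the usual 20 RPC terms."""
    return np.array([1, L, P, H, L*P, L*H, P*H, L*L, P*P, H*H,
                     P*L*H, L**3, L*P*P, L*H*H, L*L*P, P**3,
                     P*H*H, L*L*H, P*P*H, H**3])

def rfm_project(lat, lon, h, coef, off, scale):
    """Map ground coordinates to image (line, sample) with a rational function model.
    coef: 20-element arrays 'line_num', 'line_den', 'samp_num', 'samp_den';
    off / scale: normalization offsets and scales for lat, lon, h, line, samp."""
    P = (lat - off['lat']) / scale['lat']
    L = (lon - off['lon']) / scale['lon']
    H = (h - off['h']) / scale['h']
    t = rfm_terms(P, L, H)
    line_n = (coef['line_num'] @ t) / (coef['line_den'] @ t)
    samp_n = (coef['samp_num'] @ t) / (coef['samp_den'] @ t)
    return line_n * scale['line'] + off['line'], samp_n * scale['samp'] + off['samp']
```

The coefficients are typically distributed with the imagery or estimated from ground control points by least squares, which is what makes the model independent of the acquisition geometry.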