PostPosted: Mon, 22-07-13, 14:06 GMT 
Hi all,
I posted a bit at the start of this project in the A Windows Story thread: viewtopic.php?p=10343#p10343
I decided that this was a bad approach and came up with a different plan.
I wrote a couple of programs that use the quality and state bands from the MOD09 files. One masks clouds, deep/moderate ocean, and high or moderate aerosol quantity. The other masks all pixels with less than the highest quality.
The output from these programs is a pair of 8-bit files that are read by another program, which combines the two and creates a mask.
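Roughly, that combination step looks like this (a simplified sketch with placeholder file names, not the actual program):

Code:
import numpy as np

# both inputs are 8-bit rasters where nonzero means "pixel is usable"
state_ok = np.fromfile("state_mask.img", dtype=np.uint8).reshape(2400, 2400)
qual_ok  = np.fromfile("quality_mask.img", dtype=np.uint8).reshape(2400, 2400)
# a pixel survives only if it passed both the state and the quality screens
mask = (state_ok > 0) & (qual_ok > 0)
mask.astype(np.uint8).tofile("combined_mask.img")
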
Then bands 1, 3, and 4 are combined into an RGB image using the mask.
I then use gdalwarp to reproject from sinusoidal to geographic WGS84.
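That step is just a gdalwarp call; through the GDAL Python bindings it would be roughly the following (file names are placeholders):

Code:
from osgeo import gdal

# MODIS sinusoidal tiles normally carry their projection, so srcSRS can
# usually be omitted; it is spelled out here for clarity.
modis_sinu = ("+proj=sinu +lon_0=0 +x_0=0 +y_0=0 "
              "+a=6371007.181 +b=6371007.181 +units=m +no_defs")
gdal.Warp("tile_wgs84.tif", "tile_sinu.tif",
          srcSRS=modis_sinu, dstSRS="EPSG:4326",
          resampleAlg="near")  # nearest neighbour keeps the zero no-data flag intact
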
The biggest problem is that, after masking, a lot of files are needed for a single area to fill in all the missing data.
I used files from six different 8 day composites in the following example to try to mosaic Hawaii, and there is still missing data.
Two grid tiles are involved: h03v06 and h03v07 (the h stands for horizontal position and the v for vertical). Each file is approximately 50 to 60 MB.
That will probably make it impossible to do the tropics or other persistently cloudy areas. :shock:
I'll probably have to use a BMNG map as a base.
Here are a couple of the images produced and the result after adding data to the base. There are 4 more. The base started as a file of all zeros.

Attachment: 500m-rgb2.jpg (intermediate file)

Attachment: 500m-rgb3.jpg (intermediate file)

Attachment: 500m-rgb.jpg (result after processing 6 files)


The real goal of this project is to create a 250 meter resolution map. The green and blue channels are only 500 meter; only band 1, the red channel, is 250. If I read the 500 meter bands 1, 3, and 4, see which is greater (band 1 vs. band 3, and band 1 vs. band 4), and subtract the lesser from the greater, will using those differences be better than simply scaling the green and blue up to 250 meters? (Scaling up from 2400x2400 to 4800x4800.) That would be rather easy, but.........
This is what I plan to try, though I'm not sure whether it will produce a better result.

Code:
if band1-500m > band3-500m: w = band1-500m - band3-500m, else: x = band3-500m - band1-500m
if band1-500m > band4-500m: y = band1-500m - band4-500m, else: z = band4-500m - band1-500m
read band1-250m
output pixel1 = band1-250m[1], band3-500m[1], band4-500m[1]
output pixel2 = band1-250m[2],
                band3-250m[2] = band3-500m[1] + w (or band3-500m[1] - x),
                band4-250m[2] = band4-500m[1] + y (or band4-500m[1] - z)
.........
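In array form, the idea amounts to something like this little numpy sketch (my names, not the actual program; it assumes the 500 m bands are 2400x2400 arrays and band 1 at 250 m is 4800x4800):

Code:
import numpy as np

def sharpen_band(band_500, band1_500, band1_250):
    # signed offset between band 1 and the target band at 500 m
    diff = band1_500.astype(np.int32) - band_500.astype(np.int32)
    # replicate each 500 m offset over the 2x2 block of 250 m pixels it covers
    diff_250 = np.repeat(np.repeat(diff, 2, axis=0), 2, axis=1)
    # assume the target band tracks band 1 with that constant local offset
    return band1_250.astype(np.int32) - diff_250

# assuming b1_500, b3_500, b4_500 (2400x2400) and b1_250 (4800x4800) are loaded
b3_250 = sharpen_band(b3_500, b1_500, b1_250)   # blue at 250 m
b4_250 = sharpen_band(b4_500, b1_500, b1_250)   # green at 250 m
rgb_250 = np.dstack([b1_250, b4_250, b3_250])   # band 1 = red, band 4 = green, band 3 = blue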


cartrite


PostPosted: Mon, 22-07-13, 17:34 GMT 
Hi Steve,

While this is pretty interesting, I can hardly see anything on your images. They are very dark! Note that my 24" monitor is hardware calibrated with a high-quality tool...

Fridger


PostPosted: Mon, 22-07-13, 20:18 GMT 
They are dark, aren't they? :wink: The background is zero; I need the zero as a no-data flag. I am still experimenting with scaling pixel values. The original files are signed 16 bit, with values from -100 to 16000. The images of the world I posted in the other thread were scaled from -100..16000 to 0..255. I can't imagine what a negative value for reflectance could actually mean. The images above were scaled from -100..8000 to 0..255. Do they still look dark?
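For reference, the stretch I mean is just a linear rescale, something like this sketch (assuming masked-out pixels are already 0 in the input and should stay 0):

Code:
import numpy as np

def stretch_to_8bit(band, lo=-100, hi=8000):
    # map lo..hi to 1..255, keeping 0 reserved as the no-data flag
    data = band.astype(np.float32)
    scaled = (data - lo) / (hi - lo) * 254.0 + 1.0
    out = np.clip(np.round(scaled), 1, 255).astype(np.uint8)
    out[band == 0] = 0
    return out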

I wrote some code to bring the 500 m green and blue bands up to 250 m using the band 1 differences. It still looks better when I just scale those bands up, though. Some noise gets inserted when I try to infer the missing values.
cartrite


PostPosted: Tue, 23-07-13, 21:03 GMT 
I went back to the drawing board. I noticed a change in quality after I changed the way I scaled up the green and blue channels from 500 m to 250 m. Long ago I used HDFLook, and that program did it that way; I'm not sure how, though. I never looked into how it worked. When I started this recently, I used the MODIS Reprojection Tool to extract the bands I wanted and scale up by setting the pixel size from 463 meters to 231 meters. Only band 1, the band for the red channel, was already at 231. I also scaled the 16 bit files to 8 bit with a -100..16000 to 0..255 stretch, as I stated above. This was the result.


Attachment: old-way.jpg


I just changed my approach to doubling the size of the images with a little program. At first I converted the files to 8 bit gray maps with the stretch mentioned above, but that created a lot of noise in the blue channel for some reason. So I changed the program to read and calculate the green and blue channels from the 16 bit files and then create the 8 bit files with a -100..8000 to 0..255 stretch. Here is the result. I see a pretty good improvement: not only is it lighter, it also looks a lot sharper. I can see the difference even in the thumbnails.


Attachment: new-way.jpg (both of these images are at 250 meter resolution; they were cropped to 1600x800 to fit the forum size limit, not scaled down)

The way I calculated the green and blue channels at 250 m was like this.
I used band 1 at 250 meters and bands 1, 3, and 4 at 500 meters as input files.
The program reads all the 500 meter files one line per loop. It reads the band 1 file at 250 meters two lines per loop and writes two lines per loop.
I subtract or add band 1 and band 3 (or band 4) depending on which is greater, so there are no negative numbers. I then add or subtract this value from the values in band 1 at 250 m. So if band 1 is greater than band 3, x = band1-500m - band3-500m, and then for band3-250m, line 1, pixel 1 = band1-250m pixel 1 - x (or + x, depending on how x was generated). The same goes for line 1 pixel 2, line 2 pixel 1, and line 2 pixel 2. It seems to produce a better result than changing the size from 2400x2400 to 4800x4800 with a re-sampling method. The red channel, band 1, comes from a 250 meter sensor, so the band 1 values in the 500 m files were probably averaged down, derived from the same 4 pixels I am now using to construct the green and blue channels. The green (band 4) and the blue (band 3) only have 500 m sensors. I just wonder if there is a way to get a more accurate value. I've thought about comparing the four 250 m band 1 pixels to see which is highest or lowest and coming up with a weight to use in calculating the green and blue channels (see the sketch below).
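One way I might read that weighting idea (just a sketch, not tested): scale the 500 m value by the ratio of each 250 m band 1 pixel to the mean of its 2x2 block, so the sub-pixel variation of band 1 drives the sub-pixel variation of green and blue. In numpy, with made-up array names:

Code:
import numpy as np

def weight_by_band1(band_500, band1_250):
    h, w = band_500.shape
    # mean of each 2x2 block of 250 m band 1 pixels (i.e. band 1 back at 500 m)
    blocks = band1_250.reshape(h, 2, w, 2).astype(np.float32)
    block_mean = np.maximum(blocks.mean(axis=(1, 3)), 1.0)  # avoid divide-by-zero
    # per-pixel weight: each band 1 value relative to its block mean
    weight = band1_250 / np.repeat(np.repeat(block_mean, 2, axis=0), 2, axis=1)
    # apply the weight to the replicated 500 m band
    coarse = np.repeat(np.repeat(band_500.astype(np.float32), 2, axis=0), 2, axis=1)
    return coarse * weight

# assuming b3_500, b4_500 (2400x2400) and b1_250 (4800x4800) are loaded
b4_250 = weight_by_band1(b4_500, b1_250)   # green at 250 m
b3_250 = weight_by_band1(b3_500, b1_250)   # blue at 250 m
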
cartrite :D


PostPosted: Tue, 23-07-13, 23:22 GMT 
Here are a couple of images of Massachusetts. One is raw and unfiltered, and the other went through the mask created for this file. Both use green and blue channels calculated as above. I guess you can tell which is which. I noticed that areas are being masked out that aren't that bad, probably smog flagged as high or moderate aerosol quantity. I currently only use pixels with low aerosol quantity, and no deep or moderate oceans either.

Attachment: nw-unfiltered.jpg

Attachment: nw-filtered.jpg


cartrite


PostPosted: Wed, 31-07-13, 17:48 GMT 
t00fri wrote:
Hi Steve,

While this is pretty interesting, I can hardly see anything on your images. They are very dark! Note that my 24" monitor is hardware calibrated with a high-quality tool...

Fridger

I've been exploring a different way of proceeding. The images you commented on were a test to see if I can add missing pixels to an existing mosaic. What I found is that this will cause issues, because some areas have persistent cloud cover. Since the pixels will come from a variety of days and viewing angles, this will undoubtedly cause problems. For islands like Hawaii, I can hopefully find a time period when cloud cover is minimal. For other areas, like the tropics or other places with a lot of cloudiness, I'm not sure what to do yet.

There is, however, a way of blending pixels from different viewpoints and sun angles: the BRDF, which stands for Bidirectional Reflectance Distribution Function. This should enable me to adjust the entire mosaic's reflectance to a solar zenith at high noon and a low viewing angle. But there is a catch: although the code to do this is not that hard to understand, it depends on 3 parameters. There is a product available at 500 m resolution called MCD43A4, a 16 day composite that adjusts the reflectance values to a common view geometry at the local solar noon zenith angle. This not only brightens the mosaic but also removes many of the shadows, making it a better candidate for normal maps. The MCD43A1 files provide the 3 parameters mentioned above, but they are intended only for that data set. There is no mention of how they went about obtaining these parameters, except that they used the daily files for that 16 day period to produce them.
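For context, applying those 3 parameters is the easy part: the MCD43 products use a linear kernel model, so the modeled reflectance for a chosen geometry is just a weighted sum (the RossThick and LiSparse-Reciprocal kernel values themselves come from formulas in the product literature and are not computed here):

Code:
def modeled_reflectance(f_iso, f_vol, f_geo, k_vol, k_geo):
    # reflectance = f_iso + f_vol * K_vol(geometry) + f_geo * K_geo(geometry)
    return f_iso + f_vol * k_vol + f_geo * k_geo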

What I need to find out is how to create parameters for the 8 day mosaic and for the many daily files that will fill the missing areas of the 8 day mosaic. So far, I'm pretty sure that I need the EVI or NDVI vegetation indices. I can get the EVI from bands 1, 2, and 3 at 500 m, or the NDVI from bands 1 and 2 in the 250 m files.
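For anyone following along, these are the standard index formulas (band 1 = red, band 2 = near infrared, band 3 = blue on MODIS), assuming the raw values have first been rescaled to 0..1 reflectance (scale factor 0.0001):

Code:
def ndvi(red, nir):
    # Normalized Difference Vegetation Index
    return (nir - red) / (nir + red)

def evi(red, nir, blue):
    # MODIS Enhanced Vegetation Index with the standard coefficients
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)
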
I'm also pretty sure that topography and the optical density of the atmosphere play a role in "describing the scene", but .....
For the atmosphere there is a correction algorithm I can use, so I can assume a constant for good conditions. I can also use 90 m SRTM3 files for the topographic weight, but what to do with that, who knows. Right now I'm guessing that it may adjust the solar zenith, the viewing zenith, or both? Not sure.

All the papers I've read so far seem to have been written pre-launch of the MODIS satellites. They only discuss theory. I haven't found anything on how to get the parameters, only on what to do with them afterwards.........
The site that gives the most info for these MODIS BRDF and albedo files is Boston University: http://www-modis.bu.edu/brdf/userguide/intro.html
The site contains links to code that uses the parameters. The references section has the names of the papers I've read. I googled them.

cartrite


PostPosted: Tue, 06-08-13, 12:56 GMT 
I've looked into this BRDF and found that there is no way I can actually do it. I've read a lot of theory, but only a few sentences on the actual method. One paper I read seemed to confirm that Leaf Area Index (LAI) is not really totally related to NDVI or EVI; during parts of the life cycle of a particular leaf it will be related, at other times not. So I looked at the actual data they put out in the MCD43A4 files, and although it's a very good image as far as compensating for different lighting conditions and view angles goes, it does not have the sharpness of actual reflectance values. It's really quite amazing, though, when you think of what is actually being done: creating a realistic image from a BRDF model.

So I'm left with the question: is there a way to correct / equalize pixels from different days, view angles, sun positions, and orbit tracks? What I have to work with are the lat/long coordinates of the pixel, the sun position via the solar_zenith file, the view angle via the view_zenith file, and the orbit track via a relative azimuth file. I think the biggest problem is the orbit track. If the spacecraft is heading north, it tends to be behind the sun; heading south, well... This brings up forward and backward scattering, and that is what I need to compensate for. Is there a way? These images show what I'm dealing with.

Attachment: 500m.jpg (the original data, showing what I'm trying to correct)

Attachment: 1025-1110-500m.jpg (my best attempt to correct the above image; the file name holds the constants used when the relative azimuth was negative or greater than 90 degrees: 1.025 and 1.110)


So far I've tried using just the relative azimuth file to lighten or darken the image. The problem with that is the data: for some reason, the edge of the original data swath has pixels from the opposite track. So if Terra or Aqua was heading north, pixels from another track heading south get used at the edge. This becomes quite apparent when using the cosine of the relative azimuth angle to add to or subtract from the reflectance value. So I've played around with constants to add to the darker areas, with limited success. I can't quite get it, though.
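Very roughly, the experiment looks like this (a sketch of the idea only, with a made-up strength for the cosine term; not the actual program):

Code:
import numpy as np

def equalize_by_azimuth(refl, rel_az_deg, k_neg=1.025, k_back=1.110, amp=200.0):
    out = refl.astype(np.float32)
    neg = rel_az_deg < 0            # pixels handled with the first constant
    back = rel_az_deg > 90          # pixels handled with the second constant
    out[neg] *= k_neg
    out[back] *= k_back
    rest = ~(neg | back)
    # elsewhere, add or subtract an amount driven by cos(relative azimuth)
    out[rest] += amp * np.cos(np.radians(rel_az_deg[rest]))
    return out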

cartrite


PostPosted: Wed, 23-10-13, 11:08 GMT 
I found out something about a month ago that some may find interesting: the 500 m Blue Marble files are probably not really 500 meter resolution. The MODIS Aqua and Terra satellites take images at approximately 466 meters per pixel at nadir, and the resolution quickly drops off as the scan goes off nadir. I've been processing 8 day composites that only print pixels within 8 degrees of nadir, and at least 75% of the images are blank. Even with data from both satellites, the images have many blank areas. This even happens over many years using the same 8 day files, Julian day 225 for example. This leads me to conclude that much of the Blue Marble data is either not actually 500 meter resolution or is made up somehow through some fancy mathematics.
cartrite


PostPosted: Thu, 24-10-13, 2:21 GMT 
That does not sound at all surprising.

I was working on that data and some of it did look a bit upscaled.
There are numerous areas that look to be 6 or 8 bit color in a 24 bit image.


PostPosted: Thu, 24-10-13, 10:51 GMT 
John Van Vliet wrote:
There are numerous areas that look to be 6 or 8 bit color in a 24 bit image.

I've noticed this too in some areas. But in my case, I think it may have to do with gdal_translate's method of scaling or the scaling factors I'm using. Some of the detail may be getting lost when scaling from 16 bit to 8 bit for RGB construction.

As for resolution, the view zenith angle layer in the MOD09A1/MYD09A1 files is what I'm using to screen out off-nadir pixels. I've found that screening out angles greater than 10 degrees is a pretty good compromise; anything lower and too many pixels are screened out. "Sharpness loss" is minimal, but there is still a little color mismatch in areas of the mosaics. This is probably due to limitations in the atmospheric correction program, which is run on the 8 day composites: as the off-nadir angle increases, so does the amount of atmosphere between the sensor and the pixel. So I still have to equalize somehow.
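The screening itself is trivial, something like this (the angle layers are stored as int16 in hundredths of a degree, as far as I can tell; check the SDS scale factor):

Code:
def screen_off_nadir(refl, view_zenith_raw, max_deg=10.0, scale=0.01):
    # blank out pixels whose view zenith angle exceeds max_deg degrees
    out = refl.copy()
    out[view_zenith_raw * scale > max_deg] = 0   # 0 = no data, to be filled later
    return out
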
cartrite

