Hi All, Gordon/Trevor:
Could you Gents perhaps put together a synopsis (or tutorial-style) thread of how you take the image and then what steps you go through in processing?
Many of us are not familiar with this type of photography (RGB, etc.), and speaking personally my concerns are not so much about the equipment, but about how you figure/calculate how long to take each image (and how many of each) and then how you process it all into that incredible image that we all get to see here on the site.
This all comes from seeing many images on the site here with descriptions such as:
Quote:
This is made up of 14x4 minutes Red, 9x4 minutes Green and 15x4 minutes Blue,.......
What made you decide to only take 9 x 4 of the Green, or 14 of the Red?
Is Green a less prominent color and not as critical, or is it a more prominent color and therefore to make a nice image it takes less exposures?
It would help many of our members to better understand how those images get put together. The mechanical stuff, although technical, is fairly straightforward. The "Black Magic" of it all seems to be in the processing.
Mike
Following a request from Mike ("DizzyGazer") about the "Black Magic" of RGB imaging and its processing, I thought I would put something here so that all who are interested in imaging can have a look. I will try and answer all of Mike's questions in the process.
Firstly, RGB imaging differs from imaging with a DSLR or OSC (One Shot Colour) camera in that a series of separate exposures has to be taken for each colour channel, i.e. R, G and B (Red, Green and Blue). The end result is a colour image, the same RGB result you would get from a DSLR or OSC camera. So why take RGB images with a monochrome camera and separate colour (RGB) or emission-line (Ha, OIII, SII) filters?
Well, the answer is that monochrome cameras are more sensitive and they allow more flexibility, i.e. you can take more or fewer exposures through each filter, and in the case of emission-line imaging you have to have a monochrome camera. The downside of this method is that the overall exposure time is much longer, as you have to take separate exposures through each filter. For example, if you took a colour picture of M42 through an OSC camera and the total exposure time was 2 hours (e.g. 24x5 minutes), then the same picture would take approximately 3 times longer with a monochrome camera and RGB filters, as you would need 2 hours through each filter.
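To make that arithmetic concrete, here is a trivial Python sketch of the comparison. The numbers are just the M42 example above, and the threefold factor assumes you want the same integration through each filter as the OSC total; real plans often weight the filters differently.

```python
# OSC vs monochrome+RGB total imaging time, using the M42 example above.
# Assumes equal integration per filter as the OSC total (an assumption,
# not a rule -- filter bandpass and camera sensitivity change the balance).
subs, sub_minutes = 24, 5                 # 24 x 5-minute subframes
osc_total = subs * sub_minutes            # one pass through a colour sensor
mono_rgb_total = 3 * osc_total            # a full pass per R, G and B filter
print(osc_total, "min OSC vs", mono_rgb_total, "min mono+RGB")  # 120 vs 360
```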
Regarding exposure times and how many to take, this depends on a number of things: the subject, light pollution, sky conditions, pixel size, focal ratio, etc. Some subjects, such as M31 and M42, have very bright cores and fainter outer areas, in which case several different exposure lengths will have to be taken for each filter (e.g. 1x30sec, 10x60sec, 10x120sec and 10x240sec) and the resulting exposures combined together using layer masks so as not to burn out the core. You will of course have to do this for each filter.

The general rule is to always go for as long an exposure as possible without sky glow becoming a problem in your pictures or the stars becoming over-saturated. Where I live this is a maximum of about 6 minutes when using RGB filters; above that, sky glow and light pollution become a real problem. You should then go for as many subframes as possible; this is where you see the expression 10x5 minutes, which means 10 exposures of 5 minutes for each filter. The reason for taking many exposures is that when the subframes are combined, the signal-to-noise ratio improves (roughly with the square root of the number of subframes), meaning that you get a smoother image with less noise which is easier to process.

Going on to one of Mike's questions about my image of NGC 5907, where I used a different number of subframes for each filter: this was not intentional, as the plan was to do 15x4 minutes for each filter. The problem was that a combination of satellite trails, aeroplane lights and rapidly approaching daylight ruined some of the subframes, hence the different numbers involved.
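For anyone who wants to see that square-root rule in action, here is a toy Python/NumPy simulation. The signal and noise levels are entirely invented, nothing here comes from a real camera; it just shows the signal-to-noise ratio growing roughly with the square root of the number of frames averaged.

```python
import numpy as np

# Toy demonstration (synthetic numbers) that stacking improves
# signal-to-noise roughly with the square root of the subframe count.
rng = np.random.default_rng(42)
signal = 100.0                 # "true" sky/object level, arbitrary units
noise = 10.0                   # per-frame random noise

for n in (1, 4, 16, 64):
    frames = signal + noise * rng.standard_normal((n, 1000))
    stack = frames.mean(axis=0)          # average-combine n subframes
    print(f"{n:3d} subframes -> SNR ~ {signal / stack.std():5.1f}")
    # prints roughly 10, 20, 40, 80: doubling SNR needs 4x the frames
```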
Once all the subframes are collected the fun starts, i.e. calibration and processing. I am going to stick my neck out here, as I don't calibrate my images in the same way as most people. Generally you take dark, bias and flat field frames and use those to calibrate the subframes before you combine and process them. This is generally done as follows:
1. Dark frames - same exposure time as the subframes but with the lens covered, usually at least 10, which are then median combined; the master dark frame is subtracted from each raw subframe. The result is that thermal noise, hot pixels, etc. are removed from the subframes prior to processing.
2. Flat field frames are more complicated. They involve pointing the telescope at an evenly illuminated light source, such as the twilight sky or early morning daylight, and taking an exposure; this is where it gets complicated, as the exposure time depends on the camera, filter, etc. Again, about 10 flat field frames are taken for each filter and, ideally, dark frames of the same length as the flat field frames have to be taken (as well as bias frames) and subtracted from the flat field frames before they are combined and divided into the raw subframes. Flat field frames are effectively a picture of the imaging train, so they show things such as dust donuts, vignetting, etc., and these are corrected out of the raw frames.
3. Bias frames are very short (almost zero length) exposures which record the bias level of the chip; this is then subtracted out of the raw frames. (A rough sketch of how these steps fit together follows this list.)
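For those who like to see the steps spelled out, here is a minimal Python/NumPy sketch of the calibration pipeline described in the three points above. The frame arrays are synthetic stand-ins (a real pipeline would load FITS files, e.g. with astropy, and software such as Maxim DL does all of this for you):

```python
import numpy as np

# Synthetic stand-ins for the calibration frames described above.
rng = np.random.default_rng(0)
shape = (10, 64, 64)                       # (n_frames, height, width)
dark_frames = 100 + 5 * rng.standard_normal(shape)        # lights-length darks
flat_frames = 30000 + 50 * rng.standard_normal(shape)     # evenly lit flats
flat_dark_frames = 100 + 5 * rng.standard_normal(shape)   # darks for the flats

master_dark = np.median(dark_frames, axis=0)   # median combine rejects outliers
master_flat = np.median(flat_frames, axis=0) - np.median(flat_dark_frames, axis=0)
master_flat /= master_flat.mean()              # normalise so division keeps scale

def calibrate(light):
    """Dark-subtract then flat-divide one raw subframe."""
    # the master dark already carries the bias signal, since it was exposed
    # for the same length as the light; separate bias frames matter mainly
    # when darks have to be scaled to a different exposure length
    return (light - master_dark) / master_flat
```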
Now you can see why in some instances the taking and processing of images can be time consuming.
Here is where I differ from most people, in that I don't use flat field frames or bias frames, and when it comes to dark frame subtraction I use the simple auto dark subtraction facility in Maxim DL. When I am starting my image run the software tells me to cover the telescope, takes a single dark frame of the same exposure as my raw frames, then tells me to uncover the telescope and away it goes; as each subframe is downloaded to the computer the dark frame is automatically subtracted. Although this is not the ideal method, it works for me, but you have to make sure your optics are clean.
Once all the exposures are done I then combine all of the Red subframes using either median combine or sigma clip in Maxim DL. Then I stretch the image using a log stretch with the maximum pixel and 16-bit settings in Maxim DL; this stretch allows the fainter areas to show while holding back the brighter areas. The result is saved as a 16-bit TIFF file using the screen stretch, so the image is saved exactly as seen in Maxim DL. The same is done for the Green and Blue channels. I then transfer over to Photoshop CS2, open all three (red, green and blue) TIFF files and combine them as an RGB file; now I have my first semblance of a colour image.
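Here is a hedged Python/NumPy sketch of what those combine and stretch stages amount to, run on synthetic data. The sigma threshold, frame sizes and pixel values are all invented for the example, and Maxim DL's actual implementations will differ in detail:

```python
import numpy as np

rng = np.random.default_rng(1)
# Ten synthetic, already-calibrated red subframes: constant "sky" plus noise,
# with a fake satellite trail dropped into one frame to give the clip a job.
red = 500 + 20 * rng.standard_normal((10, 64, 64))
red[3, 10, :] = 60000                        # the "satellite trail"

# Sigma-clip combine: reject pixels far from the per-pixel median, then average.
med, std = np.median(red, axis=0), red.std(axis=0)
mask = np.abs(red - med) < 2.0 * std         # 2-sigma threshold (arbitrary)
stacked = np.nanmean(np.where(mask, red, np.nan), axis=0)  # trail rejected

# Log stretch: lifts the faint areas while holding back the bright ones,
# then rescale to 16 bits, as when saving the screen stretch to TIFF.
stretched = np.log1p(stacked - stacked.min())
channel16 = (65535 * stretched / stretched.max()).astype(np.uint16)

# After doing the same for green and blue, the colour combine is just
# stacking the three planes into one RGB array (R=G=B placeholder here).
rgb = np.dstack([channel16, channel16, channel16])
```

A plain median combine rejects trails too; a sigma-clipped mean keeps more of the real signal while still throwing out the outliers, which is the usual reason to prefer it when you have enough subframes.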
Without going into too much detail here (as it would amount to many pages), my workflow in Photoshop is as follows:
1. Adjust the black point using levels
2. Curves to bring out the detail in the subject
PLEASE NOTE - the really important thing to remember here is that with levels you must not allow the left pointer to encroach into the black "peak" (you can leave the other two pointers alone at this time), and with curves the curve must not be allowed to touch any part of the box, otherwise you will "clip" the data (lose it completely). A small numerical illustration of this clipping appears after this list.
3. Readjust the black point with levels
4. Repeat steps 1 to 3
5. You will know when you have gone too far here, as the image will start to look burnt out in the lighter areas and noise will become evident (the picture becoming grainy)
6. Because of the light pollution where I live I frequently get gradients in my images so I use Gradient Xterminator (a plugin available for Photoshop) to take care of this.
7. I then move over to another plugin, "Noel's tools", and may use local contrast enhancement to increase the contrast in the main subject, and also increase star colour to help bring out the colour of the stars a bit better
8. Colour balance to bring the histograms for each channel into line with each other to get a better colour balance in the image
9. Filter - unsharp mask to sharpen up the detail in the image.
10. I also use the noise reduction in Noel's tools if the picture has started to become a bit grainy.
11. Finally, I zoom in to at least 100% and use the clone stamp tool to tidy up the image by removing any hot pixels, cosmic ray hits, etc.
Save the resulting image as a maximum quality JPEG for posting.
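Since the clipping warning in the note above is the easiest thing to get wrong, here is a tiny Python/NumPy illustration of it. The pixel values and black-point settings are invented for the example; curves clipping at the white end behaves the same way in reverse.

```python
import numpy as np

# A levels black point moved into the faint histogram "peak" destroys data.
img = np.array([300, 320, 350, 900, 5000, 20000], dtype=float)  # 16-bit-style

def levels_black_point(x, black, white=65535.0):
    # pixels at or below `black` are forced to 0 -- clipped for good
    return np.clip((x - black) / (white - black), 0.0, 1.0) * 65535.0

safe = levels_black_point(img, black=250)     # below the faint peak: fine
clipped = levels_black_point(img, black=360)  # inside the peak: data lost
print(safe.astype(int))      # faint values survive, just darker
print(clipped.astype(int))   # the three faintest pixels collapse to 0
```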
This may sound very complicated, but to give you an example of the time involved: my close-up image of NGC 5907 taken with the Starlight Xpress H9 and Takahashi BRC-250 (the image in question in Mike's original request) took between 1.5 and 2 hours from stacking the original dark-subtracted raw subframes in Maxim DL to the end product.
Although my imaging and processing techniques are not ideal, they suit me because of the time constraints and other responsibilities that I have, such as taking my beloved dogs for walks, housework, work, etc.
Just out of interest, here are images of a single dark-subtracted Red, Green and Blue subframe before stacking and processing, and also the end result after stacking all of the subframes for each channel, colour combining them and processing.
I hope this has been of some use
Best wishes
Gordon