Monday, November 3, 2014

The Great SLS Debate

On Sunday, November 16 at 4 PM Eastern Standard Time, the globally popular online program The Space Show will host a live debate regarding the merits of NASA's controversial rocket known as the Space Launch System, a.k.a. SLS.  Arguing for SLS will be John Hunt (a former military aerospace professional), while the contrary position will be argued by Rick Boozer (Space Development Steering Committee member, astrophysicist and author of the book The Plundering of NASA).  Dr. David Livingston will be the host and moderator of the debate.

Its supporters claim that SLS will allow NASA to return to the Moon and go to points beyond. Its detractors claim it will actually prevent the achievement of that goal (wasting many billions of dollars of taxpayer money in the process) and thus refer to SLS as “The Rocket to Nowhere”.  It is the latter faction’s contention that there are more modern, more economical, safer and easier to implement alternatives to SLS for American deep space travel.

Tune in to see which side makes the best case about how our nation’s future in space should be conducted.  A recording of the debate will be available for free download a couple of days after it airs.

Friday, September 26, 2014

Photometry with AIP4WIN: a Tutorial – Part 3
Flat-field preparation

By R.D. Boozer
As mentioned earlier, flat-field frames are created to remove the bad effects of any optical defects in the telescope/camera system.  The two most common methods of shooting flat-field frames are 1) shooting a twilight sky and 2) using what is called a light box (a simple artificially lit box that can be constructed by the user).  Both of these methods are described extensively in Berry and Burnell’s book along with instructions on how to implement them; therefore, there is no need for me to cover those details here.  Instead, I will cover what the observer is to do with the flat-field frames after they are shot.

Before discussing the implementation of flat-field frames, it should be mentioned that there were several sets of supplied stellar images, with each set taken with a different optical filter.  Those were V, R and I filters, which are centered on 550 nm, 650 nm, and 800 nm wavelengths respectively. (Warner 24)  Magnitudes measured with the V filter will roughly correlate to traditional visual magnitudes.  Magnitudes with the R or red filter are measured in the longest wavelengths visible to the human eye, whilst I filter magnitudes are obtained in near infrared light.  Later, for reference, there will also be mention of U and B filters centered on 365 nm and 440 nm wavelengths respectively, with U corresponding to near ultraviolet and B being shorter wavelength visible light that appears primarily blue. (Warner 23)  There may also be mention of a color index, which is the magnitude of a star measured in one filter subtracted from the same star's magnitude in another filter. (Warner 29)  The reason for obtaining images in such varying wavebands will eventually be explained.

Any flat-field frame that is to be used on a stellar image must have been exposed through the same filter as the image to which it is to be applied.  Because most of the supplied stellar images were taken using the V filter, the creation of a master flat-field frame will be demonstrated using V filter raw flat-field frames.

Another point that needs to be made is that all flat-field frames should ideally be exposed with an integration time such that most of the pixels contain roughly half the saturation value for a pixel.  This level of exposure ensures that enough light has been absorbed to give a strong signal, but not one so close to saturation that the light response in the image is no longer linear. (Berry and Burnell 182)  Given this fact, I checked each of the raw flat-field frames to see if they met this important criterion before starting the calibration setup.  The explanation of the flat-field part of the calibration setup will continue after a description of how that half-saturation evaluation was done.

According to the instructions supplied for the assignment, the saturation level of the CCD sensor used is 65k, which for computer equipment such as a CCD chip is 2^16 or 65536.  Given this statement, I can attest to the following facts from my previous career as a software engineer.  Since a reading of 0 is always considered to be the lowest value in the range, the actual value range for a pixel of the CCD chip would be 0 to 65k-1 or 0 to 65535 ADU (this is still 65k possibilities).  So nothing will register higher than 65535 ADU, and thus this value indicates absolute saturation.  The following screen capture image illustrates how each raw flat-field frame was checked to see that the pixels it contained had values somewhere in the vicinity of 65k divided by 2, or 32768 ADU.

First, the raw flat-field frame was loaded via the File menu as one would load any other image.  Once the image was loaded, the Pixel Tool option under the Measure menu was invoked.  The Rectangle from corner radio button is clicked so that the user can drag the mouse to define the area in the image that he/she wants to check, excluding the unexposed vertical black strips to either side of the actual exposed image.

Figure 10: Making sure a flat frame’s pixels are near half saturation.

The minimum value of 17845 comes from one of the occasional stray dark pixels that have below-normal sensitivity and can essentially be ignored, especially since this is one of the things the flat-field frame was created to compensate for.  What is important is the median ADU value of 35727, which indicates that approximately half of the pixels in the image have an exposure above that value and half below.  Considering that fact, along with the maximum pixel ADU value in the image of 39262, this appears to be a fairly well exposed flat-field frame.  Remember, a value only roughly near the ideal of one half saturation is necessary; therefore, this flat frame is adequate.  Indeed, when I checked every raw flat-field frame for every filter in this manner, all of them were exposed at a level adequately near the half saturation value.  Again, all of this was done before the Calibration Setup Tool was invoked.
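For readers who like to see the arithmetic, the half-saturation check amounts to only a few lines of numpy.  This is a sketch, not AIP4WIN's code: the frame below is a synthetic stand-in for a real FITS image, and the 25% tolerance on the median is my own rough rule of thumb, not a figure from the software or the cited sources.

```python
import numpy as np

SATURATION = 2**16          # 65536 possible levels: 0..65535 ADU
HALF_SAT = SATURATION // 2  # 32768 ADU, the ideal flat-field exposure level

def flat_stats(frame):
    """Return (min, median, max) ADU of the selected region."""
    return int(frame.min()), int(np.median(frame)), int(frame.max())

def near_half_saturation(frame, tolerance=0.25):
    """True if the median ADU is within +/- tolerance of half saturation."""
    return abs(np.median(frame) - HALF_SAT) <= tolerance * HALF_SAT

# Synthetic flat frame roughly matching the median reported above
rng = np.random.default_rng(0)
frame = rng.normal(35727, 500, size=(512, 512))

print(flat_stats(frame))            # min/median/max in ADU
print(near_half_saturation(frame))  # True: 35727 is close enough to 32768
```

The same two functions, pointed at the pixel region selected with the Rectangle from corner tool, would reproduce the judgment made by eye in Figure 10.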

Now continuing the discussion of the Calibration Setup where it was left off, the Flat tab is clicked and produces what is shown in the next illustration.

Figure 11: The default appearance of the Flat-field frame tab.

Clicking the Select Flat Frame(s) button begins the selection of the raw flat-field frames for the production of a master flat-field frame.  Again, the actual selection process is similar to the one followed during the selection of raw bias frames; therefore, that detail will not be shown.  All of the V filter flat-field frames were of 25-second exposure and, as shown earlier, this was a sufficient amount of integration time to fill the pixels to approximately half saturation.  After the raw flat-field frames are chosen, the Flat tab appears as shown below:

Figure 12: The raw flat-field frames have been selected.

It is normally considered optimal to shoot at least 16 raw flat-field frames to obtain the highest quality master flat-field frame. (Berry and Burnell 182; AAVSO 3.4)  Only 13 V filter raw flat-field frames were supplied.  There could also have been an equal number of what are called flat darks, which might improve the final images.  These are dark frames taken with the same integration time as the regular flat-field frames, which are subtracted from the flats to remove their thermal signal.  With such a set of 32 frames (16 flats + 16 flat darks), typical master flat-field frames have a signal to noise ratio of around 600. (Berry and Burnell 182)  An SNR of 500 or better is needed to obtain 0.01 magnitude accuracy. (AAVSO 3.4)  But with the paucity of flats that were supplied, it will be good fortune if the SNR of the master flat is half of that.  However, Gliese 876 is a special case where planetary transit photometry may not require such extremely precise magnitude resolution, even under only fairly transparent sky conditions, for reasons that will be explained later.
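The square-root-of-N scaling behind these SNR figures can be sketched in a few lines.  In the snippet below, the single-frame SNR is back-calculated from the cited figure of roughly 600 for a 32-frame master; it is an illustrative assumption, not a measured value for these particular frames.

```python
import math

def master_flat_snr(single_frame_snr, n_frames):
    """Uncorrelated noise averages down as sqrt(N), so SNR grows as sqrt(N)."""
    return single_frame_snr * math.sqrt(n_frames)

# Assumed single-frame SNR, back-calculated from ~600 for 32 frames
single_snr = 600 / math.sqrt(32)   # roughly 106

print(round(master_flat_snr(single_snr, 13)))  # ~382 for the 13 supplied flats
print(round(master_flat_snr(single_snr, 16)))  # ~424 for the recommended 16
```

Under these assumptions, 13 flats fall short of the SNR of 500 needed for 0.01 magnitude accuracy, which is the point being made above.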

Had dark flats been supplied, the Subtract Dark Flat box would be checked and the Select Flat Dark(s) button clicked to allow the selecting of the raw flat darks.  Instead, the user goes directly to clicking the Process Flat Frame(s) button to make the software automatically create the master flat-field frame via averaging of the raw flat-field frames.  A result similar to what you see below presents itself after that action.

Figure 13:  The master flat-field frame has been created and may be saved.

Of course, the user may now click the Save as Master Flat button to make a permanent copy of the newly generated master flat-field frame.  The Applied Flatfield Correction box was automatically checked and indicates that any stellar image calibrated by AIP4WIN will have the master flat-field frame automatically applied to it.  If for some reason the user decides he/she does not want the master flat applied, the box may be unchecked.  But for photometry, you definitely want it applied.
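The averaging that Process Flat Frame(s) performs, along with the optional flat-dark subtraction, amounts to something like the following numpy sketch.  The helper make_master_flat is hypothetical, written only to illustrate the arithmetic, and the toy frames are invented values.

```python
import numpy as np

def make_master_flat(raw_flats, dark_flats=None):
    """Average raw flat-field frames into a master flat.

    If matching flat darks are available, their average is subtracted
    first, as the Subtract Dark Flat option would arrange."""
    master = np.mean(np.stack([f.astype(float) for f in raw_flats]), axis=0)
    if dark_flats is not None:
        master -= np.mean(np.stack([d.astype(float) for d in dark_flats]), axis=0)
    return master

# Three toy 2x2 "raw flats" with identical values
raw = [np.array([[30000, 31000], [32000, 33000]])] * 3
print(make_master_flat(raw))
```

Averaging identical frames of course just returns the same frame; with real frames the averaging beats down the random noise in each raw flat.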

Understanding how the master flat-field frame is applied to the image is important.  But before this is discussed, the reader may be wondering, “What purpose do the Median Combine and Normalize Median Combine radio buttons serve?”

Some observers contend that there is a way to produce a better flat-field frame than by using a uniform artificial light source and/or flat dark frames taken at twilight.  Instead of a series of exposures from the two aforementioned relatively uniform light sources, they take a number of exposures of different areas of the dark night sky whilst making sure that none of the exposures contain a bright object.  A median combine of those exposures is then done.  Since disparate parts of the sky are being merged, any particular stellar object in a frame will be removed by the median operation since it will not appear in other frames.  Proponents of this method say it gives a more uniformly illuminated master flat-field frame than traditional methods. (Brown 1)
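The star-rejecting property of the median that this method relies on is easy to demonstrate with toy frames; the numbers below are invented purely for illustration.

```python
import numpy as np

# Three dithered "sky" frames: uniform background of 100 ADU, with a
# bright star (5000 ADU) landing on a different pixel in each frame.
frames = [np.full((3, 3), 100.0) for _ in range(3)]
frames[0][0, 0] = 5000
frames[1][1, 1] = 5000
frames[2][2, 2] = 5000

# Pixel-by-pixel median: at any given pixel the star appears in only
# one of the three frames, so the median rejects it and the uniform
# background survives.
master = np.median(np.stack(frames), axis=0)
print(master)  # every pixel is 100.0
```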

According to AIP4WIN's built-in help documentation, a normalized median combine is used when the only flat-field frames shot are twilight sky flats, whose overall brightness varies from frame to frame as the sky darkens.  In this case, the flats are scaled to produce a common average value and then median combined.
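A minimal numpy sketch of that scale-then-median operation follows.  The function normalized_median_combine is a hypothetical stand-in for what the software does internally, and the toy frames are invented.

```python
import numpy as np

def normalized_median_combine(frames):
    """Scale each frame to a common average level, then median combine."""
    stack = np.stack([f.astype(float) for f in frames])
    target = stack.mean()                       # common average value
    scales = target / stack.mean(axis=(1, 2))   # per-frame scale factor
    return np.median(stack * scales[:, None, None], axis=0)

# Two toy frames with the same spatial pattern at different sky levels
a = np.array([[100.0, 200.0], [300.0, 400.0]])
frames = [a, 2 * a]
master = normalized_median_combine(frames)
print(master)
```

Because both frames share the same pattern, the combine recovers that pattern (scaled to the common average) even though one frame was twice as bright as the other.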

But once a master flat-field frame has been generated, how does AIP4WIN apply it to the image during calibration?  After the bias-removed dark frame has been subtracted from the stellar image, an operation is done on each pixel in the stellar image.  This operation consists of dividing the value of a stellar image pixel by a ratio that is equal to the ADU value contained in the corresponding pixel of the flat-field frame divided by the average pixel value of the central region of the flat-field frame.  The reason why the average of only the central region is used, rather than the average of the whole flat-field frame, is that the values at the outer edges of the field are assumed to be consistently lower than the central pixels due to vignetting, and that vignetting is part of what is to be eliminated. (Berry and Burnell 189)

Why does this procedure work?  If the above-stated ratio were obtained from a perfectly flat frame, then the ratio of a pixel's value to the central average would always be one, and dividing by it would change nothing.  However, because of vignetting, inhomogeneous pixel sensitivity, etc., a value of one for any pixel is seldom the case.  In the instance of a pixel with low sensitivity, or one that is shaded by a dust particle on the camera's optical window, the pixel value in the master flat will be low.  Thus, the ratio for that pixel will be less than one, and the value of the corresponding pixel in the stellar image will be boosted upward to what it should be when it is divided by the ratio. (Berry and Burnell 190)
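The boost can be checked numerically.  In the sketch below, apply_flat is a hypothetical helper that follows the pixel-over-central-average recipe described above; the 50-pixel central region size and the dust-shadowed pixel values are arbitrary choices for illustration.

```python
import numpy as np

def apply_flat(image, master_flat, central_size=50):
    """Divide each image pixel by (flat_pixel / central_mean_of_flat)."""
    h, w = master_flat.shape
    cy, cx = h // 2, w // 2
    half = central_size // 2
    central = master_flat[cy - half:cy + half + 1, cx - half:cx + half + 1]
    ratio = master_flat / central.mean()
    return image / ratio

# A pixel shadowed by dust records half the light of the central level
flat = np.full((100, 100), 32000.0)
flat[10, 10] = 16000.0                 # ratio there works out to 0.5
image = np.full((100, 100), 1000.0)
corrected = apply_flat(image, flat)
print(corrected[10, 10])  # boosted from 1000 to 2000
```

The shadowed pixel, whose flat ratio is 0.5, has its image value doubled back to where it belongs, while pixels of normal sensitivity (ratio one) are left untouched.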

In the next instalment of this series of articles, I will reveal the visual appearance of the bias, dark and flat-field frames and how to use the software to apply them for calibrating an astronomical image.

Berry, Richard and James Burnell, Handbook of Astronomical Image Processing, (2006) Willmann-Bell, Inc., Richmond, Virginia, USA
Brown, Michael, Flat Fielding Dithered Data,  (1996)
Warner, Brian D., A Practical Guide to Light Curve Photometry and Analysis, (2006) Springer Science + Business Media, Inc, New York, New York, USA

Copyright 2014 R.D. Boozer

Friday, September 19, 2014

Major new endorsement for The Plundering of NASA: an Exposé

Received this email from O. Glenn Smith (former manager of Space Shuttle systems engineering at NASA's Johnson Spaceflight Center):

"Hi Rick,

I just finished reading your great book.  Good going!  We are really swimming upstream, perhaps until the next election.  If you have not seen it, attached is one of my recent articles in Space News."


For those interested in reading the article of which Glenn spoke, here is a link:

It is most gratifying when significant figures in the space community have complimentary things to say about my work.

Following are some of the other kudos I have received:

"I hope you continue as a voice against pork at NASA"
Lori Garver, (former Deputy Administrator of NASA) commenting to the author about The Plundering of NASA: an Exposé

"The Plundering of NASA offers an insightful analysis of the agency's struggle with external forces tending to distort a clear, long-term US vision for space exploration. In crisp and concise prose, it calls into question NASA's current "flight plan" for reaching its stated destination, and makes the case for using the game-changing technologies needed for a truly sustainable and robust human exploration of space. An enlightening read for those who are not in the field and food for thought for those who are. " 
Dr. Franklin Chang-Diaz, former Space Shuttle astronaut, CEO of Ad Astra Rocket Company and inventor of the VASIMR electric rocket drive

"The author does an excellent job of exposing how a few individuals in the legislative branch of our government are impeding the progress of our space program. This is most evident with the Space Launch System, a project to develop a heavy lift launch vehicle in the same class as the Saturn V that sent astronauts to the moon. It is also true to a lesser extent with the Orion spacecraft that is designed for human missions beyond earth orbit. Both of these projects are based on flawed designs (more on this later). Worse yet, these projects (especially the Space Launch System) are consuming such a large portion of NASA's budget that other vitally important work is not getting accomplished. Of course the book's author is not the only one who has been pointing out these things, but I was glad to see it laid out in detail in book form. I hope that members of congress read this book and that it helps influence legislative policy."
Gerald Black, 40 year veteran aerospace engineer who worked on the ascent engine of the Apollo lunar lander for NASA

"The Plundering of NASA: An Exposé by R.D. Boozer is the indispensable source for a wide range of information about the SLS controversy and the machinations within NASA, the Congress, aerospace companies and two Administrations that have left us in the terrible mess we are in today. It is the most comprehensive single source for both SLS-Orion program information and context. I have written several articles about the SLS issue and found the book very valuable and interesting."
John K Strickland, Jr.
member National Space Society Board of Directors
Advocate: Space Frontier Foundation

Thursday, September 18, 2014

Photometry with AIP4WIN: A Tutorial – Part 2
Bias and dark frames

By R.D. Boozer

In this second part of the tutorial, I cover the first processes that I had to perform on data supplied by the University of Tasmania: the creation of a master bias frame and a scalable master dark frame.

Before the images were calibrated, an optimum bias frame, dark frame and flat-field frame needed to be generated.  AIP4WIN has a utility for performing these operations called the Calibration Setup Tool.  Under the Calibrate menu, the Setup option is picked to yield the dialogue box shown below.

Figure 1: The default appearance of the Calibration Setup Tool dialog box.

The dialogue box’s default setting is to follow the Basic calibration protocol, in which the only calibration file produced is a master dark frame generated from multiple raw dark frames, either by averaging the raw frames pixel by corresponding pixel or by taking the median of those pixels.  This method will not bring the background noise level down to the degree needed for accurate stellar photometry.  A drop-down list box can be used to invoke more adequate options, as shown in the next illustration.

Figure 2: Selection of a calibration protocol.

Two other options are presented in the drop-down list box.  The Standard protocol will automatically create a master dark frame from a series of raw dark frames and a master flat-field frame from a series of raw flat-field frames, but it does not apply any bias frame compensation and is thereby not suitable for photometry.  The only option usable for photometric purposes is the Advanced protocol, which enables every calibration step.  The following figure shows the appearance of the dialogue box after the user chooses the advanced option.

Figure 3: Selecting the type of bias compensation.

There are four tabs offering complementary functionality in the dialog box: the Bias frame tab, the Dark frame tab, the Flat-field frame tab, and the Defect frame tab.  Under the first tab, three bias-related choices are offered: no bias subtraction at all, subtraction of a user-defined number of ADUs from each pixel (only of use if no bias frames are available), or subtraction of either a raw or master bias frame.  Since precision photometry is to be done, the radio button for the third option, marked Use Bias Frame, is selected, followed by clicking the Select Bias Frame(s) button.  At this point, the dialog shown in the next illustration appears.

Figure 4: Selecting bias frames.

Since eleven bias frames were supplied, one master bias frame will be made from all of them.  To begin the process of making the master bias frame, the user highlights the filenames and clicks the Open button.  This action brings the user back to the Calibration Setup dialog box as it appears below.

Figure 5: Bias frames have been selected.

The reader will note that the dialogue box now indicates that 11 raw bias frames have been loaded.  At this point the user needs to decide how the raw bias frames are to be combined into a master bias frame.  If pixels are averaged with their corresponding pixels in the other frames, the readout noise typically decreases with the square root of the number of frames averaged. (Berry and Burnell 169-170)  Given this fact, averaging as many bias frames as possible is usually the best way to go.  One exceptional case is when the camera is being operated in an electrically noisy environment.  Under such circumstances median combining of the raw bias frames should be used because any large power spikes will manifest themselves in a master bias frame that was obtained from an averaging operation. (Berry and Burnell 170; AAVSO 3.2)
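The square-root-of-N reduction in readout noise is easy to demonstrate with simulated bias frames.  The 500 ADU bias level and 10 ADU read noise below are invented for illustration and are not properties of the camera used in this tutorial.

```python
import numpy as np

rng = np.random.default_rng(1)
N_FRAMES, READ_NOISE = 11, 10.0   # 11 bias frames, assumed 10 ADU read noise

# Simulated raw bias frames: a fixed bias pattern plus random readout noise
bias_pattern = np.full((256, 256), 500.0)
raws = [bias_pattern + rng.normal(0, READ_NOISE, bias_pattern.shape)
        for _ in range(N_FRAMES)]

# Average-combine into a master bias, as the Average Combine option does
master_bias = np.mean(np.stack(raws), axis=0)

# Residual noise in the master should be roughly READ_NOISE / sqrt(N)
print(round(np.std(raws[0] - bias_pattern), 1))      # ~10 ADU in one raw frame
print(round(np.std(master_bias - bias_pattern), 1))  # ~3 ADU in the master
```

With 11 frames the residual noise drops by a factor of about 3.3, which is why averaging as many bias frames as possible is usually the best way to go.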

I did a visual check of the given raw bias frames that revealed no indication of power line spiking, so I decided that averaging would be used.  Clicking the Average Combine radio button causes ADU values of corresponding pixels in the frames to be automatically averaged.  Finally, the user clicks the Process Bias Frame(s) button to create the master bias frame and the view that follows is seen.

Figure 6: The master bias frame may now be saved.

As soon as the processing has been completed, the Save as Master Bias button becomes enabled so that the master bias frame can be saved as a single file.  After the master bias frame is saved, it can be used in the future instead of going through the bias frame combining process again.  Notice the check box marked Subtract Bias.  As long as that box is checked, AIP4WIN will automatically subtract the master bias frame from any dark frame or image when calibration occurs.  If the user decides at any point that he/she does not want automatic subtraction of the master bias frame to occur, the option may be unchecked.

Now the master dark frame is to be created.  Clicking the tab marked Dark will start this operation.

Figure 7: The default appearance of the Dark frame tab.

The creation of the master dark frame begins with clicking the Select Dark Frame(s) button. Since the selection of the raw dark frames that are to be combined is similar to the procedure followed for the earlier described selection of raw bias frames, this operation will not be pictorially illustrated.

Because a temperature-controlled camera was used, a master dark frame was created using raw frame files that were not necessarily shot near the same time as the research images, nor did they have the same integration time as the research images.  The standard technique (automatically done by the software) in this situation is to:
1) Get the number of ADU counts per second for each pixel in each frame by dividing each pixel's count by the frame's integration time in seconds.
2) Produce the master frame by averaging the ADU counts per second per pixel (calculated in step 1) across all of the dark frames.

After those two steps have been executed by the software, the resultant master frame can then be scaled to calibrate a stellar image of any exposure by multiplying the dark frame pixel values by the image integration time. (Walker 29)  With AIP4WIN, the above series of steps is chosen for automatic implementation when the user clicks the Automatic Dark Matching radio button.  The reader should not be misled by the Constant Dark Scaling radio button: that option is only used when a user wants to use a single raw dark frame as the master and manually inputs a scaling factor for it in the input box below that button.  Ignore it.
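The two steps above, plus the later scaling, can be sketched in numpy as follows.  The helper names are hypothetical and the constant 2 ADU/s dark current is invented; real dark frames vary pixel to pixel.

```python
import numpy as np

def make_scalable_master_dark(frames, exposures):
    """Average per-second dark rates so the master can be scaled later.

    frames: bias-subtracted raw dark frames (ADU)
    exposures: matching integration times in seconds
    """
    rates = [f.astype(float) / t for f, t in zip(frames, exposures)]
    return np.mean(np.stack(rates), axis=0)   # ADU per second per pixel

def scale_dark(master_rate, image_exposure):
    """Scale the master dark rate to an image's integration time."""
    return master_rate * image_exposure

# Toy darks with a constant 2 ADU/s dark current at three exposure times
exposures = [5.0, 35.0, 180.0]
frames = [np.full((2, 2), 2.0 * t) for t in exposures]
master = make_scalable_master_dark(frames, exposures)
scaled = scale_dark(master, 25.0)
print(scaled)  # 50 ADU everywhere for a 25-second image
```

Because every frame encodes the same 2 ADU/s rate, the master comes out at 2 ADU/s per pixel and scales cleanly to any image exposure.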

As was mentioned in Part 1 of this tutorial, raw dark frames were supplied that had 5, 6, 35 and 180 second integration times.  It should be mentioned that there seems to be some disagreement about which frames to combine.  One source I found says only raw dark frames that have a longer integration time than the stellar images should be used to make the scaling dark frame. (Berry and Burnell 174-176)  Another source implies that all of the dark frames should be used. (Walker 29)  I decided to resolve the matter to my own satisfaction by calibrating some stellar images with each method and comparing the signal-to-noise ratios (SNRs) of the stars in the final fully-processed images.  Because including all of the dark frames in the making of the scaling master frame yielded slightly better SNRs, I chose that method.  The next figure shows what is seen after these files have been selected and the Automatic Dark Matching radio button has been chosen.

Figure 8: Dark frames of varying integration times have been chosen.

Clicking the Process Dark Frame(s) button will start the process of automatically creating the scalable master dark frame via the aforementioned process.  The dialogue box will then look similar to what you see below.

Figure 9:  The newly created scalable dark frame can now be saved.

The Save Master Dark button has become enabled and should be used to save the scalable dark frame as a file for later use.  The Subtract Dark Frame check box was automatically checked to indicate that, during the current run of the AIP4WIN application, an image calibration would automatically scale the master dark frame and apply it to the image after the master bias frame has been applied.

Part 3 will cover the creation of the scalable flat field frame.


Berry, Richard and James Burnell, Handbook of Astronomical Image Processing, (2006) Willmann-Bell, Inc., Richmond, Virginia, USA

Walker, E. Norman, CCD Photometry, (2007)

Copyright 2014 R.D. Boozer