Getting Results: BYU-RET Week 4

M67 through a narrow-band hydrogen-alpha filter from my .fits files

With my ability to use the IRAF and DAOphot software improving, I am finally ready to analyze the data I’m getting out. This was my fourth week at BYU; almost halfway done and just getting to where I feel competent enough to do some decent analysis. The learning curve has been steep.

The Learning Curve of Science:

There is a lengthy learning curve for those who wish to become scientists, and it is this long apprenticeship that discourages too many promising, bright students from entering the profession. Usually one has to earn an undergraduate degree that involves taking hard classes (differential calculus and quantum mechanics, for example) and leaves precious little time for anything else. Then there are the master's and doctoral degrees, in which the prospective scientist becomes something like a journeyman – able to do her or his own work under supervision – until granted a PhD, which is the license to practice science. Most scientific specialties require additional experience, so many PhDs go on to post-doc research before finally achieving the level of independence and expertise that will command the respect of their peers.

Adding it all up, that’s about 8-10 years of post-high school education and training. Who wouldn’t be discouraged? It takes a single-minded dedication and commitment that’s hard to maintain (and hard to afford). I have thought about getting a PhD myself, but it seems pretty daunting. I’d have to retake some college classes (especially calculus) and my brain is not as supple as it used to be. I have a family to support, which would be hard to do as a teaching assistant or with a graduate student fellowship. But I also want to do official research on my ideas about science education, and no one will take me seriously until I add a few more titles after my name, no matter how many blog posts I write.

We live in a society that is totally dependent on technology (if you don’t believe this, try living without any electronic technology for a day and see how easy it is. And by the way, that includes driving your car, since cars have computers running the fuel injection system). A vast majority of the population uses technology without understanding how it works or how it is made. They couldn’t recreate it if their lives depended on it. So we live in a technocracy; that is, those who understand and control the technology are those with the real power. Just look at how quickly Congress caved when they tried to pass SOPA in 2012. All it took was Google and Wikipedia protesting for one day, and Congress completely backed off.

To get a .txt file into Excel, you must tell it which row the data starts on, in this case Row 45.

So here is the crux of the problem: we desperately need more scientists and engineers, but the long process required to train them is unappealing to most high school students. It’s not that they’re not bright enough. They simply don’t see that the rewards are worth the cost. As teachers we aren’t doing a good enough job showing them how profoundly rewarding a life in science can be. Perhaps if science teachers were themselves scientists, they might pass on the excitement of discovery. Better yet, if students could participate in real science as early as high school or even middle school, they might catch the vision of what they could become. That is the purpose of the Research Experiences for Undergraduates (and Teachers) program that I’m part of here at BYU this summer. So that I can tell my students it’s all worth it, and if I can do it, so can they.

This week it all began to become worth it for me as I saw my work yielding results. But before I got to that point, I had to overcome one more hurdle.

The Header of the original .als file, which has 45 rows before the actual data starts.

Getting the Data into Excel:

The end result of the lengthy DAOphot procedure was a list of stars, their X and Y coordinates in the .fits file, and their magnitudes, adjusted for the seeing conditions and corrected for saturated or overlapping stars. It came out as an .als file. Somehow, in order to compare the results, I had to get it into a spreadsheet.

The second step to get .txt files into Excel is to set the column breaks with tab markers.

Microsoft Excel can bring in text files as data if the numbers are separated by commas, spaces, or tabs. First, I double-clicked on the .als file, which opened it up in MS Word. I re-saved it as a .txt file from Word, then opened Excel and chose “File-Open” from within the program. Excel then walks you through the conversion process. I had to tell it what row the data began on (most files have headers or column labels). In this case, the actual data begins on Row 45. Then I had to set tab markers for the breaks between the data, making sure to leave enough room so that all the numbers for each field would fit inside the tabs (for example, the star numbers started in single digits, but by the end of the file were in the hundreds, so I had to leave room for at least three digits in that column). Once the tabs were in the right places, the data imported into a raw Excel spreadsheet. But it still needed quite a bit of cleaning up.
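For anyone who would rather skip the Word detour, the same import can be done in a few lines of Python. This is only a sketch: the filename is made up, and the 44 skipped lines match my particular file (its data starts on Row 45), so adjust it to your own header.

    # Hypothetical filename; my .als header ends just before Row 45.
    with open("m67_halpha_narrow.als") as f:
        lines = f.readlines()

    header, data = lines[:44], lines[44:]    # keep the header and data separate
    print(data[0])                           # first star record (it wraps onto data[1])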

What the raw data looks like once it is in Excel. I had to delete the right two columns, then sort the data by Star Number and delete the interlaced rows.

Cleaning Up the Data:

In the case of .als files, each record has about ten fields, which would not all fit on one line, so every star record wrapped around onto a second line. That made two rows per record, but I only needed the first row. Fortunately, the second row started with a blank cell in each case, so it was a simple matter of selecting all the data and sorting it by the first column (star number), then deleting all the second rows, which were now at the bottom of the file. I also deleted two columns of data at the right side that I didn’t need. This left the following fields: Star Number, X-position, Y-position, Magnitude, and Error.
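Here is a hedged sketch of the same clean-up done in code, continuing from the snippet above: keep only the first line of each two-line record and its first five fields (star number, X, Y, magnitude, and error). The INDEF check is just a precaution in case a magnitude failed to fit.

    records = []
    for i in range(0, len(data), 2):             # records come in pairs of lines
        fields = data[i].split()
        star, x, y, mag, err = fields[:5]
        if "INDEF" in (mag, err):                # skip any star without a valid fit
            continue
        records.append((int(star), float(x), float(y), float(mag), float(err)))
    print(len(records), "stars kept")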

Some stars are too faint to process, so gaps are left in the number sequence. To accurately compare the same stars across filters, the star number must be lined up and the gaps filled with blank rows.

One final problem had to be fixed: the process of doing photometry with DAOphot identifies a list of stars, but some are too faint or too close to the edge of the frame for accurate results and are rejected from the final calculations. They are saved out as a separate “reject” file. In my spreadsheet, they showed up as gaps in the star numbers. Since I would be comparing the same stars through different filters and at different times of the night, I had to be able to match each star with the same star in every field, and that meant filling in the gaps. I scrolled down, looking for discontinuities in the numbers by comparing the spreadsheet row number with the star number. When a gap was found, I inserted a new row and filled in the missing number.
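The gap-filling can also be automated. A small sketch (assuming pandas and the records list from the snippets above): index the table by star number and reindex over the full range, so the rejected stars become blank rows and the same star lands on the same row in every filter.

    import pandas as pd

    cols = ["star", "x", "y", "mag", "err"]
    table = pd.DataFrame(records, columns=cols).set_index("star")
    table = table.reindex(range(1, table.index.max() + 1))   # blank rows fill the gaps

Comparing two filters is then just a matter of joining two such tables on the star-number index.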

First Results: Magnitude Versus Error

The first frames I used DAOphot on were four frames of M67 taken on April 1, 2012. I chose this because it was the first folder on my data drive, but it wound up being a good choice because this is a well-studied open cluster that is quite old, about four billion years. I did two frames taken with a narrow Hydrogen Alpha filter and two frames done with a wide-band Hydrogen Alpha filter.

M67 Magnitudes vs. Error for three fields using a narrow-band H-alpha filter. Low magnitude stars (brighter) are saturated. High magnitude stars are too dim for accurate measurement. Middle magnitude stars with high errors could be something else entirely . . .

When I consulted with Dr. Hintz, he suggested I check how good my data was by comparing the star magnitudes with the errors. This would give me an idea of the magnitude at which the errors became too great, where the stars were too dim to measure accurately. I sorted the data by magnitude, then created a chart comparing the magnitudes with the errors. The result is the chart shown here. I also did the same comparison with the other frames for the night.
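The same chart can be drawn with matplotlib from the table built in the earlier sketches; this is only an illustration of the comparison, not the actual Excel chart shown here.

    import matplotlib.pyplot as plt

    plt.scatter(table["mag"], table["err"], s=8)
    plt.xlabel("Instrumental magnitude")
    plt.ylabel("Magnitude error")
    plt.title("M67, narrow-band H-alpha: magnitude vs. error")
    plt.show()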

There was an interesting pattern to the data: the very lowest magnitude stars (the brightest ones) had fairly high errors, probably because they were saturated or covered too many pixels for the point spread function to measure their magnitudes accurately. But once the first 5-6 brightest stars were charted, the rest fell into a nice curve that rose gradually for about seven magnitudes before curving more steeply upward and becoming jumbled at the highest magnitudes, where the stars were too dim for accurate measurement.

A Detour into Variables:

Not all of the stars fit on this nice curve, however. Some stars had consistently high errors in all fields, which I have shown with the circled dots in the chart. I thought there might be something interesting about these stars, that they might be variable stars, for example, and decided to pursue this further by identifying which stars they were in the .fits file and comparing them with known variables in M67.

I mapped the locations of the stars from the previous chart that had high errors and compared them to known variable stars in M67. There was no correspondence. I probably only discovered some bad pixels in the CCD sensor.

This detour took me a couple of days to work through. I figured out which stars they were from the spreadsheet (they had the highest errors), then used the X and Y coordinates to determine the exact pixel location in the .fits file, which I had loaded into Adobe Photoshop. I made marks at those locations, drew circles around them, and labeled them with the star numbers from the .als file.

I then looked up M67 in SIMBAD, the online astronomical database, and found its list of stars. From their names, I found which ones were variable (V*xxx), then marked them with red circles and names in my evolving Photoshop file. There was no correspondence between the two sets of circles, although some of the yellow circles did enclose actual stars. My conclusion, after this little detour, was that I had actually discovered some bad pixels in the CCD sensor. Perhaps, time permitting, I will look at the two stars I did identify and compare their magnitudes over several days to see if they are actually variables. Or this might be a good project for one of my students this fall.

Even though the results were largely negative, at least my Magnitude vs. Error chart did conform to what Dr. Hintz had drawn for me as the likely shape of the curve. This tells me that my photometry measurements are good and I am finally getting some results after over three weeks of preparation. I can now start to ask questions and pull the answers out of the data.

Counting Stars: BYU-RET Week 3

Computer terminal with IRAF and DS9 software running.

For my first two weeks at BYU, I have essentially been in background research mode between preparing my Prospectus and learning how to use IRAF. I hope to eventually work through the process of applying the calibration frames (zeros, darks, and flats) to reduce an image for photometry, but for now the data I am using has already been processed. For this third week, I began to actually analyze the images and pull useful data from them using a software package in IRAF called DAOphot. It was developed by Peter Stetson of the Dominion Astrophysical Observatory in Victoria, British Columbia, Canada.

Coordinates file for NGC 663 in IRAF and DS9.

The purpose of DAOphot is to do photometry (measuring the relative brightness of stars) in crowded clusters. If one is looking at individual stars in a sparse field, then regular aperture photometry works well enough. But stars in a crowded field such as a young open cluster will be hard to separate from each other. Their brightness profiles will overlap. Some stars may be so bright in an image that the CCD sensors become saturated. DAOphot has the ability to separate out the different stars, repair their profiles using a point spread function, and interpolate the results.

Moving through the levels of IRAF – first NOAO, then Digiphot, then DAOphot.

Again, an analogy might help. Seven years ago my students and I filmed and edited a 2-hour documentary on the history of AM radio in Utah. We used a panel format to interview 25 current and former DJs about what it was like working at Utah’s stations. We used whatever microphones and cameras we could scrounge, and we had well over 20 hours of footage by the time the interviews were done. Then came the fun part – editing it all down to two hours. We also soon realized that we hadn’t done a great job of placing the microphones – some of the audio was too loud and the waveforms all had plateaus on top, meaning the sound had maxed out or saturated the microphones.

We were fortunate to have Mike Wizland help us on the project. He’s an expert on audio restoration and teaches at Utah Valley University. He has designed algorithms (as compared with Al Gore Rhythms) that will reconstruct the missing top of the waveform, as well as separate out overlapping speech, where two DJs were talking at once. They call him The Wiz for good reason.

DAOphot does much the same for stars. It digitally reconstructs their brightness profiles. But it requires setting up quite a few parameters to make the point spread function work and extract the stellar magnitudes. Here’s how it works:

Using the IMEXAM command to determine parameters such as FWHM in NGC663.

Step 1: Determining Parameters

To get the right results, a series of parameters must be determined and set in DAOphot. I first open up IRAF and DS9, then load in the desired frame of the object I’m looking at, such as NGC 663 or NGC 659. Once that’s ready, I use the IMEXAM command in DAOphot to measure four numbers which will set up the point spread function. To explain, I have to talk about one of the big problems with astronomy, namely light scattering.

All the stars, even the close ones, are so far away that to all intents and purposes they should appear as perfect dimensionless points to our eyes. However, the light from these points is scattered and blurred as it passes through interstellar dust and our own turbulent atmosphere. If the star is near the ecliptic, zodiacal dust (dust in our own solar system) will scatter it even more. Since blue light gets scattered more easily than red (which is why the sky is blue, by the way), the light from distant objects becomes more reddish.

We can correct for the reddening, but the blurring spreads our perfect points into smeared-out Gaussian profiles. If the stars overlap, so do their profiles. How tight the profile is (or how bad the blurring) changes from night to night depending on seeing conditions. So we have to load in an image from each night and figure out how good the conditions were.
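To make the idea concrete, here is a tiny sketch of a star's brightness profile modeled as a Gaussian. The width is usually quoted as the FWHM, which for a Gaussian is 2*sqrt(2 ln 2), or about 2.355, times sigma; the peak and seeing values below are made up.

    import numpy as np

    def star_profile(r, peak, fwhm):
        """Brightness at radius r (pixels) for a Gaussian profile with the given FWHM."""
        sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
        return peak * np.exp(-r**2 / (2.0 * sigma**2))

    r = np.linspace(0.0, 10.0, 6)
    print(star_profile(r, peak=30000.0, fwhm=4.5))   # a wider FWHM means worse seeing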

Finding the High Good Datum and the Point Spread Function Radius in DAOphot.

In IMEXAM a black circle appears in DS9. You start by measuring how spread out the light is by hovering over the exact center of 10 or so stars and pressing the “A” key. A series of numbers appears in IRAF. The column labeled “Enclosed” provides the FWHM, or Full-Width at Half Maximum. This refers to the spread of the Gaussian curve, or how wide the light is spread out, at the point that is half of the brightest value at the center of the star (the half maximum). After doing a sampling of stars, both bright and normal, you take the average of the FWHM measures.

Next, you hover over ten or so areas of background, where there are no stars as far as you can tell. This area should be perfectly black but usually is not. You press the “M” key, and another set of numbers appears. This is the standard deviation of the background, called sigma, and represents the error or divergence from pure black. Again, you take the average of ten or so areas.

Brightness profile for a saturated star – the top of the curve is a plateau. The High Good Datum is just below the plateau.

Third, you hover over ten stars again and press the “R” key. This will pop up a graph showing the brightness profile of that star. Two numbers must be written down: the High Good Datum point and the PSF radius. The HGD is the height of the curve on the vertical axis, in photon counts, and is usually in the 20,000-60,000 range. If the star is saturated, the curve will appear flattened and spread out on top, so the HGD is the last point vertically that is providing good data. The PSF radius is the point on the horizontal axis where the profile flattens out into the background. It’s all a signal-to-noise problem: how far out do you look for stray photons from the star? As far as you can still see them without blending into the basic background noise.

These four values can be used to determine the remaining parameters, such as the fitting radius (about 1.4 times the FWHM) and the detection threshold (about four times sigma). The header of the .fits file should contain two other required values: the read noise and gain, which are properties of the CCD sensor itself.
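A tiny sketch of that arithmetic, with placeholder measurements. The header keyword names (RDNOISE, GAIN) are an assumption on my part, since they vary from camera to camera.

    from astropy.io import fits

    fwhm = 4.5           # average FWHM from IMEXAM, in pixels (placeholder)
    sky_sigma = 12.0     # average background sigma from IMEXAM, in counts (placeholder)

    fit_radius = 1.4 * fwhm        # fitting radius, about 1.4 x FWHM
    threshold = 4.0 * sky_sigma    # detection threshold, about 4 x sigma

    header = fits.getheader("m67_0001.fits")    # hypothetical frame name
    read_noise = header.get("RDNOISE")          # keyword names vary by camera
    gain = header.get("GAIN")
    print(fit_radius, threshold, read_noise, gain)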

Setting the parameters for the Datapars function using the EPAR command.

Step 2: Setting Parameters

Once you have determined the correct parameter settings from your .fits image file, the next step is to set them in DAOphot. In IRAF, to set parameters one must use the “epar” command (for “edit parameters”) followed by the name of a parameter set, such as “fitskypars” or “datapars.” You use the arrow keys to scroll down and type in the new numbers. To get out of one screen and back to the daophot prompt, you type “:wq”.

Step 3: Running DAOphot

Once the parameters are entered, you run through a series of steps, in the correct order, to set up and perform the photometry analysis using the point spread function. These include the following (a hedged script version of the whole sequence appears after the list):

Coordinate file created by the DAOfind command. This one is for NGC752, which is older and sparser than NGC663 or 659.

A – DAOFind: This command asks you to enter the .fits file to analyze, then walks through the parameters (you can set them here individually as well) and ends by creating a file with the coordinates of every star detected in the image. Depending on the threshold setting, you can get hundreds of stars in a densely packed open cluster. It saves automatically as a .coo file. If you are doing more than one frame for that night, you will want to use just this one original coordinates file for all the frames, checking to make sure the stars stay lined up. It is also a good idea to load the .coo file into the DS9 frame to make sure the stars are selected properly. They will have small green circles around them. You must use the “Region-load” tabs and navigate to your file, selecting the “all files” option and loading it in as an “xy” and “physical” file.

B – Phot: This command loads the .coo file you just saved and does the initial uncorrected photometry. It asks you to load the appropriate .fits file, walks through the parameters again, and outputs a .mag (magnitude) file.

The PSTselect function, which selects a sampling of 25 stars to determine the point spread function.

C – PSTselect: This command selects 25 stars (by default) as representative of the image to determine the profile for the point spread function. It loads the .fits file, uses the .coo and .mag files already saved, and outputs a list of 25 stars as a .pst file. If you have it set to Automatic, it will just list the stars. If you set it to Interactive, a 3-D Gaussian curve of each star pops up and allows you to accept or reject the star. If it has a humpback or shoulder, it is overlapping another star and should be rejected. But it had issues when I tried Interactive mode – it kept giving error messages later on – so we decided to simply accept the automatic setting.

D – PSF: This command does the actual point spread function. Using the previous files, it takes the 25 sampled stars and applies the same algorithm to all of the stars in the .coo file. We had the type of PSF algorithm set to automatic, so it ran through the parameters, then used several analysis functions such as Moffat25 or Penny2, then picked the one with the best fit and output a .psf file. It takes a bit longer to output these curve fits, so wait for it! It also does several passes through the data. As it does, it reconstructs the curves of saturated or overlapping stars.

E – Group: This command groups the data from the previous step’s passes and prepares it for the final corrected photometry. It outputs a .psg file.

The final data table of star numbers, positions (x and y), corrected magnitudes, and errors in the ALLSTAR function. This saves a .als file that can be imported into a spreadsheet.

F – Allstar: This is used if you’re working with large numbers of stars (as I am), whereas Nstar is used for smaller samples. Again you load in the .fits file and walk through the parameters, and after the HGD is confirmed, it spits out a lot of data: star number, position (x and y in the .fits image), magnitude, error, etc. It saves this as an .als file. This is the data you will use for analysis. It also outputs an .arj file, which contains the rejects, stars that were too dim or too close to the edges to analyze. The function integrates the volume under each star’s corrected profile, counts all the photons inside, and compares them to get a list of apparent magnitudes.
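For the record, the same A-through-F sequence can be scripted through PyRAF instead of answering the prompts by hand. Treat this as a sketch from my notes, not gospel: the task names match the steps above, but the keyword parameters are from memory, the filenames are made up, and I let Allstar work straight from the .mag and .psf files here, so check everything with epar before trusting it.

    from pyraf import iraf
    iraf.noao(); iraf.digiphot(); iraf.daophot()        # load the package levels

    img = "m67_0001.fits"                               # hypothetical frame name
    iraf.datapars.fwhmpsf = 4.5                         # values measured with IMEXAM
    iraf.datapars.sigma = 12.0

    iraf.daofind(img, output="m67.coo", verify="no")                        # A
    iraf.phot(img, coords="m67.coo", output="m67.mag", verify="no")         # B
    iraf.pstselect(img, photfile="m67.mag", pstfile="m67.pst",
                   maxnpsf=25, verify="no")                                 # C
    iraf.psf(img, photfile="m67.mag", pstfile="m67.pst", psfimage="m67.psf",
             opstfile="m67.opst", groupfile="m67.psg",
             interactive="no", verify="no")                                 # D, E
    iraf.allstar(img, photfile="m67.mag", psfimage="m67.psf",
                 allstarfile="m67.als", rejfile="m67.arj",
                 subimage="m67.sub", verify="no")                           # F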

Whew! What a process. I worked from the DAOphot manual and student notes, including a notebook left in the computer lab as a reference by the TAs who help students work through this process for the Physics 329 class. But the person who wrote the notebook has very small, densely packed handwriting. Other notes are sparse, more a list of steps without explanation. It took me several days to work through this the first time, using data that had been acquired back in January 2012 for M67. I decided to start here because it is a well-studied cluster, and quite old, so it should show a variety of stellar processes going on. I felt quite a sense of accomplishment to finally get it to work and spit out the .als file. I did four frames altogether – two with a narrow-band Hydrogen alpha filter and two with a wide-band Hydrogen alpha filter. It was Friday afternoon when I finished. After three weeks, I finally had data to work with.

Now I have to get that data into a spreadsheet, clean it up, and start my analyses. I have 714 stars to work with, so it will be a large spreadsheet. I’ve also decided to prepare a proper manual with screen captures that a novice high school student could use to successfully navigate DAOphot. I don’t want anyone else to have to learn it from scratch!

Mastering the Dark Arts: BYU-RET Week 2

NGC663 (Photographed by Hunter Wilson). This is one of my targets for analysis.

With my prospectus completed and approved, I was ready to begin my actual research project at Brigham Young University. I am here on a Research Experiences for Teachers (RET) program funded through the National Science Foundation (Grant # PHY1157078). I’m working with Dr. Eric Hintz to study high-mass x-ray binaries, among other things, in several open star clusters. In my last post I outlined this background research. Not all of it will be relevant to my final analysis, but that’s something you don’t know when you start out.

During my second week I moved to the next step, which was to learn the software and processes I would need to successfully analyze the star data. Wait a second, you say, why am I jumping right to analysis when I haven’t collected any data yet? What we have is a science Catch-22. You can’t know how best to collect data until you know how that data will be analyzed, but you can’t analyze the data until you’ve collected it. The answer is to learn how to analyze data with another data set that someone else has already collected before you try to collect your own. That way you can check to make sure you know what you’re doing, then apply your process knowledge to planning your research methodology.

The Uintah and Ouray Reservation. The outlined area is the original boundary of the reservation. The dark red areas are the sections actually controlled by the Ute Tribal government. Fort Duchesne is the government center for the reservation.

Perhaps a personal story will illustrate what I mean. When I was in my undergraduate program at BYU, I was studying psychology with a minor in political science. I wanted to do some independent research even then, and got some other students involved with me. We arranged to work with a professor in the political science department to get some independent poly sci credit. We proposed to study the attitudes of members of the Ute Tribe on the Uintah and Ouray Reservation toward tribal self-government (one of us was a tribal member). We created a questionnaire and selected respondents at random from tribal rolls. One person, for example, was actually in jail in Fort Duchesne at the time and I had to go in and interview him. We compiled all the data, did statistical tests, wrote up the report, and got our credit without too much trouble.

To get rides out to Fort Duchesne, we also worked with the Multicultural Education Department at BYU. They had another project going on at the same time. They created an extensive questionnaire that compared white students with Native American students in the elementary, middle, and high schools in the area. There is an extremely high dropout rate among Native American students in the area once they reach high school, yet both groups are evenly matched through most of elementary school. The questionnaire asked their attitudes toward education, their sense of self-confidence, their support systems, etc. We gave the questionnaire to hundreds of students (with the full cooperation of the school district). I helped to administer the study on one of my trips out there.

A Ute warrior on horseback.

When we collected all the questionnaires, they formed a stack literally four feet tall. And it was then we realized we had a big problem. The students that had designed the questionnaire had never considered how the data would be recorded and analyzed. It would have taken a small army of flunkies to record the data and put it into a computer program. And this was in the early 1980s, before spreadsheets were readily available. Lotus 1-2-3 hadn’t been invented yet, let alone Excel. So by the end of the semester, no data reduction had been done and the questionnaires sat in a pile in a corner of our professor’s office. To this day, I don’t know if the study was ever completed.

Moral of the story: When dealing with a mountain of data (such as looking at hundreds of stars in an open cluster over several nights with different filters), it’s essential to know what the data will be like and how to manage it all before collecting it in the first place. Every NASA space probe mission has to plan its data pipeline carefully, including how the instruments on board will store and transmit the data back to Earth, how that data will be collected and recorded and archived here, and how it will be reduced and analyzed. Then and only then do you start designing the instruments. You’ve got to know the end from the beginning.

The only type of data you have to work with in astronomy is light. It can be measured directly, filtered, run through a spectrometer, looked at across the entire EM spectrum, and compared over time. I’m amazed at how much we can learn just from the light coming from a star. Much of what we do is to measure the intensity of the light at its various wavelengths. This is called photometry, literally “measuring light.”

IRAF and DS9 showing a Coordinate file for NGC663.

The software for photometry (and many other things) used by most astronomers is called IRAF (Image Reduction and Analysis Facility), developed by the National Optical Astronomy Observatory (NOAO). It is powerful and can do both image reduction and analysis. I’ll talk about the reduction end in this post and the analysis end in the next. It is also essentially open source, so astronomers can program their own add-ons and tweaks. But it is a pain to use, because it was first developed back in the days before GUI operating systems and to this day still uses a command line interface. So the learning curve is steep and painful. I must at least partially master it before I can attempt to do my own astronomical research.

Friedrich Wilhelm August Argelander, whose meticulous research led to the Bonner Durchmusterung, the standard catalog of northern stars for many years.

Data Reduction:

Whenever you collect data for a science project, the data has to be converted into some format that makes sense. This could be as easy as creating a Likert scale in a questionnaire and recording the numbers chosen. But it makes a difference if the scale has limited, discrete choices (such as only being able to select 1, 2, 3, 4, or 5) or if it has a continuous scale (where people could choose 3.7, for example). It takes experience to know how to set this all up. Once the questionnaires are finished, the data must be recorded or entered into a program such as MS Excel where analysis and comparisons can be made. But can you imagine having to measure and type in all the parameters for a star, including its right ascension and declination, its magnitude, its stellar class, etc. and having an image field of hundreds of stars for each observation and each filter with several runs per night over many nights? They used to do it that way, such as the Bonner Durchmusterung (Bonn Star Catalog) that surveyed hundreds of thousands of stars in the northern hemisphere in the 1850s without the use of photography. Now we use IRAF.

The actual Bonn Star Catalog (Bonner Durchmusterung).

But to get IRAF solutions to have any meaning, the data must be filtered before it can be analyzed. Any photograph of the sky done recently uses electronic sensors called CCDs (Charge Coupled Devices), which were adopted early on by astronomy and the space program and are now found in every cell phone and digital camera. They act as a grid of sensors or photon traps: as a photon from a star hits a sensor pixel, it knocks an electron off of the silicon and stores it in a register where it can be read out as a digital number. By reading all the numbers stored in all the pixels, a grid of digital data is built up representing the brightness of the image for each location. We call this a bitmap or a raster. Since I teach computer graphics, I’m very familiar with this aspect of astronomy and photography in general.

Now every CCD has some biases that affect the accuracy of the pixel data. First, when you read out the data, not every electron is successfully pulled out of the photon traps. Some get stuck, and you have to account for them and subtract these trapped electrons from your final image. Second, the electronics of the camera creates a background hum of noise that must also be subtracted out. Now these effects are not very important when taking a regular photograph, where there is so much light or signal compared to this background noise. But in astronomy, where you leave the shutter open for minutes or even hours (days in the case of the Hubble Deep Field), and you try to trap every photon that hits the sensor, then these effects are very noticeable. Finally, the individual sensor pixels in the CCD do not have the same sensitivity. One pixel may trap 90% of all photons that hit it, whereas another (especially those around the edges) may only trap 60%. You have to zero out this sensitivity bias as well.

An analogy would be cleaning up a photograph taken indoors under dim fluorescent lighting. You’ve got to improve the photo’s brightness and adjust the color bias away from yellow toward blue. Likewise, astronomers have to remove the biases caused by trapped electrons, system electronics noise, and sensor sensitivity. The cleaned-up images have been “reduced” and are ready for analysis.

The Master Zeros, Darks, and Flats:

I know it sounds like something out of a Dungeons and Dragons game, but these are terms familiar to any professional or aspiring amateur astronomer. To get rid of the biases, calibration frames must be recorded, averaged, and digitally subtracted from the original raw images. Anything that remains is real signal from the sky, affected only by the “seeing” (visibility and air conditions).

A Zero Frame (frame taken at zero time) in DS9. It represents the electrons still trapped in the CCD.

To get rid of trapped electrons, the camera on the telescope collects an image with the cap on at zero time – basically instantaneously. Since it should be completely black, any numbers above zero that show up are electrons trapped in the pixels. To get rid of electronics hum (called dark current), images are taken with the cap on at 60 or so seconds. When compared with the Zeros, any electrons showing up were built up by the surrounding electronics. In both cases, ten or so images are taken and the results averaged to get a master zero and a master dark which are applied to all the images taken on a particular night.

The Master Dark frame. This represents the electronic noise of the CCD system. Or a great character in Dungeons and Dragons . . .

To get rid of sensitivity biases, astronomers take a series of ten or so images of a neutral sky. Some take images of an evenly illuminated flat gray card taped to a wall. Or they point the telescope at the zenith at twilight to get a flat field. Since all the pixels should read the same number, any differences are divided out from this flat value. This should make a nice even image across the entire field of view of the telescope, provided there aren’t any high-level cirrus clouds or other “seeing” problems.

The Flat Frame: Taken of a neutral gray background or at the zenith at twilight, it represents the sensitivity bias of the CCD sensors. Notice it is less sensitive (darker) around the edges and corners.
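IRAF's own tasks handle all of this, but the arithmetic behind it is simple enough to show in a short numpy sketch. This is only my understanding of the process, with made-up filenames and exposure times, not the actual pipeline used on this data.

    import numpy as np
    from astropy.io import fits

    raw  = fits.getdata("ngc663_raw.fits").astype(float)
    zero = fits.getdata("master_zero.fits").astype(float)
    dark = fits.getdata("master_dark.fits").astype(float)    # e.g. a 60-second master dark
    flat = fits.getdata("master_flat.fits").astype(float)

    t_exp, t_dark = 120.0, 60.0                         # exposure times in seconds (made up)
    dark_current = (dark - zero) * (t_exp / t_dark)     # dark counts build up with exposure time
    flat_norm = (flat - zero) / np.mean(flat - zero)    # sensitivity map, scaled to average 1

    reduced = (raw - zero - dark_current) / flat_norm
    fits.writeto("ngc663_reduced.fits", reduced, overwrite=True)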

IRAF is a command line program, but it works in tandem with DS9, another package developed by the Smithsonian Astrophysical Observatory (SAO). If you know the names of your individual files, IRAF will load them one at a time, create the masters, then apply them to the .fits or .imh files in DS9 and save out a final reduced image for each frame taken for a given night or filter.

All of the data I will use for the first few weeks will already be reduced, so I won’t go through this process for a while. I researched the process during this second week so I could understand it and get a better feel for what astronomers do. Now what I learned as a SOFIA Airborne Astronomy Ambassador makes more sense. The scientists on board talked frequently about data reduction and the “data pipeline.” For infrared astronomy the process is even worse because IR is measuring heat, and the heat of the sensor, the telescope, and the air all interfere with the image to create terrible noise. And that’s not even counting vibrational jitter. That’s why the scope on SOFIA chops and nods; it is doing much of the heat noise reduction right in the telescope. The CCD/spectrograph biases have to be processed out later, and they are still working out the final bugs as SOFIA reaches full operational readiness.

The same field of stars in DS9 before and after data reduction. With the noise and biases removed, the field has the same background darkness throughout and the data is much cleaner.

The data we will use for NITARP is already reduced and digitized in the IPAC databases. I am really appreciating that for the first time. All we will have to do is analyze the data, which will be difficult enough.

Prospectus: BYU-RET Week 1

Statue of Brigham Young on BYU campus. Local legend claims that if you run past him quickly, he will do the funky chicken dance.

For the next ten or so posts, I will report on my experiences doing astronomical research at Brigham Young University this summer (2014). As I mentioned in a previous post, I have been selected to participate in a National Science Foundation program called Research Experiences for Teachers (RET). I have several objectives:

1. To do some actual original research in astronomy, where I make observations, reduce the data, and analyze it into something that could be worthy of a poster at a science conference (such as AAS next January). In other words, I want to be an actual astronomer for the summer and learn how it’s done through first hand experience.
2. To translate my experience back to my classroom, and actually teach my students how to do authentic astronomical research themselves. With the grant that comes with this program, I hope to purchase a decent telescope and camera and learn to do real astrophotography and photometry.

The Eyring Science Center at BYU. The new planetarium dome is seen here. I work on the fourth floor.

3. To pass on what I’ve learned, how I’ve learned it, and the entire experience to you who read this blog. I will also use this knowledge in other potential future projects (maybe a book or a video – who knows? At the very least it will help me with the SOFIA video I’m working on). I want this series to exemplify how one goes about doing real science.
4. To complement my experiences with NITARP and provide relevant background knowledge to help me train the students who will be going with me to Caltech at the end of July.

Y Mount and the Lee Library Atrium at BYU

I’ll take this approximately one week at a time rather than reporting daily like a diary. I don’t want this to sound like, “Today I did this, then this, then that.” I want to focus more on the why and the how instead of the what. Eventually I will write up all of the what, all of the small steps and details for the use of my students and others, but for now let’s just get the feel and the big picture of what I’m doing.

The First Day:

We started with a general meeting and breakfast on Monday, June 9. I didn’t know anything beyond the date; the precise time and place were not given in the e-mail. Communication prior to this experience was minimal, and I approached the whole thing with a bit of uncertainty and concern, because I didn’t know what to expect or what would be expected of me. But not really knowing what you’re doing is a common feeling in science. We’re all walking into the unknown. So I had confidence I’d work it all out.

The new Joseph Fielding Smith Building and the Spencer W. Kimball Tower. I spent much of my undergraduate time attending classes in the SWKT, which was the new building back then.

I discovered there are three teachers in this program. The other two are new, one with a year of classroom experience and the other just beginning. They both went through Duane Merrill’s program for science teacher training here at BYU. There is also a group of about 20 undergraduate students doing research through the Research Experiences for Undergraduates (REU) program. Everyone else seemed to know more of what was going on than I did. But we probably all felt that way.

We introduced ourselves, and Dr. Steven Turley, who oversees the program, told us more of what we would be doing as a group. We need to write up a prospectus of our research topic and plan by Friday; then, during our fifth week, we will give an interim presentation to the group on our progress. We will also give a final report the last week and write up a paper for review. All of this must be done in cooperation with our mentor teachers. There will be group activities such as hikes and Tuesday mini-classes to attend.

A computer workstation on the astronomy floor, with my notes and laptop. The sign at left says “Astronomy (Tyan Wen Shwe)” in Chinese.

We also toured the Eyring Science Center and had a general tour of BYU campus, which started at the Alumni House and took us in golf carts around campus. It was a strange feeling being back here again full-time. I haven’t been a student here since 1986, although I’ve visited campus many times to do research in the library or attend cultural events. Now I’ll be an actual Adjunct Research Faculty member, and get a coveted “A” Parking Permit and faculty ID, if only for 10 weeks.

I spent a total of six years as a student here, and touring campus brought back a flood of memories – of classes I took, of dates I’d been on, of experiences both wonderful and terrible. Of course I wondered what has become of all the people I used to know when I was here – where are they now? Have they made a name for themselves? Will I ever hear from any of them again? Even sitting in the remodeled cafeteria, which is very different from the old one, brought back memories. I just hope the memories don’t distract me and stifle my ability to do useful research.

Cygnus X-1, a High Mass X-Ray Binary (HMXB) system

High-Mass X-Ray Binaries (HMXBs):

I am working with Dr. Eric Hintz, whom I had met in January at the AAS conference in National Harbor, Maryland. I sat down with him and two REU students, Angel Ritter and Olivia Mulherin, who are also doing astronomy research. He suggested a few projects based on his own research and areas where he has collected data but hasn’t had enough time to analyze it. Angel will be working more directly with Dr. J. Ward Moody and his research assistants, and Olivia and I will work with Dr. Hintz. Olivia decided she wanted to work on a project related to general relativity and gravitational waves – a binary system where two stars are spiraling in and slowing down as they radiate gravitational waves.

My project will be to observe and analyze data from high-mass x-ray binary stars in open clusters in the constellation Cassiopeia, including NGC 663 and NGC 659. I will be analyzing data to look for periodicities – cases where the stars vary in a regular pattern and not chaotically. I spent most of the remaining week researching these stars, how they form and evolve, and where they are in these open clusters. Along the way, I found out some fascinating information.

NGC 663 in infrared wavelengths. I created this composite image by combining the WISE 1 (3.4 microns), WISE 3 (12 microns), and WISE 4 (22 microns) data as Blue, Green, and Red channels, respectively, in Adobe Photoshop.

HMXBs contain a highly compact, high density x-ray source orbiting around a large B type blue supergiant. For the x-ray source to be there, its original star must have been larger than the remaining B star. It must have been a type O star that has already gone supernova, smashing its remaining mass into a neutron star or black hole, which is now orbiting a center of mass between it and the blue giant. The blue giant is large enough and spinning rapidly enough that it is overflowing its Roche Lobe, throwing off material that forms a ring around the blue giant which is radiating infrared energy. Some of this ring material is pulled into a streamer toward the compact star. As it spirals in, the particles collide and heat up, eventually so hot that they give off x-rays from an accretion disk around the black hole or neutron star. Magnetic fields pull charged particles out of the disk and form jets that travel out along the magnetic poles, plowing through other material and producing radio waves. So HMXBs are messy, complex, dynamic systems that spew out much of the EM spectrum. The only EM band not represented is gamma rays, and even they might be produced occasionally as material falls onto the surface of the neutron star.

NGC 663 with prominent variable stars labeled. The only HMXB I have found from my research so far is V831 in the upper left.

These systems are also very young, less than 20 million years old. This means that these binary systems haven’t had time to move far from their stellar nurseries, so they can be used to help pin down the cluster’s age and distance more accurately. When the x-ray source went supernova, the shockwave was asymmetrical, which gave the whole system a kick to the side and pushed it out of its nebulous cocoon into interstellar space. Those binaries that stayed together now have eccentric orbits and show periodic changes in brightness at optical and other wavelengths.

The spectrum of a B-e star at the H-alpha wavelength (6562.8 angstroms or 656.28 nm). The broad absorption band is bisected by a narrow emission band at the same wavelength. The star’s atmosphere absorbs the H-alpha light, but the hydrogen gas in the star’s ring is emitting H-alpha light.

Hydrogen Alpha and Be Stars:

The B-type stars in these systems that have not yet gone supernova show an unusual feature in their spectrum. Most stars have absorption spectra – the atmosphere of a star will absorb certain frequencies of light coming from the star, making a series of dark lines on the spectrum. This is how we identify the type of star it is – the hottest stars show fewer absorption lines than moderately hot stars (Type A). Very cool red stars have many absorption lines and show a prominent double line for sodium, which hot stars do not show. There are particular series of lines called the Lyman and the Balmer series that represent the quantum leaps of the single electron in hydrogen atoms. One very prominent quantum leap is called the Hydrogen-alpha (Hα) transition, and it occurs in the red end of the visible spectrum at 656.28 nm. It represents the absorption of just enough energy for the hydrogen atom’s only electron to jump from the second to the third quantum level (n = 2 to 3). These stars show a prominent, deep red hydrogen alpha absorption line. But right in the middle of the absorption dip is an emission spike. The hydrogen gas in the ring around the B star is being excited by energy from the star and is emitting light like a neon sign. The electrons in the gas ring are falling from the 3rd energy level back down to the 2nd, emitting exactly the same wavelength of energy that the star’s atmosphere is absorbing. These stars are called Be stars (B emission stars).
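As a quick check of that wavelength, the Rydberg formula for the n = 3 to n = 2 jump gives the same answer (the formula returns the vacuum value of about 656.5 nm; the commonly quoted 656.28 nm is the wavelength measured in air).

    # 1/lambda = R_H * (1/2^2 - 1/3^2) for the H-alpha (Balmer) transition
    R_H = 1.0967758e7                        # Rydberg constant for hydrogen, in 1/m
    inv_wavelength = R_H * (1/2**2 - 1/3**2)
    print(1e9 / inv_wavelength)              # ~656.5 nm in vacuum (~656.28 nm in air)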

Target open clusters in Cassiopeia. They are left of Ruchbah and northwest of 44 Cas. All three are part of the same Cassiopeia Stream of gas and dust about 8000 light years away in the Perseus spiral arm.

Deep in Cassiopeia:

The open clusters I will investigate are NGC 659 and NGC 663. Both are located inside Cassiopeia. I did some research on them and found that they are both part of a larger structure of hydrogen gas, dust, and stellar nurseries embedded in the Perseus Arm of the galaxy, about 8000 light years away from us toward the outer rim. Our solar system is located on the inward edge of the Orion Spur, a branch of the inner Sagittarius Arm that crosses from the area of Deneb and Cygnus across through Orion. Both clusters are just under the left leg of the Big W in Cassiopeia (the Throne asterism), to the left of Ruchbah and northwest of 44 Cassiopeia.

A Color (B-V) -Magnitude Diagram showing M67 and NGC 188. Both show a turnoff point and strong red giant branch. The stars to the upper left are blue stragglers. NGC 188 is slightly older.

Dating a Cluster:

The larger a star is, the faster it consumes its nuclear fuel, converting hydrogen into helium through fusion in the star’s core. While this is going on, we say that the star is on the “Main Sequence” of the Hertzsprung-Russell Diagram, a chart comparing the temperature (or color) of a star versus its intrinsic brightness (or absolute magnitude or luminosity). These H-R Diagrams are also called Color-Magnitude Diagrams, or CMDs. An O-type supergiant star is very hot (35,000 K) and runs out of hydrogen 10-15 million years after forming. It then starts fusing helium into even heavier elements and migrates off the main sequence, becoming cooler and redder as it expands into a red supergiant, like Antares or Betelgeuse. Cooler B and A stars last longer before migrating off the main sequence.

Color-Magnitude Diagram for M55. Once the stars have left the main sequence, they move up and to the right (cooler and brighter) to become red giants. After the helium flash, they migrate across the Asymptotic Giant Branch (AGB) and Horizontal Branch to become blue and hot, then eject their outer layers (if they are the size of our sun) and drop down to become white dwarfs.

If you chart a CMD for a cluster of stars such as M67 in Cancer, you will see a pattern similar to the diagram shown here. The bigger, bluer stars have already left the main sequence and migrated to the right. As time goes on, smaller and smaller stars migrate off. You can look at the “turn off” point on the H-R Diagram and get a good estimate of the overall age of the cluster. In the case of M67, the bluer stars are almost all gone except for a few “blue stragglers” that haven’t quite become red giants yet. These are probably stars that were in binary systems where two smaller stars have recently merged into a larger, bluer, hotter star. M67’s turn off point has progressed down into the A and F stars, and it is about 3.8 billion years old, with quite a few yellow dwarfs similar in age and composition to our sun. It is unusual that we can still identify it as a cluster – by this age, the stars have usually dispersed. NGC 663 and 659, however, are young clusters that are just beginning to turn off, probably about 15-20 million years old, and still rich with stars in the center of the cluster.

A Color-Magnitude Diagram showing the evolutionary tracks for several open clusters. The “Hertzsprung Gap” is also called the Instability Strip – this is where variable stars are found crossing back and forth across the gap as they pulsate.

The Instability Strip:

Some stars, as they progress through larger and larger atoms in their cores, will become unstable and start to pulsate. This seems to happen in a particular range of temperature vs. magnitude on the H-R Diagram, a narrow rectangular area known as the Instability Strip. Dr. Hintz is very interested in these variable stars, because they tell us a great deal about stellar dynamics and nucleosynthesis (how new elements are formed). They are also extremely useful as standard candles for measuring distances. Regular variable stars that are truly variable for intrinsic reasons and are not just eclipsing binaries come in several varieties. They are classed according to the period of their variability and its amplitude (how many magnitudes it changes) as well as their size, age, and composition.

The shortest-period stars are called Delta Scuti stars (pronounced “Scooty”) after the prototype star δ Scuti. They have very short periods on the scale of hours and magnitude changes from 0.003 to 0.9 magnitudes. A well-known star of this type is Altair. Such stars are usually spectral type A to F white giants.

The next class is RR Lyrae stars (pronounced by astronomers here as “RR Laurie”). They are white stars of class A with short periods from .05 to 1.2 days and magnitude fluctuations of .3 to 2.0 v. They are also divided into two classes depending on the metallicity of the star.

The next class of variables is the Cepheids, named for δ Cephei, with periods of 1 to 70 days and amplitudes of 0.1 to 2 magnitudes in V. They are orange to yellow-white F to G or K giant stars. There are two types of Cepheids. The first are younger stars with higher metallicity that belong to Population I, found mostly in the spiral arms of galaxies. Type II Cepheids are older, with fewer metals, and are usually found in globular clusters and galactic cores; they are sometimes called W Virginis stars. The relationship between the period and the luminosity of these stars was first mapped out by Henrietta Leavitt and was used by Edwin Hubble to prove that the Andromeda Galaxy was a separate “island universe” from our own Milky Way. Because a Cepheid’s intrinsic brightness is precisely related to its pulsation period, and its variations can be measured from enormous distances with a large telescope, you can work out its distance with the distance modulus formula, which rests on the fact that the apparent brightness of a star falls off with the square of its distance.
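As a concrete illustration, here is a short Python sketch. The distance modulus step, d = 10^((m − M + 5)/5) parsecs, is the standard formula; the period-luminosity coefficients are one published calibration for classical Cepheids (roughly M_V ≈ -2.43(log10 P − 1) − 4.05) used here purely as an assumed example, and the “observed” star is hypothetical.

```python
import math

def cepheid_abs_mag(period_days):
    """V-band absolute magnitude from one published period-luminosity
    calibration for classical Cepheids, M_V ~ -2.43*(log10 P - 1) - 4.05.
    Illustration only; real work would use a current calibration."""
    return -2.43 * (math.log10(period_days) - 1.0) - 4.05

def distance_parsecs(apparent_mag, absolute_mag):
    """Distance modulus: m - M = 5 * log10(d / 10 pc), solved for d."""
    return 10 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)

# Hypothetical Cepheid: 10-day period with a mean apparent V magnitude of 20
period_days, m_v = 10.0, 20.0
M_v = cepheid_abs_mag(period_days)   # about -4.05 for a 10-day Cepheid
d_pc = distance_parsecs(m_v, M_v)    # about 6.5e5 pc, roughly 2 million light years
print(f"M_V = {M_v:.2f}, d = {d_pc:.3g} pc = {d_pc * 3.26156:.3g} light years")
```

A magnitude-20 Cepheid with a 10-day period comes out at roughly two million light years, which is essentially the calculation Hubble made for Andromeda.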

Finally, there are variables such as the RV Tauri stars, which are yellow to orange giants with periods of 30-150 days and amplitudes of 3.0 magnitudes or more, and long-period variables such as Mira, which are red giants with periods of 80-1000 days. As you move up to brighter and larger variables, they also become cooler and sit farther to the right on the H-R Diagram. Because these stars are constantly changing, at any given moment most are found toward one side of the instability strip or the other rather than inside it, which produces what appears to be a gap in the H-R Diagram.

Categories of Variable Stars. Extrinsic variables change brightness because of something outside the star blocking light, such as an eclipse or dust. Intrinsic variables change brightness because of changes in the interior of the star.

As for the underlying mechanism of pulsating stars, it comes down to the elements in the star and how they transmit or block light. In these older stars, helium has built up as core fusion has progressed, and the helium forms layers in the star. As the helium is ionized, its opacity changes. In a normal star, the denser a layer becomes, the more transparent it is, so the star stabilizes at a particular size and energy flux. But if an internal layer becomes opaque, as happens with ionized helium, then the energy coming from core fusion can’t escape and builds up beneath it. This causes the helium layer to expand, pushing the outer layers of the star with it and making the star larger, brighter, hotter, and bluer. As the built-up heat escapes, the helium layer cools, ionization drops, and the layer becomes more transparent, allowing more energy to escape. The layer then shrinks, the star’s gravity compresses the outer layers, and the star becomes smaller, dimmer, and redder. Then the cycle repeats. The pulsations are therefore affected by the mass, age, and composition of the star. This is called the Kappa Mechanism, or sometimes the Eddington Mechanism, after Sir Arthur Eddington, who first proposed it as an explanation of variable stars.

Prospectus:

It took me several days to do all of this research, and I took quite a few notes in my research journal. My job was then to distill all of it into a working proposal and write it up as a short prospectus, which I am including here: Prospectus-David_Black. Dr. Hintz approved it and sent it on to Dr. Turley. The REU students actually get a bonus payment for submitting one, but we RETs do not, as it is worked into our contracts.

My biggest frustration the first week was dealing with actually getting a contract and getting hired, which I’ll talk about in later posts. I also missed my second day (Tuesday, June 10) because I was previously booked to give two conference presentations: one at the IT Educators conference at the Granite Technical Institute in Salt Lake City (on ideas and projects for teaching Python programming) and one at the Utah Association of Charter Schools conference at the Davis Conference Center in Layton, where I presented on our STEM-Arts Alliance projects (see my other blog at http://elementsunearthed.com).

Notes in my science journal.

The Process of Science:

We tend to teach that science has a fixed “method” that all scientists use in exactly the same sequence of steps. That isn’t a very accurate picture. True, following the method helps ensure you’ve done the right things in the right order, but it doesn’t guarantee you’ll get any useful results. And, of course, real science is never that neat or cut and dried; no one sits down and writes out a formal hypothesis before doing anything else. You do start with a question and then follow your nose, and along the way, if you are thorough and think things through well enough, you can collect data that actually means something and could answer your question. It’s a much more organic and messy process than what we teach kids in elementary and middle school and force them to follow in their science fair projects.

I hope through the next several posts to describe just what astronomy is like as a science and the kinds of activities astronomers do. They don’t spend all of their time staring through telescopes, and never did. I used to think that to be an astronomer you needed to have good eyesight, and that misconception kept me from pursuing astronomy as a career. But nearly all astronomy today is done with cameras rather than raw eyesight, along with sophisticated software and some excellent thinking to compare one star with another.

Nearest Star Clusters and Nebulas. M103 in the upper right is near NGC 663 and 659 and about 8000 light years away.

So the first step of science is to know enough about your general area of study to know what is knowable, what is known, and what is not yet known. You’ve got to do some background research to understand what even constitutes a decent question. So that’s what I did this week – found out as much as I could about Be stars, HMXBs, and variable stars. I thought I already had a good knowledge of these things, but I’ve learned so much more. Now I can begin to formulate questions and objectives for my research: Do HMXBs show periodicity, either extrinsic or intrinsic? Can I detect and characterize these variations? Beyond that, can I learn the process of astronomical observation, data reduction, and analysis? Can I translate what I’ve learned to the high school level so my students can share in this experience and do authentic research themselves?

Stained glass window for the College of Physical and Mathematical Sciences at BYU. I am officially an Adjunct Research Faculty member for the summer.

I’m sure I will take detours and diversions along the way, following rich veins of possibility. Some of these will peter out and lead to dead ends, blind alleys, and box canyons of knowledge. I’ll have to backtrack and try a different direction. Perhaps some of the best things I’ll learn will be accidental or serendipitous – something I never anticipated and couldn’t have planned for but just happened to be at the right place and time to learn. I’ve already come across one of these that I hope to follow up on in the next few weeks. This entire experience will be a messy, uncertain, and at times frustrating process.

Welcome to real science.

An Article for The Science Teacher

Title frame from my video explaining the nearby star model activity.

Over a year ago I wrote an article for The Science Teacher, the journal of the National Science Teachers Association geared toward high school teachers. I know two of the authors of that magazine’s Science 2.0 column, which covers technology tools for teaching science: Martin Horejsi and Eric Brunsell, whom I got to know through the NASA/JPL Solar System Educators Program back in 2000 to 2004. I also had a chance to sit in on a focus group for TST two years ago at the NSTA conference in Indianapolis, where I met the editor, Steve Metz. I decided then that I should write an article at some point, and with a lot of effort, I finally did.

Cover of The Science Teacher, Summer, 2014. My article is inside!

The article is on an activity I do in my astronomy classes: to build an accurate 3D model of the nearby stars out to 13 or more light years. I’ve described this activity in detail in a previous post (http://spacedoutclass.com/2013/05/05/the-nearby-stars/). For the article, I wrote up an abridged description, edited the graphics I’d made for my original lesson plan, and followed the manuscript submission process online.

These articles are peer reviewed, but it took several tries to find reviewers who would look over the article, so it was delayed about five months. I’d almost given up hope when I suddenly received a message and the review notes from Steve in November. The review had good suggestions, plus a note about an article by Tracy Furutani, a professor at a college in Washington, who had described a similar activity in Astronomy Education Review. I had to address how my version was different and (I believe) better. I had never heard of that article; I had known of a small model written up by the NASA Advanced Concepts group, but came across that only after I had developed my own model.

Title page for my article in The Science Teacher magazine, Summer, 2014.

The main differences are that my model hangs from a platform, which in turn hangs from the ceiling and allows more precise positioning. The other models are built from the ground up, either sitting on straws or on wires. It would be very difficult not to bump the stars and knock them out of position with a floor model; mine keeps the stars in place by gravity. Mine also uses trigonometry to find the stars’ positions more accurately.
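For readers curious about the trigonometry, here is a minimal sketch of the kind of conversion involved, not the exact worksheet from the article: it turns a star’s right ascension, declination, and distance into Cartesian coordinates, which can then be scaled into a position on the platform and a string length for hanging. The model scale is an assumed value for illustration.

```python
import math

def star_xyz(ra_deg, dec_deg, dist_ly):
    """Convert right ascension, declination, and distance into Cartesian
    coordinates (light years), with the Sun at the origin and z pointing
    toward the north celestial pole."""
    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    x = dist_ly * math.cos(dec) * math.cos(ra)
    y = dist_ly * math.cos(dec) * math.sin(ra)
    z = dist_ly * math.sin(dec)
    return x, y, z

# Sirius: RA ~101.3 degrees, Dec ~-16.7 degrees, distance ~8.6 light years
x, y, z = star_xyz(101.3, -16.7, 8.6)

scale_cm_per_ly = 10.0  # assumed model scale, not the article's actual value
print(f"x = {x * scale_cm_per_ly:.1f} cm, y = {y * scale_cm_per_ly:.1f} cm, "
      f"z = {z * scale_cm_per_ly:.1f} cm relative to the Sun")
```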

Pages 32-33 of my article. I created all the graphics and captions as well as writing the article.

My revisions were complete by the deadline during the first week of December. Then the article went to the editors for review. Mine was Steve Stuckey, and the edited version was definitely shorter and better than my original, with some of the detailed materials, such as follow-up assignments, moved to a website for download. Once the writing was done and we all agreed on it, I worked on the final versions of the graphics. I changed the fonts, added more details, and cleaned up the line art.

Pages 34-35 of my article for The Science Teacher.

By late April, the initial layout was done and Steve sent me a draft version with graphics, photos, etc. in place. I could only see one change, which was to add my middle initial to my name. There are too many other David Blacks out there, so I usually use my middle initial, which will bring people right to me in a Google search.

Pages 36-37 of my article for The Science Teacher in the Summer, 2014 edition.

Steve also requested that I create a video explaining the activity, which would be included with the online iPad version. So in between all the other craziness at the end of April and start of May, I filmed myself explaining what the activity is for, why it is a good idea, how to build the platform, how to make the stars, and the additional ancillary activities and materials teachers can use. I recorded it in the science lab, which has bad acoustics, but the video was at least of good quality and decently lit. I edited the pieces together without too much fancy stuff (Steve didn’t want much of that) and submitted it via Dropbox. It wound up being too big a file, so I had to compress it and resend it. This was all done by May 20th or so, right before graduation (I was working on three other videos and a grant application at the same time).

A still frame from the video I made explaining the 3D star model activity. I’m demonstrating how to make and hang the stars.

Yesterday the magazine arrived and there was my article starting on Page 31. It has been a very long road since I first attempted to do this activity back in 1993 at Juab High School. I’ve done it many times since, including at my NASA Explorer Schools workshops at JPL, at Provo Canyon School, and now at Walden School of Liberal Arts. I’ve made a smaller scale model that I take to workshops as well as the full-scale model’s stars and platform.

There’s actually a rather clever pun here . . . In college, I sang this poem as a song in BYU’s Oratorio Choir. The middle photo on this inserted page is of M16, the Eagle Nebula. I took this photo myself using the 24 inch reflector at Mt. Wilson Observatory as part of the TIE (Telescopes in Education) program.

Since the last time I had my students make the model in class, two more star systems have been discovered by Kevin Luhman, an astronomer with Pennsylvania State University’s Center for Exoplanets and Habitable Worlds. These systems are now considered to be the 3rd and 4th closest star systems to us.

A list of the nearest star systems. Since I published this, a new star system 7.2 light years away has been discovered.

He used data from WISE (the Wide-field Infrared Survey Explorer) to locate the first system, now called Luhman 16 or WISE 1049-5319. This system is a binary, with a late L-type star and a T-type star orbiting each other. Although they are called brown dwarfs, their actual color would be closer to magenta. Observations with large telescopes have shown that Luhman 16A has a fairly plain atmosphere but that Luhman 16B has a crazy, turbulent atmosphere with hot and cold patches where it could be raining silicates and molten iron. Astronomers were able to determine the temperature and appearance of the atmosphere using Doppler imaging techniques. Here’s an article about it: http://www.pbs.org/wgbh/nova/next/space/the-first-weather-map-of-a-brown-dwarf/

A surface map of the brown dwarf star Luhman 16B, created by Doppler imaging.

Luhman 16A and B have an orbital period of about 25 years. Astrometric observations (measuring the exact position of a star compared to others in the same field) show a wobble in the stars’ paths that may indicate a large Jupiter-class planet orbiting one of them at about 3 AU. This system is only 6.6 light years away, in the constellation Vela, and it has knocked Wolf 359 out of its standing as the third closest star system to Earth after Alpha Centauri and Barnard’s Star. It’s also the closest known system to Alpha Centauri.

A comparison of different sizes and colors of stars. The large yellow disk at the left is our sun. The next star is an M5-6 red dwarf. The next is an L-class brown dwarf. The next is a T-class brown dwarf, which is actually more magenta in color. The far right object is Jupiter. Notice Jupiter is actually a bit larger than the red or brown dwarfs, but it is much less dense. The T-class brown dwarf is at least 13 times the mass of Jupiter, and has just enough mass and density to ignite deuterium fusion in its core. But what of the objects between Jupiter and L-class stars? Are they really stars if no fusion occurs?

Kevin Luhman also discovered another nearby brown dwarf just this year (2014), called WISE 0855-0714. This one is so small and cool that it should probably be classified as a sub-brown dwarf or Super Jupiter (Super Juper?) and could be a rogue planet instead of a star, with 3-10 times the mass of Jupiter. Since a mass of 13 or more Jupiters is needed for deuterium fusion, this object cannot really be considered a star. Its surface temperature is estimated to be 225 to 260 K, or -48 to -13°C, about the temperature of a balmy day in Antarctica. It has a high proper motion and a large parallax, putting it at a distance of about 7.2 light years in the constellation Hydra. That means Wolf 359 has been kicked down to fifth place. Here’s an article about this rather cool object: http://www.nasa.gov/jpl/wise/spitzer-coldest-brown-dwarf-20140425/#.U6nK5M1Tw0M
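The distance figure comes straight from the parallax. Here is a quick sketch, with the parallax rounded to roughly the value reported at the time (treat it as approximate):

```python
PC_TO_LY = 3.26156  # light years per parsec

def parallax_to_light_years(parallax_arcsec):
    """Distance in light years from a trigonometric parallax in arcseconds:
    d [pc] = 1 / parallax [arcsec]."""
    return (1.0 / parallax_arcsec) * PC_TO_LY

# WISE 0855-0714: parallax of roughly 0.45 arcsec (approximate early value)
print(f"{parallax_to_light_years(0.45):.1f} light years")  # about 7.2
```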

Distances of the Sun’s closest neighbors. The next star out (at least for now) is Wolf 359.

All of this goes to show that we don’t yet know everything there is to know about our stellar neighborhood. Entire star systems have been hiding in plain sight, with some amazing characteristics. My article is already obsolete, and it just came out yesterday.

Recent Developments: Spring, 2014

An RGB combined image of one of our possible targets for NITARP. This image takes the 4.6 micron filter as blue, the 12 micron filter as green, and the 22 micron filter as red.

I haven’t written any posts recently for this blog for several reasons: partly because I’m not actively teaching astronomy or astrobiology this semester (winter 2014) and partly because I’ve been so busy with so many things that I haven’t had time to stay up to date. I’ve written several grants, traveled to Boston to present at the National Science Teachers Association conference, taken an online class to prepare for our move to an International Baccalaureate school, finished a video for the Loveland Living Planet Aquarium, etc. But in the midst of all this craziness there have been developments related to astronomy education. I will explore each of these in more detail in later posts, but for now here’s a rundown/summary of what’s happening in my life:

BYU-RET Program:

I met Dr. Eric Hintz of Brigham Young University’s Physics and Astronomy Department at the AAS meeting in Washington, D.C. this January. He told me more about the NSF-funded Research Experiences for Teachers (RET) program at BYU. As soon as I returned home I filled out the online application, and in April I was notified that I had been accepted into the program and will work with Dr. Hintz for a 10-week period this summer. It carries a stipend roughly equal to my salary at Walden School, and I will be an Adjunct Research Faculty member at BYU for the summer. I get to be a professional astronomer and make some extra money, plus bring back $1200 worth of equipment for my classes!

22 micron filter image of the same target, inverted to better see the stars. The blue circle marks the target coordinates, but in this case the cluster of stars has created source confusion in the software. The bright star to the east (left in this image) is probably the actual red giant we want to study: it gets brighter from WISE 1 through 4, whereas the other stars get dimmer. There is also a great deal of nebulosity, as one would expect for a star cluster.

Air Force Association Award:

In my frequent searches for opportunities in aerospace education, I looked through the Air Force Association website. I had received a small grant from them several years ago that helped fund our science demonstration/showcase program at Walden, where students develop lesson plans and demonstrations that they present to classes in our elementary school and to the public at an evening Science Showcase. Looking through the site, I came across an application for a Teacher of the Year award available through each chapter of the AFA. There are three chapters in Utah, and given my background with NASA educational programs, I figured I had a good shot. I applied in February and received a phone call from Grant Hicenbotham on April 17 (right in the middle of parent-teacher conferences) letting me know that I had won the Salt Lake Chapter award and the State AFA Teacher of the Year award. There would be a cash award and a salmon barbecue dinner at the Hill Aerospace Museum on June 18.

NITARP Training and Tasks:

Since we have now chosen our topic to study for NITARP (red giant stars that may be consuming their own planets), our next step is to develop a solid list of target stars. The four teachers (myself, John, Stef, and Elin) have held telecons with Dr. Rebull each Monday at 5:00 to write up our proposal and to build a master list from three previous papers that studied stars with excess lithium and unusually fast rotation. We’re going to take the next step and see whether the same stars have an excess of IR radiation in bands that would indicate a shell or ring of dusty debris orbiting them, left over from planets that have been pulled apart. But not all of the stars on the lists in the respective papers are good candidates (or possibly even stars). So after merging the lists, we’ve gone through each star in the Finder Chart tool of IRSA (the NASA/IPAC Infrared Science Archive), which allows the images held at IPAC to be loaded and analyzed.

Sample of the merged list of stars – yellow areas are stars I’m assigned to analyze for our final decision (all of us did all the stars for the first run through). Pink areas are the average and standard deviation of ratings.

We’re searching through several surveys, including DSS (the Digitized Sky Survey, scanned from optical photographic plates), 2MASS (the Two Micron All Sky Survey), WISE (the Wide-field Infrared Survey Explorer), and IRAS (the Infrared Astronomical Satellite). It’s been a tedious job to analyze the images and see if we have a good point source – not too bright or saturated, but without other stars overlapping the target star. The IRAS data and early DSS data were gathered decades ago, and some of the stars have high enough proper motions that they have drifted in the field of view. Some of the images have so many sources clumped together in star clusters that the software that did the data reduction got confused. In other cases we have nebulosity or other sources of contamination. As of this writing in mid-June, we are going through the list of worrisome stars and deciding which ones to drop.
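We did this vetting by eye in Finder Chart, but the same neighbor check can be sketched programmatically. The example below is an assumption-laden illustration rather than our actual workflow: it uses astroquery’s IRSA interface (module path and the 'fp_psc' 2MASS Point Source Catalog name as I understand them; older astroquery versions import from astroquery.irsa instead) and made-up target coordinates, and it simply counts how many cataloged sources fall within a small radius of the target.

```python
from astropy import units as u
from astropy.coordinates import SkyCoord
from astroquery.ipac.irsa import Irsa  # older versions: from astroquery.irsa import Irsa

def neighbor_count(ra_deg, dec_deg, radius_arcsec=15.0):
    """Count 2MASS Point Source Catalog entries near a target position.
    More than one entry inside the radius hints at possible source confusion."""
    target = SkyCoord(ra=ra_deg * u.deg, dec=dec_deg * u.deg, frame='icrs')
    table = Irsa.query_region(target, catalog='fp_psc', spatial='Cone',
                              radius=radius_arcsec * u.arcsec)
    return len(table)

# Hypothetical target coordinates, not one of our actual stars
n = neighbor_count(120.0, -35.0)
print(f"{n} 2MASS sources within 15 arcsec -> "
      f"{'check for confusion' if n > 1 else 'looks like a clean point source'}")
```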

The return window in IRSA Finder Chart. It is displaying the same coordinates (which are typed in) for DSS, 2MASS, and WISE. In this case the IR source is a planetary nebula surrounding the target star.

Meanwhile, I have begun training the students who will go with me to Caltech at the end of July. I’ve been showing them the software and databases, explaining what we will be doing in more detail, and preparing them. We’ve taken a break because two of the three were gone on an expedition to India for three weeks but are now back safely. Once we get to Caltech, we’ll learn data reduction procedures, how to do photometry at different wavelengths for the target stars, and how to chart all of this as a Spectral Energy Distribution (SED) curve. Hopefully, something will come out of this analysis that we can draw conclusions from.
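For a rough idea of what building an SED involves, here is a hedged Python sketch: it converts cataloged magnitudes to flux densities with F = F0 × 10^(-m/2.5) using approximate published zero points for 2MASS and WISE (quoted from memory, so treat them as placeholders), then plots flux against wavelength. The magnitudes themselves are invented for a hypothetical target.

```python
import matplotlib.pyplot as plt

# Approximate effective wavelengths (microns) and zero-point flux densities (Jy)
# for 2MASS J/H/Ks and WISE W1-W4; values quoted from memory as placeholders.
BANDS = {
    "J":  (1.24, 1594.0), "H":  (1.66, 1024.0), "Ks": (2.16, 666.8),
    "W1": (3.4, 309.5),   "W2": (4.6, 171.8),   "W3": (12.0, 31.7), "W4": (22.0, 8.4),
}

def mag_to_jy(mag, zero_point_jy):
    """Flux density in janskys from a magnitude: F = F0 * 10**(-m / 2.5)."""
    return zero_point_jy * 10 ** (-mag / 2.5)

# Invented magnitudes for a hypothetical target star
mags = {"J": 8.1, "H": 7.5, "Ks": 7.3, "W1": 7.2, "W2": 7.1, "W3": 6.5, "W4": 5.0}

wavelengths = [BANDS[band][0] for band in mags]
fluxes = [mag_to_jy(mag, BANDS[band][1]) for band, mag in mags.items()]

plt.loglog(wavelengths, fluxes, "o-")
plt.xlabel("Wavelength (microns)")
plt.ylabel("Flux density (Jy)")
plt.title("Toy SED: an infrared excess shows up as extra flux at long wavelengths")
plt.show()
```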

An Article for The Science Teacher:

Over a year ago I wrote and submitted an article to The Science Teacher, NSTA’s journal for high school teachers. It was finally accepted, and is now in print. But I’ll explain more about this in my next post.

So no rest for the weary, onward and upward, and no matter where you go, there you are.

The State of the Universe

The Rayburn House Office Building on Capitol Hill

Room 2325, where the State of the Universe Briefing was held

On January 9th, the last day of the American Astronomical Society conference in Washington, D.C., I had the opportunity to do something quite unusual. I attended a briefing on the State of the Universe presented by the President of AAS and several noted science education experts. It was held in Room 2325 at the Rayburn House Office Building on Capitol Hill. Here is the flyer I got describing it:

aasbriefing_flyer_9jan2014

How I got involved in this is a bit convoluted, which is how these things usually are. Dr. Luisa Rebull, the director of NITARP (the program that brought me to the AAS conference in the first place), had been asked whether one of the teachers participating in the program could testify at a briefing before Congress. It was to be held concurrently with the AAS conference, since the conference was just outside Washington at the Gaylord resort at National Harbor, Maryland. Luisa sent the e-mail on to us and of course I volunteered; it sounded like a fun opportunity. I wasn’t chosen to speak, but as I was one of the first to respond, I was offered the chance to attend the meeting anyway.

I woke up and packed my bags, since I would not be able to return to the hotel. I checked out at the front desk and waited by the front door for the others to arrive. This shindig was planned by Josh Shiode, a public policy intern with the AAS. Several other NITARPers, their students, and SOFIA AAAs and EPO personnel were with us. Hotel cars and drivers loaded us up and drove us up the Maryland bank of the Potomac until we pulled off onto Capitol Hill and unloaded on Independence Ave. in front of the Rayburn Building. I had my luggage, computer bag, and camera with me.

High School students getting badges

After we went through security, we took the elevator upstairs and walked down the marble corridor to 2325, which is one of the Science, Space, and Technology Committee rooms. I stashed my luggage under the refreshment table and got my badge. We had some time to kill before the briefing actually started, so I chatted with some of the participants and took photos. I hadn’t had breakfast, and we were asked not to eat any of the refreshments until the congressional staffers and other guests arrived. I got shaky enough that I had to sneak a couple of cookies.

Before the Briefing

The other guests finally arrived and we could start eating. Since I was the only one with a decent SLR camera, Josh asked me to take some pictures of the speakers. The room was full to overflowing, with people standing. The meeting was introduced by Representative Lamar Smith of Texas, chairman of the House Science, Space, and Technology Committee, who spoke of his love for his college physics and astronomy courses and how his orange tabby cat is named Betelgeuse. Dr. Megan Urry, President-Elect of the AAS, then introduced the speakers.

Dr. David Helfand, showing the famous Apollo 8 photo of Earthrise over the Moon. This photo changed our whole viewpoint of Earth.

Dr. David Helfand, President of the AAS, was the lead speaker. He spoke on the State of the Universe and showed slides comparing what we know now with what we knew 45 years ago, when he took astronomy in college. We have truly discovered a great deal in what will probably be known as a Golden Age for astronomy, but this Golden Age might be drawing to a close as reduced budgets slow the pace of discovery. Here is his PowerPoint with the slides from his remarks:

the_state_of_the_universe_2014

Ari Buchalter and Dr. David Helfand at Columbia College

Ari Buchalter, Chief Operating Officer of MediaMath, a business marketing and digital advertising analytics firm, spoke on the importance of STEM education and science literacy for all areas of business and society. He received a PhD in astronomy from Columbia University (where he worked with Dr. Helfand) and, at Caltech, developed programs to analyze data from radio telescopes that mapped the afterglow of the Big Bang. He then went into business software development and found that his ability to think logically, to solve problems, to program computers, and to work with data helped him develop the company’s analytical tools. As a computer technology teacher, I had actually heard of him before: he is a big proponent of teaching computer programming in K-12 schools. Here are his notes for his remarks:

aribuchalterremarks_forweb

Blake Bullock and Ari Buchalter at the State of the Universe briefing

Blake Bullock, Business Development Director for Civil Air and Space at Northrop Grumman, spoke on how she has used her knowledge of STEM fields in her work with the team designing and building the James Webb Space Telescope (JWST). Because it needs a mirror much larger than the Hubble Telescope’s, it can’t fit into any existing rocket, so the mirror had to be made in segments that fold up. Since it is an infrared telescope, it must be kept extremely cold, so they had to develop a five-layer sunshield, each layer about the size of a tennis court, that unfurls after launch. Detecting the formation of the first galaxies requires instruments more sensitive than any ever built, which in turn required new technologies that are already being used in other industries. For example, the mirrors have to be ground with extreme precision: if one segment were blown up to the size of Texas, the imperfections would be about the size of a grasshopper. The device invented just to measure the curvature of the mirrors for JWST is now being used to diagnose eye disease. JWST has certainly been beneficial to Utah, since the primary mirror segments are made of beryllium, the only metal light enough and tough enough to work in such a large space telescope, and the main source of beryllium ore is mined in Utah. Here are Blake’s notes:

blakebullockremarks_forweb

Peggy Piper before the briefing

The final speaker was Peggy Piper, who, like me, is both a SOFIA Airborne Astronomy Ambassador (Cycle 0 in her case) and a multi-time NITARP participant. She is a high school teacher from Wisconsin and is now transitioning into a role as an informal educator at Yerkes Observatory. She told of how she became involved with Yerkes and how that led to bringing astronomers into her classroom, which in turn led to her involvement with NITARP and SOFIA. She gave examples of students who have been inspired by these programs and who developed skills and abilities in math, science, and computers they never had before. Here are Peggy’s remarks:

peggypiperremarks_forweb

Peggy Piper speaking at the State of the Universe briefing, Jan. 9, 2014.

I don’t know what impact we made on the people in the room. Most of the members of the Science, Space, and Technology Committee did not attend personally, but sent their aides and staff members. The overall message – that investing in astrophysics and STEM in general is of great benefit to our country – may have fallen on deaf ears. But maybe not. Much that happens in Congress is “for the record” and is said not because anyone is listening but because it must be officially said. This was the official position statement of the American Astronomical Society regarding the need for astronomy research in the United States. At least I can say I was there, wearing my SOFIA flight jacket and flying the flag for STEM education.

Dr. Meg Urry, President Elect for AAS, speaking with Wendi Lawrence and high school students at the State of the Universe briefing, Jan. 9, 2014.

I took some more photos after the session was over, then got my luggage out from under the refreshment table and headed back outside. I had arranged for my airport shuttle to meet me on the steps of the Rayburn Building on Independence Avenue. Two ladies asked me to take their photo with the Capitol Building in the background, so I asked them to return the favor. It was good to be back on Capitol Hill as something other than a tourist. It’s been a long journey since I was a congressional intern here in 1982.

David Black with the U. S. Capitol Building, Jan. 9, 2014.

It was a short drive to Reagan National Airport, and security took no time at all to get through. I got a Dunkin’ Donut while waiting and worked on blog posts. I wound up sitting across the aisle from Dr. Eric Lindt from BYU, whom I had met at the conference and whom I hope to get a chance to work with. The flight was uneventful but long, with four hours in the same seat, though I was glad to have an aisle seat on the left side of the plane so I could stretch out my right leg. My wife and two youngest children picked me up at the airport.

It was a great conference that expanded my knowledge and allowed me to rub shoulders with the leaders of the astronomy community. Now I must pour myself back into my normal life as if nothing has changed. But I can’t help thinking that we’ve come a long way from what we knew at the beginning of the space race, and that the destiny of humanity still lies in space.
