Wednesday, August 22, 2012

Fit those spectra!!!

Ignoring the continuum normalization (that part of my code still has issues... more on that later), I gave my "data" spectrum (yellow) a Doppler shift of 15 km/s, an SNR (signal-to-noise ratio) of 3000, and a line broadening of 0.1:
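For context, mock data at a given SNR can be generated along these lines; this is a minimal sketch assuming Gaussian noise with a per-pixel sigma of |flux| / SNR (the function name and noise model are my illustration here, not the actual project code):

```python
import numpy as np

rng = np.random.default_rng(42)

def make_mock_data(flux, snr):
    """Add Gaussian noise so each pixel has roughly the given
    signal-to-noise ratio (per-pixel sigma = |flux| / snr).
    A sketch, not the actual data-generation code."""
    sigma = np.abs(flux) / snr
    noisy = flux + rng.normal(0.0, sigma)
    return noisy, sigma
```

At SNR = 3000 the noise is tiny, which is part of why the fit below comes out so close to the true parameters.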

Before any fitting:

So, what I do in the fitting is transform the model as if it had been broadened and shifted by some given amounts, then evaluate the "likelihood" at those values (this is the Bayesian methodology that justifies minimizing chi-squared as a fitting mechanism). The Levenberg-Marquardt algorithm then finds the parameter values with the highest likelihood, which should be the correct values of the Doppler shift and line broadening.
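The transform-then-score idea can be sketched like this, assuming a pixel-space Gaussian broadening kernel and a constant per-pixel error; the function names and the broadening parameterization are illustrative, not my actual code:

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def transform_model(wave, flux, rv_kms, broadening):
    """Doppler-shift the model by rv_kms, then broaden it with a
    Gaussian kernel of width `broadening` (in pixels here, for
    simplicity). A sketch of the idea, not the actual code."""
    shifted_wave = wave * (1.0 + rv_kms / C_KMS)   # Doppler-shifted wavelengths
    shifted = np.interp(wave, shifted_wave, flux)  # resample onto the data grid
    if broadening > 0:
        half = int(np.ceil(4 * broadening))
        x = np.arange(-half, half + 1)
        kernel = np.exp(-0.5 * (x / broadening) ** 2)
        kernel /= kernel.sum()                     # normalized Gaussian kernel
        shifted = np.convolve(shifted, kernel, mode="same")  # line broadening
    return shifted

def log_likelihood(params, wave, data, model_flux, sigma):
    """Gaussian log-likelihood: maximizing this is equivalent to
    minimizing chi-squared."""
    rv_kms, broadening = params
    model = transform_model(wave, model_flux, rv_kms, broadening)
    return -0.5 * np.sum(((data - model) / sigma) ** 2)
```

The optimizer then just climbs this log-likelihood surface in (shift, broadening) space.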

After cross-correlation (to find an initial guess) and Levenberg-Marquardt:
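The cross-correlation initial guess works by sliding the model past the data and taking the lag with the strongest correlation. A brute-force sketch in pixel lags (the real code would convert the best lag to km/s using the wavelength sampling; the function name is made up):

```python
import numpy as np

def ccf_initial_guess(data, model, max_lag=50):
    """Slide the model past the data over integer pixel lags and
    return the lag with the highest cross-correlation. A toy
    sketch of the initial-guess step, not the actual code."""
    d = data - data.mean()    # remove the continuum level so only
    m = model - model.mean()  # the line shapes drive the correlation
    lags = np.arange(-max_lag, max_lag + 1)
    ccf = [np.sum(d * np.roll(m, lag)) for lag in lags]
    return lags[int(np.argmax(ccf))]
```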
Let's zoom in on that fit:




 What??? I can't see the data; the values of the transformed model are practically the same! The fit is too good! Where is it? Zoom in further to see the slight discrepancy due to noise:



The fitting algorithm estimates a Doppler shift of 14.9999878 (I expected 15) and a line broadening of 0.0999971716, with no prior information! (I started the line broadening at 0 and expected to get 0.1.)

Not bad!


Wednesday, August 8, 2012

Analogous Inspiration


 When I was a young kid (4-10), I wanted to be an astronaut. Then, suddenly, my ‘dream’ changed – I started telling people that I wanted to be a lawyer. I always thought that the reason for this change was that my dad always said that being a lawyer was a good career, but going through old papers while I was at home made me realize maybe this wasn’t the whole story.

I found a “newspaper” that I had written in fourth grade. Our class had a variety of student-run “businesses”: some people made crafts out of paper and traded them for rocks others collected on the playground. To be a part of this economy, I wrote a sporadic “newspaper,” proudly typed up in Microsoft Publisher and printed on long paper (to be more authentic), which I traded to get my share of rocks and paper bracelets. The teacher tried to stop us, as we used a lot of classroom supplies, but, I digress…

One of these "newspaper" "issues" contained a story about the space shuttle Challenger and its 1986 explosion. Reading it now brings back memories of how shocked I was when I discovered that this had happened, that seven people had died. Even though I was reading about it fourteen years after the fact, I felt as if those people had just died. With my father away in Afghanistan, I was profoundly struck by the mortality of these people I saw as heroes (no doubt there was a fearful analogy in my fourth-grade mind to my father, whom I also saw as my hero).

I am reminded of this shred of my past by the resoundingly successful landing of Curiosity on Mars, an emotional event I watched while observing at Palomar Observatory. Early in the evening on the day of the scheduled landing I excitedly texted my uncle, whose ten-year-old son has repeatedly told me that he wants to go to Caltech someday, to let him know about the landing.

After the message was sent, however, I worried, perhaps irrationally, about what would happen if Curiosity were to crash to the surface in a fiery mess instead of land as it was supposed to. Would my cousin lose faith in Caltech? NASA? Science? I remembered all too clearly my own first impression of science’s imperfections, and I was worried that were Curiosity to fail, it would have some momentous effect on my cousin’s life plans.

My fears were unfounded. Curiosity landed on the surface of Mars safely, and as I watched the jubilant celebration of the scientists and engineers involved, on a laptop in the data room of the Hale 200-inch, I imagined my cousin watching this celebration and how much this success would inspire him. I imagined him tracking the progress of the MSL over the next weeks, months, or even years, just as I voraciously consumed everything I could about Voyager when I learned about it as a child. I was fascinated with the idea that somewhere, something we humans had made was doing science we couldn’t do ourselves. As I traced my fingers along a map of our solar system with Voyager’s trajectory overlaid, so I hope my cousin will someday have a map of Mars along which to trace a similar path.

Curiosity is inspirational. Now, I don’t need an inspiration to spur me towards science, having found my own way, but the awe Curiosity inspires might lead many children to follow it into a field of science.

There is also something else that I have realized in the ten years since I wrote that sad article about Challenger. There can be no progress without risk. Those imperfections that manifest as disaster are not failure, but merely poorly aimed steps forward.

Wednesday, August 1, 2012

Coding (more) Efficiently in Python! Part 1: prun

Since it’s been a while, I think I will write about how to make your Python code more efficient! (Also, writing this up means I can stop being stuck on my project for a few minutes… hence my ulterior motive…)

Having never had formal training in coding (barring a high school course, in which we spent several months on ‘Hello World’… so, no formal training), I am what I term a ‘functional coder.’ I can write code, but that’s about all. It might run and get the job done, but when you look at the actual raw code you will see deeply nested for loops, a stark lack of commenting, variable names like ‘bunny’ and ‘widget’ (hey, it seemed cute at the time! Who knew I would need to come back to this code later?), and almost every other mark of a rookie you can think of. This summer, this has begun to haunt me: I need to run little chunks of code thousands of times as I approach my correct parameters in a Markov Chain Monte Carlo (a process I am also trying to make more efficient, but more on that later).

I started out knowing very little about making code efficient. Well, I did realize in Ay117 (the astrostats course I took this past spring) that my five-deep for loops could cause laughter when shown to others. I don’t want to be a comedian, though: I want to be a competent coder!

Tim (a graduate student in Professor Johnson’s group) showed me a couple of cool tricks for making code more efficient earlier this summer, all of which I have been using ever since! Here I will show my personal favorite diagnostic tool, prun.

prun is a magic function that is part of IPython’s ‘core.magic’ module (so, no need to import anything! It’s super easy!). You call it by simply writing, either on your command line or in a development environment:

%prun functionname()

where functionname() is the function call you want to profile. There are also some options you can pass as flags; these are described in the documentation of prun.
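Under the hood, %prun uses Python’s built-in cProfile module, so an equivalent profile can be produced in a plain script as well. A sketch with made-up function names:

```python
import cProfile
import io
import pstats

def slow_helper(n):
    """A deliberately slow inner function, so it shows up in the profile."""
    return sum(i * i for i in range(n))

def my_function():
    """The function we want to profile: calls slow_helper five times."""
    total = 0
    for _ in range(5):
        total += slow_helper(100000)
    return total

# Profile one call to my_function
profiler = cProfile.Profile()
profiler.enable()
my_function()
profiler.disable()

# Format the results as a table, sorted by cumulative time here
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(10)
report = stream.getvalue()
print(report)
```

The resulting table lists, for each function called, how many times it ran and how much time it took, just like the prun output shown below.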

This is what you might see when running prun from the command line.


For me, since I will be calling this function many, many times, I want to try to eliminate whatever is taking the most time. I see that the total time for this function is 1.584 seconds, which seems pretty good, right? Only a second and a half? You can wait a second and a half for your function to run, right?

That is what I thought at first, too. 1.5 seconds seems fine at first glance, but the nature of my project is such that I will need to run this particular method hundreds or even thousands of times. Suddenly, a couple of seconds isn’t sounding so good anymore… Luckily, prun separates out for me which functions are taking the most time to run, and how many times they are being called. By looking at the chart generated by prun, I can see that the functions taking the most time are the interpolation and shift_lhood, a function I wrote. There are two easy ways to fix that! First, you might notice that the interpolation is being called 11 times. By examining the code, I noticed that the same function is being interpolated 11 times. Silly me, I hadn’t realized that I did that… well, that’s an easy fix!
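That fix amounts to hoisting the repeated interpolation out of the loop. A toy illustration of the pattern (made-up function names, not my actual code):

```python
import numpy as np

def shifted_models_slow(wave, flux, shifts):
    """Rebuilds the dense grid and re-interpolates the same model
    on every iteration: the 'same interpolation 11 times' mistake."""
    out = []
    for s in shifts:
        dense_wave = np.linspace(wave[0], wave[-1], 10 * len(wave))
        dense_flux = np.interp(dense_wave, wave, flux)  # identical every time
        out.append(np.interp(dense_wave * (1.0 + s), dense_wave, dense_flux))
    return out

def shifted_models_fast(wave, flux, shifts):
    """Interpolate once, then reuse the result for every shift."""
    dense_wave = np.linspace(wave[0], wave[-1], 10 * len(wave))
    dense_flux = np.interp(dense_wave, wave, flux)
    return [np.interp(dense_wave * (1.0 + s), dense_wave, dense_flux)
            for s in shifts]
```

Both versions return identical results; the fast one simply does the shared work once.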

The second thing I can do to improve the efficiency of this function is to improve my shift_lhood function itself. The old version took, according to the profile shown above, 0.121 seconds PER CALL to run (see the fourth row, fifth column). Here is the improved version of that function, shift_lhood, called on its own. To improve it, I tried using matrices instead of for loops to do element-by-element manipulations. Now it only takes 0.093 seconds! What an improvement!
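I don’t reproduce my actual shift_lhood here, but the loop-to-arrays rewrite looks roughly like this (a sketch with made-up inputs; both functions compute the same chi-squared-based quantity):

```python
import numpy as np

def shift_lhood_loop(data, model, sigma):
    """Element-by-element chi-squared with an explicit for loop
    (the slow style)."""
    chi2 = 0.0
    for i in range(len(data)):
        chi2 += ((data[i] - model[i]) / sigma[i]) ** 2
    return -0.5 * chi2

def shift_lhood_vec(data, model, sigma):
    """The same computation with whole-array operations, letting
    numpy do the element-by-element work in compiled code."""
    return -0.5 * np.sum(((data - model) / sigma) ** 2)
```

The speedup grows with the array size, since the loop overhead is paid per element in the first version and essentially not at all in the second.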




prun is a useful function, and it’s my first step on the long journey to becoming a better coder (and to not spending unneeded days running code)!