Wednesday, August 24, 2016

Proxima Centauri's planet and the hazards of cool animations

It was officially announced today that Proxima Centauri has a potentially earthlike planet.  That's great, especially for fans of science fiction.  Here is a relevant video by Nature:

Did you spot the mistake?  The scientists discovered the planet by seeing the wobble in the star's motion (measured by painstaking spectroscopy of the starlight, using the Doppler shift of the spectrum to "see" the tiny motion of the star).  The animation tries to show this at 0:55-1:12.  The wobble arises because the star and planet orbit a common center of mass located on the line between them.  The video, however, seems to show the center of mass of the star+planet system itself tracing out a circle around empty space.  Whoops.  Someone should've caught that.  Still an impressive result.
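To get a feel for how tiny the effect being measured is, here is a rough two-body sketch in Python, using approximate published numbers (star mass ~0.12 solar masses, planet ~1.3 Earth masses, separation ~0.05 AU, period ~11.2 days - illustrative values, not authoritative ones):

```python
import math

# Approximate parameters (illustrative values)
M_sun, M_earth, AU = 1.989e30, 5.972e24, 1.496e11   # kg, kg, m
m_star = 0.12 * M_sun        # Proxima Centauri, ~0.12 solar masses
m_planet = 1.3 * M_earth     # the planet, ~1.3 Earth masses
a = 0.05 * AU                # star-planet separation, m
P = 11.2 * 86400.0           # orbital period, s

# Both bodies orbit the common center of mass, which lies on the line
# between them.  The star's orbit is tiny because m_planet << m_star:
r_star = a * m_planet / (m_star + m_planet)

# The star's orbital speed is what the Doppler spectroscopy measures:
v_star = 2 * math.pi * r_star / P

print(f"star's orbit radius: {r_star/1e3:.0f} km")
print(f"radial-velocity wobble: {v_star:.1f} m/s")
```

The star's own orbit is only a couple of hundred kilometers in radius, and its speed comes out on the order of 1 m/s - detecting a ~1 m/s Doppler shift in starlight is the painstaking part.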

Update:  The makers of the video have updated with a link to a more accurate animation of the Doppler approach:

Tuesday, August 23, 2016

Statistical and Thermal Physics

Eight years ago I taught Rice's undergraduate Statistical and Thermal Physics course, and now, after teaching the honors intro physics class for a while, I'm returning to it.  I posted about the course here, and I still feel the same - the subject matter is intellectually very deep, and it's the third example in the undergraduate curriculum (after electricity & magnetism and quantum mechanics) where students really need to pick up a different way of thinking about the world, a formalism that can seem far removed from their daily experience.

One aspect of the course, the classical thermodynamic potentials and how one goes back and forth between them, nearly always comes across as obscure and quasi-magical the first (or second) time students are exposed to it.  Since the last time I taught the course, a nice expository article about why the math works has appeared in the American Journal of Physics (arxiv version).  
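For readers who want the short version: the back-and-forth between the potentials is nothing more than the Legendre transform.  For example, trading the entropy for the temperature as the natural variable takes the energy U(S,V) to the Helmholtz free energy F(T,V) - a standard textbook manipulation, sketched here:

```latex
dU = T\,dS - p\,dV, \qquad T \equiv \left(\frac{\partial U}{\partial S}\right)_{V}.

\text{Define the Helmholtz free energy } F \equiv U - TS. \text{ Then}

dF = dU - T\,dS - S\,dT = -S\,dT - p\,dV,

\text{so } F = F(T,V), \qquad S = -\left(\frac{\partial F}{\partial T}\right)_{V}, \qquad p = -\left(\frac{\partial F}{\partial V}\right)_{T}.
```

The same move generates the enthalpy and the Gibbs free energy by trading V for p; nothing magical, just a change of which variable is "natural."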

Any readers have insights/suggestions on other nice, recent pedagogical resources for statistical and thermal physics?  

Sunday, August 14, 2016

Updated - Short items - new physics or the lack thereof, planets and scale, and professional interactions

Before the start of the new semester takes over, some interesting, fun, and useful items:
Update: This is awesome.  Watch it.
  • The lack of any obvious exotic physics at the LHC has some people (prematurely, I suspect) throwing around phrases like "nightmare scenario" and "desert" - shorthand for the possibility that any major beyond-standard-model particles may be many orders of magnitude above present accelerator energies.  For interesting discussions of this, see here, here, here, and here.  
  • On the upside, a new result has been published that may hint at something weird.  Because protons are built from quarks (and gluons and all sorts of fluctuating ephemeral stuff like pions), their positive charge has some spatial extent, on the order of 10⁻¹⁵ m in radius.  High precision optical spectroscopy of hydrogen-like atoms provides a way to look at this, because the 1s orbital of the electron in hydrogen actually overlaps with the proton a fair bit.  Muons are supposed to be just like electrons in many ways, but about 200 times more massive - as a result, a bound muon's 1s orbital overlaps more with the proton and is more sensitive to the proton's charge distribution.  The weird thing is, the muonic hydrogen measurements yield a different size for the proton than the electronic hydrogen ones.  The new measurements are on muonic deuterium, and they, too, show a surprisingly small proton - smaller than in the ordinary hydrogen case.  Natalie Wolchover's piece in Quanta gives a great discussion of all this, and is a bit less hyperbolic than the piece in ars technica.
  • Rumors abound that the European Southern Observatory is going to announce the discovery of an earthlike planet orbiting in the putative habitable zone around Proxima Centauri, the nearest star to the sun.  However, those rumors all go back to an anonymously sourced article in Der Spiegel.  I'm not holding my breath, but it sure would be cool.
  • If you want a great sense of scale regarding how far it is even to some place as close as Proxima Centauri, check out this page, If the Moon were One Pixel.
  • For new college students:  How to email your professor without being annoying.
  • Hopefully in our discipline, despite the dire pronouncements in the top bullet point, we are not yet at the point of having to offer the physics analog of this psych course.
  • The US Department of Energy helpfully put out this official response to the Netflix series Stranger Things, in which (spoilers!) a fictitious DOE national lab is up to no good.  Just in case you thought the DOE really was in the business of ripping holes to alternate dimensions and creating telekinetic children.
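On the muonic hydrogen point above: a quick reduced-mass estimate (standard Bohr-model scaling, using textbook mass ratios) shows why the muon is so much more sensitive to the proton's charge distribution:

```python
# Bohr-model estimate: the 1s orbital radius scales as 1/(reduced mass),
# and the probability density at the nucleus scales as 1/(radius)**3.
m_mu_over_me = 206.77      # muon/electron mass ratio
m_p_over_me = 1836.15      # proton/electron mass ratio

# Reduced masses, in units of the electron mass
mu_e = m_p_over_me / (1.0 + m_p_over_me)                           # electron + proton
mu_mu = m_mu_over_me * m_p_over_me / (m_mu_over_me + m_p_over_me)  # muon + proton

shrink = mu_mu / mu_e        # how much smaller the muonic 1s orbital is
overlap_boost = shrink ** 3  # enhancement of |psi(0)|^2, i.e. overlap with the proton

print(f"muonic 1s orbital is ~{shrink:.0f}x smaller")
print(f"overlap with the proton is ~{overlap_boost:.1e}x larger")
```

The muonic orbital is nearly 200 times smaller, so the overlap with the proton is enhanced by a factor of several million - which is why the muonic measurements pin down the proton size so much more tightly.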

Monday, August 08, 2016

Why is desalination difficult? Thermodynamics.

There are millions of people around the world without access to drinkable fresh water.  At the same time, the world's oceans contain more than 1.3 billion cubic kilometers of salt water.  Seems like all we have to do is get the salt out of the water, and we're all set.  Unfortunately, thermodynamics makes this tough.  Imagine that you have a tank full of sea water and a magical filter that lets water through but blocks the dissolved salt ions.  You could drag the filter across the tank - this would concentrate the salt on one side of the tank and leave behind fresh water.  However, this takes work.  You can think of the dissolved ions as a dilute gas, and when you're dragging the membrane across the tank, you're compressing that gas.  An osmotic pressure resists your pushing of the membrane.  Osmotic effects are why red blood cells burst in distilled water and why slugs die when coated with salt.  They're also the subject of a great Arthur C. Clarke short story.
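To put a number on that osmotic pressure: treating the dissolved ions as a dilute (ideal) gas gives the van 't Hoff relation, Π = cRT, where c is the total ion concentration.  For seawater (roughly 0.6 mol/L of NaCl, which dissociates into two ions - approximate numbers):

```python
R = 8.314            # gas constant, J/(mol K)
T = 298.0            # room temperature, K
c_salt = 600.0       # NaCl concentration in seawater, mol/m^3 (~0.6 mol/L, approximate)
c_ions = 2 * c_salt  # NaCl dissociates into Na+ and Cl-

# van 't Hoff: osmotic pressure of a dilute solution, same form as the ideal gas law
Pi = c_ions * R * T            # Pa
Pi_atm = Pi / 101325.0

print(f"osmotic pressure of seawater: ~{Pi_atm:.0f} atm")
```

Roughly 30 atmospheres - that's what pushes back on the magic membrane.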

In the language of thermodynamics, desalination requires you to increase the chemical potential of the dissolved ions you're removing from the would-be fresh water, by putting them in a more concentrated state.  This sets limits on how energetically expensive it is to desalinate water - see here, slide 12.  The simplest scheme to implement, distillation by boiling and recondensation, requires supplying the latent heat of vaporization of the water and is energetically inefficient.  With real-life approximations of the magic filter I mentioned, you can instead drive the process called reverse osmosis and do better.  Still, the take-away message is that it takes energy to perform desalination, for very similar physics reasons that it takes energy to compress a gas.
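Those slide-12-style energy limits follow directly from the osmotic pressure.  At minimum, pushing a cubic meter of fresh water through the membrane against seawater's roughly 30 atm osmotic pressure costs Π·V - a dilute-limit lower bound that ignores the brine getting more concentrated as you go:

```python
R, T = 8.314, 298.0   # gas constant (J/(mol K)) and room temperature (K)
c_ions = 1200.0       # dissolved ion concentration in seawater, mol/m^3 (approximate)
Pi = c_ions * R * T   # osmotic pressure, Pa

V = 1.0               # extract one cubic meter of fresh water
W_min = Pi * V        # minimum work, J (dilute-limit lower bound)
W_kwh = W_min / 3.6e6

print(f"thermodynamic minimum: ~{W_kwh:.1f} kWh per m^3 of fresh water")
```

Real reverse-osmosis plants operate at a few kWh per cubic meter, within a factor of a few of this bound; distillation is far worse because of the latent heat.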

Interestingly, you can go the other way.  Just as you can get useful work out of two gas reservoirs at different pressures, you can imagine using the difference in chemical potential between salt water and fresh water to drive an engine or produce electricity.  In that sense, every time a freshwater stream or river empties into the ocean and the salinity gradient smooths itself out by mixing of its own accord, we are wasting potentially usable energy.  This was pointed out here, and there is now an extensive wikipedia entry on osmotic power.
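The gas-reservoir statement can be made quantitative: the maximum isothermal work from transferring one mole of ideal gas from pressure p₁ down to p₂ is RT ln(p₁/p₂), and the osmotic analogue of that pressure difference is what "osmotic power" schemes try to harvest.  A minimal sketch with an illustrative 10:1 pressure ratio:

```python
import math

R, T = 8.314, 298.0   # gas constant, J/(mol K); room temperature, K

def isothermal_work(p_high, p_low):
    """Maximum work per mole of ideal gas moved isothermally
    from a reservoir at p_high to one at p_low."""
    return R * T * math.log(p_high / p_low)

# Illustrative case: a 10:1 pressure ratio between the two reservoirs
W_gas = isothermal_work(10.0, 1.0)

print(f"work per mole, 10:1 pressure ratio: ~{W_gas:.0f} J")
```

Replacing the gas pressures with the osmotic pressures of sea and river water is the idea behind the osmotic power schemes linked above.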

Saturday, July 30, 2016

Ask me something.

I realized that I haven't had an open "ask me" post in almost two years.  Is there something in particular you'd like me to write about?  As we head into another academic year, are there matters of interest to (grad or undergrad) students?

Sunday, July 24, 2016

Dark matter, one more time.

There is strong circumstantial evidence that there is some kind of matter in the universe that interacts with ordinary matter via gravity, but is otherwise not readily detected - it is very hard to explain things like the rotation rates of galaxies, the motion of star clusters, and features of the large scale structure of the universe without dark matter.   (The most discussed alternative would be some modification to gravity, but given the success of general relativity at explaining many things including gravitational radiation, this seems less and less likely.)  A favorite candidate for dark matter would be some as-yet undiscovered particle or class of particles that would have to be electrically neutral (dark!) and would only interact very weakly if at all beyond the gravitational attraction.

There have been many experiments trying to detect these particles directly.  The usual assumption is that these particles are all around us, and very occasionally they will interact with the nuclei of ordinary matter via some residual, weak mechanism (say higher order corrections to ordinary standard model physics).  The signature would be energy getting dumped into a nucleus without necessarily producing a bunch of charged particles.   So, you need a detector that can discriminate between nuclear recoils and charged particles.  You want a lot of material, to up the rate of any interactions, and yet the detector has to be sensitive enough to see a single event, and you need pure enough material and surroundings that a real signal wouldn't get swamped by background radiation, including that from impurities.  The leading detection approaches these days use sodium iodide scintillators (DAMA), solid blocks of germanium or silicon (CDMS), and liquid xenon (XENON, LUX, PandaX - see here for some useful discussion and links).
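For a sense of the energy scale these detectors chase: elastic scattering kinematics gives a maximum nuclear recoil energy of 2μ²v²/m_N for an incoming particle of velocity v and reduced mass μ with a nucleus of mass m_N.  A rough estimate for a hypothetical 100 GeV dark matter particle hitting a xenon nucleus at typical galactic speeds (illustrative numbers only):

```python
# Work in natural units: masses in GeV, velocity as a fraction of c
m_chi = 100.0            # hypothetical dark matter particle mass, GeV
m_N = 122.0              # xenon nucleus, ~131 u, roughly 122 GeV
v = 220e3 / 3.0e8        # typical galactic speed (~220 km/s) as a fraction of c

mu = m_chi * m_N / (m_chi + m_N)    # reduced mass, GeV
E_R_max = 2 * mu**2 * v**2 / m_N    # maximum elastic recoil energy, GeV
E_R_keV = E_R_max * 1e6             # GeV -> keV

print(f"maximum recoil energy: ~{E_R_keV:.0f} keV")
```

Tens of keV dumped into a single nucleus - which is why the detectors need single-event sensitivity and ferociously low radioactive backgrounds.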

I've been blogging long enough now to have seen rumors about dark matter detection come and go.  See here and here.  Now in the last week both LUX and PandaX have reported their latest results, and they have found nothing - no candidate events at all - after their recent experimental runs.  This is in contrast to DAMA, who have been seeing some sort of signal for years that seems to vary with the seasons.  See here for some discussion.  The lack of any detection at all is interesting.  There's always the possibility that whatever dark matter exists really does only interact with ordinary matter via gravity - perhaps all other interactions are somehow suppressed by some symmetry.  Between the lack of dark matter particle detection and the apparent lack of exotica at the LHC so far, there is a lot of head scratching going on....

Saturday, July 16, 2016

Impact factors and academic "moneyball"

For those who don't know the term:  Moneyball is the title of a book and a movie about the 2002 Oakland Athletics baseball team, a team with a payroll in the bottom 10% of major league baseball at the time.   They used a data-intensive, analytics-based strategy called sabermetrics to find "hidden value" and "market inefficiencies", to put together a very competitive team despite their very limited financial resources.   A recent (very fun if you're a baseball fan) book along the same lines is this one.  (It also has a wonderful discussion of confirmation bias!)

A couple of years ago there was a flurry of articles (like this one and the academic paper on which it was based) about whether a similar data-driven approach could be used in scientific academia - to predict success of individuals in research careers, perhaps to put together a better department or institute (a "roster") by getting a competitive edge at identifying likely successful researchers.

The central problems in trying to apply this philosophy to academia are the lack of really good metrics and the timescales involved in research careers.  Baseball is a paradise for people who love statistics.  The rules have been (largely) unchanged for over a hundred years; the seasons are very long (formerly 154 games, now 162), and in any game an everyday player can get multiple opportunities to show their offensive or defensive skills.   With modern tools it is possible to get quantitative information about every single pitched ball and batted ball.  As a result, the baseball stats community has come up with a huge number of quantitative metrics for evaluating performance in different aspects of the game, and they have a gigantic database against which to test their models.  They even have devised metrics to try and normalize out the effects of local environment (baseball park-neutral or adjusted stats).

Fig. 1, top panel, from this article.  x-axis = number of citations.  The mean of the distribution is strongly affected by the outliers.
In scientific research, there are very few metrics (publications; citation count; impact factor of the journals in which articles are published), and the total historical record available on which to base some evaluation of an early career researcher is practically the definition of what a baseball stats person would call "small sample size".   An article in Nature this week highlights the flaws with impact factor as a metric.  I've written before about this (here and here), pointing out that impact factor is a lousy statistic because it's dominated by outliers, and now I finally have a nice graph (fig. 1 in the article; top panel shown here) to illustrate this.  
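The "dominated by outliers" point is easy to demonstrate numerically.  Here's a toy simulation drawing citation counts from a heavy-tailed (lognormal) distribution - purely synthetic numbers, not real citation data:

```python
import random
import statistics

random.seed(42)

# Synthetic "citation counts" from a heavy-tailed lognormal distribution
cites = [random.lognormvariate(0.0, 1.5) for _ in range(10_000)]

mean = statistics.mean(cites)
median = statistics.median(cites)

# Remove the top 1% most-cited "papers" and recompute the mean
trimmed = sorted(cites)[: int(0.99 * len(cites))]
trimmed_mean = statistics.mean(trimmed)

print(f"mean = {mean:.2f}, median = {median:.2f}")
print(f"mean without top 1%: {trimmed_mean:.2f}")
```

The mean (which is essentially what impact factor is) lands several times above the median article, and deleting just the top 1% of papers moves it substantially, while the median barely budges - exactly the pathology shown in the figure.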

So, in academia, the tantalizing fact is that there is almost certainly a lot of "hidden value" out there missed by traditional evaluation approaches.  Just relying on pedigree (where did so-and-so get their doctorate?) and high impact publications (person A must be better than person B because person A published a paper as a postdoc in a high impact glossy journal) almost certainly misses some people who could be outstanding researchers.  However, the lack of good metrics, the small sample sizes, the long timescales associated with research, and enormous local environmental influence (it's just easier to do cutting-edge work at Harvard than at Northern Michigan), all mean that it's incredibly hard to come up with a way to find these people via some analytic approach.