Monday, January 16, 2017

What is the difference between science and engineering?

In my colleague Rebecca Richards-Kortum's great talk at Rice's CUWiP meeting this past weekend, she spoke about her undergrad degree in physics at Nebraska, her doctorate in medical physics from MIT, and how she ended up doing bioengineering.  As a former undergrad engineer who went the other direction, I think her story did a good job of illustrating the distinctions between science and engineering, and the common thread of problem-solving that connects them.

In brief, science is about figuring out the ground rules for how the universe works.  Engineering is about taking those rules and then figuring out how to accomplish some particular task.  Both involve puzzle-like problem-solving.  As a physics example on the experimental side, you might want to understand how electrons lose energy to vibrations in a material, but you only have a very limited set of tools at your disposal - say voltage sources, resistors, amplifiers, maybe a laser and a microscope and a spectrometer.  Somehow you have to formulate a strategy using just those tools.  On the theory side, you might want to figure out whether some arrangement of atoms in a crystal results in a lowest-energy electronic state that is magnetic, but you only have some particular set of calculational tools - you can't actually solve the complete problem, and instead you have to figure out what approximations would be reasonable, keeping the essentials and neglecting the bits of physics that aren't germane to the question.

Engineering is the same sort of process, but goal-directed toward an application rather than toward the acquisition of new knowledge per se.  You are trying to solve a problem, like constructing a machine that functions like a CPAP but has to be cheap and incredibly reliable - and because of the price constraint, you have to use largely off-the-shelf components.  (Here's how it's done.)

People sometimes act like there is a vast gulf between scientists and engineers - like the former lack common sense or real-world perspective, or like the latter are somehow less mathematical or sophisticated.  Those stereotypes even come through in pop culture, but the differences are much less stark than that.  Both science and engineering involve creativity and problem-solving under constraints.  Often which one is for you depends on what you find most interesting at a given time - there are plenty of scientists who go into engineering, and engineers can pursue and acquire basic knowledge along the way.  Particularly in the modern, interdisciplinary world, the distinction is less important than ever before.

Friday, January 13, 2017

Brief items

What with the start of the semester and the thick of graduate admissions season, it's been a busy week, so rather than an extensive post, here are some brief items of interest:

  • We are hosting one of the APS Conferences for Undergraduate Women in Physics this weekend.  Welcome, attendees!  It's going to be a good time.
  • This week our colloquium speaker was Jim Kakalios of the University of Minnesota, who gave a very fun talk related to his book The Physics of Superheroes (an updated version of this), as well as a condensed matter seminar regarding his work on charge transport and thermoelectricity in amorphous and nanocrystalline semiconductors.  His efforts at popularizing physics, including condensed matter, are great.  His other books are The Amazing Story of Quantum Mechanics, and the forthcoming The Physics of Everyday Things.  That last one shows how an enormous amount of interesting physics is embedded and subsumed in the routine tasks of modern life - a point I've mentioned before.   
  • Another seminar speaker at Rice this week was John Biggins, who explained the chain fountain (original video here, explanatory video here, relevant paper here).
  • Speaking of videos, here is the talk I gave last April back at the Pittsburgh Quantum Institute's 2016 symposium, and here is the link to all the talks.
  • Speaking of quantum mechanics, here is an article in the NY Review of Books by Steven Weinberg on interpretations of quantum mechanics.  While I've seen it criticized online as offering nothing new, I found it clearly written and argued, and that can't always be said for articles about interpretations of quantum mechanics.
  • Speaking of both quantum mechanics interpretations and popular writings about physics, here is John Cramer's review of David Mermin's recent collection of essays, Why Quark Rhymes with Pork: And Other Scientific Diversions (spoiler: I agree with Cramer that Mermin is wrong on the pronunciation of "quark").  The review is rather harsh regarding quantum interpretation, though perhaps that isn't surprising given that Cramer has his own view on this.

Sunday, January 08, 2017

Physics is not just high energy and astro/cosmology.

A belated happy new year to my readers.  Back in 2005, nearly every popularizer of physics on the web, television, and bookshelves was either a high energy physicist (mostly theorists) or someone involved in astrophysics/cosmology.  Often these people were presented, either deliberately or through brevity, as representing the whole discipline of physics.  Things have improved somewhat, but the overall situation in the media today is not that different, as exemplified by the headline of this article, and noticed by others (see the fourth paragraph here, at the excellent blog by Ross McKenzie).

For example, consider Edge.org, which has an annual question that it puts to "the most complex and sophisticated minds".  This year the question was: what scientific term or concept should be more widely known?  It's a very interesting piece, and I encourage you to read it.  They got responses from 206 contributors (!).  By my estimate, about 31 of those would likely say that they are active practicing physicists, though definitions get tricky for people working on "complexity" and computation.  Again by my rough count, from that list I see 12-14 high energy theorists (depending on whether you count Yuri Milner, who is really a financier, or Gino Segre, who is an excellent author but no longer an active researcher) including Sabine Hossenfelder, one high energy experimentalist, 10 people working on astrophysics/cosmology, four working on some flavor of quantum mechanics/quantum information (including the blogging Scott Aaronson), one on biophysics/complexity, and at most two on condensed matter physics.  Seems to me like representation here is a bit skewed.

Hopefully we will keep making progress on conveying that high energy/cosmology is not representative of the entire discipline of physics....



Thursday, December 29, 2016

Some optimism at the end of 2016

When the news is filled with bleak items, it's easy to become pessimistic.  Bear in mind that modern communications, plus the tendency for bad news to get attention, plus the sheer size of the population, can really distort perception.  To put that another way, 56 million people die every year (!), but now you are able to hear about far more of them than ever before.

Let me make a push for optimism, or at least try to put some things in perspective.  There are some reasons to be hopeful.  Specifically, look here, at a site called "Our World in Data", produced at Oxford University.  These folks use actual numbers to show that this is, in many ways, the best time in human history to be alive:
  • The percentage of the world's population living in extreme poverty is at an all-time low (9.6%).
  • The percentage of the population that is literate is at an all-time high (85%), as is the overall global education level.
  • Child mortality is at an all-time low.
  • The percentage of people enjoying at least some political freedom is at an all-time high.
That may not be much comfort to, say, an unemployed coal miner in West Virginia, or an underemployed former factory worker in Missouri, but it's better than the alternative.   We face many challenges, and nothing is going to be easy or simple, but collectively we can do amazing things, like put more computing power in your hand than existed in all of human history before 1950, set up a world-spanning communications network, feed 7B people, detect colliding black holes billions of lightyears away by their ripples in spacetime, etc.  As long as we don't do really stupid things, like make nuclear threats over twitter based on idiots on the internet, we will get through this.   It may not seem like it all the time, but compared to the past we live in an age of wonders.

Tuesday, December 20, 2016

Mapping current at the nanoscale - part 2 - magnetic fields!

A few weeks ago I posted about one approach to mapping out where current flows at the nanoscale, scanning gate microscopy.  I had made an analogy between current flow in some system and traffic flow in a complicated city map.  Scanning gate microscopy would be analogous to recording the flow of traffic in and out of a city as a function of where you choose to put construction barrels and lane closures.  If sampled finely enough, this would give you a sense of where in the city most of the traffic tends to flow.

Of course, that's not how services like Google Maps figure out traffic flow.  Instead, applications like that track the GPS signals of cell phones carried in the vehicles.  Is there a current-mapping analogy here as well?  Yes.  There is some "signal" produced by the flow of current, if only you have a sufficiently sensitive detector to find it.  That signal is the magnetic field.  Flowing current density \(\mathbf{J}\) produces a local magnetic field \(\mathbf{B}\), thanks to Ampere's law, \(\nabla \times \mathbf{B} = \mu_{0} \mathbf{J}\).
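
To get a feel for the scale of that signal, here is a minimal sketch (my own illustration with made-up numbers, not taken from any of the work discussed here) of the field a scanning probe would see above a narrow current path, using the long-straight-wire result \(B = \mu_{0} I / (2 \pi r)\) that follows from Ampere's law:

```python
# Illustrative numbers only: the field magnitude a scanning magnetometer
# would measure at height h above a long, straight current-carrying path.
import numpy as np

MU_0 = 4e-7 * np.pi   # vacuum permeability, T*m/A

def b_above_wire(current_A, height_m, x_m):
    """|B| in tesla at lateral offset x and height h above a wire along y."""
    r = np.hypot(x_m, height_m)               # probe-to-wire distance
    return MU_0 * current_A / (2 * np.pi * r)

# A 1 microamp edge current, with the tip scanned 1 micron above the device:
x = np.linspace(-5e-6, 5e-6, 11)              # lateral scan positions (m)
print(b_above_wire(1e-6, 1e-6, x))            # peaks near 2e-7 T = 0.2 microtesla
```

Microamp currents a micron away give fields of a fraction of a microtesla, which sets the sensitivity scale that the probes below have to reach.
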
[Figure: Scanning SQUID microscope image of the x-current density in a GaSb/InAs structure, showing that the current is carried by the edges.  Scale bar is 20 microns.]


Fortunately, there now exist several different technologies for performing very local mapping of magnetic fields, and therefore of the underlying pattern of flowing current in some material or device.  One older, established approach is scanning Hall microscopy, where a small piece of semiconductor is placed on a scanning tip, and the Hall effect in that semiconductor is used to sense the local \(B\) field.
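
As a rough sketch of the arithmetic such a sensor performs (illustrative parameters of my choosing, not the specs of any particular instrument), the Hall voltage across a thin slab is \(V_{H} = I B / (n e t)\):

```python
# Hall-sensor arithmetic with assumed, illustrative parameters:
# V_H = I * B / (n * e * t) for a thin slab of carrier density n, thickness t.
E_CHARGE = 1.602e-19   # electron charge, C

def hall_voltage(bias_A, b_tesla, n_per_m3, thickness_m):
    """Transverse Hall voltage across the slab, in volts."""
    return bias_A * b_tesla / (n_per_m3 * E_CHARGE * thickness_m)

# A low-density semiconductor sensor (n = 1e21 m^-3, 1 micron thick) carrying
# a 10 microamp bias, sitting in a 1 mT local field:
print(hall_voltage(10e-6, 1e-3, 1e21, 1e-6))   # ~6e-5 V, i.e. tens of microvolts
```

The \(1/n\) in that expression is why a semiconductor, with its comparatively low carrier density, makes a much better Hall sensor than a metal.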

[Figure: Scanning NV center microscopy of magnetic fields.  Scale bars are 400 nm.]
Considerably more sensitive is the scanning SQUID microscope, where a tiny superconducting loop is placed on the end of a scanning tip and used to detect incredibly small magnetic fields.  As the first figure shows, it is possible to see when current is carried by the edges of a structure rather than by the bulk of the material, for example.

A very recently developed method uses the exquisitely magnetic-field-sensitive optical properties of nitrogen-vacancy (NV) centers, particular defects in diamond.  The second figure (from here) shows examples of the kinds of images that are possible with this approach, looking at the magnetic pattern of data on a hard drive, or magnetic flux trapped in a superconductor.  While I have not seen this technique applied directly to current mapping at the nanoscale, it certainly has the needed magnetic field sensitivity.  Bottom line:  It is possible to "look" at the current distribution in small structures at very small scales by measuring magnetic fields.

Saturday, December 17, 2016

Recurring themes in (condensed matter/nano) physics: Exponential decay laws

It's been a little while (ok, 1.6 years) since I made a few posts about recurring motifs that crop up in physics, particularly in condensed matter and at the nanoscale.  Often the reason certain mathematical relationships appear repeatedly in physics is that, deep down, they are based on very simple underlying assumptions.  One example common to all of physics is the idea of exponential decay: some physical property or parameter often ends up with a time dependence proportional to \(\exp(-t/\tau)\), where \(\tau\) is some characteristic timescale.
[Figure: The Buffalo Bayou cistern.  Photo by Katya Horner.]

Why is this time dependence so common?  Let's take a particular example.  Suppose we are in the remarkable cistern, shown here, that used to store water for the city of Houston.  If you go on a tour there (I highly recommend it - it's very impressive), you will observe that it has remarkable acoustic properties.  If you yell or clap, the echo gradually dies out by (approximately) exponential decay, fading to undetectable levels after about 18 seconds (!).  The cistern is about 100 m across, and the speed of sound is around 340 m/s, meaning that in 18 seconds the sound you made has bounced off the walls around 61 times.  Each time the sound bounces off a wall, it loses some percentage of its intensity (stored acoustic energy).
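
For what it's worth, here is that arithmetic, along with the per-bounce loss it implies if "undetectable" is taken to mean a 60 dB drop in intensity (that threshold is my assumption, borrowed from the usual reverberation-time convention, not a number from the tour):

```python
# Back-of-the-envelope check of the cistern numbers in the text.
decay_time_s = 18.0      # time for the echo to fade away
speed_of_sound = 340.0   # m/s
cistern_width_m = 100.0  # rough wall-to-wall distance

bounces = decay_time_s * speed_of_sound / cistern_width_m   # ~61 bounces
frac_kept = 1e-6 ** (1.0 / bounces)    # 60 dB drop = intensity factor of 1e-6
print(f"{bounces:.0f} bounces, ~{1 - frac_kept:.0%} of intensity lost per bounce")
```

So each wall reflection need only absorb (or transmit) about a fifth of the sound's energy; the long, smooth fade comes from compounding that modest loss roughly 61 times.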

That idea - that the decrease in some quantity during each interval is a fixed fraction of the quantity's current size - is the key to exponential decay, in the limit that you consider the change in the quantity from instant to instant (rather than taking place via discrete events).  Note that this is also basically the same math that is behind compound interest, though that involves exponential growth.
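
Written out explicitly (standard textbook math; the symbols \(I\) for intensity and \(f\) for the per-bounce fractional loss are my shorthand), the continuum statement is

\[ \frac{dI}{dt} = -\frac{I}{\tau} \quad \Rightarrow \quad I(t) = I(0)\, e^{-t/\tau}, \]

while the discrete, per-bounce version \(I_{n+1} = (1-f)\,I_{n}\) gives \(I_{n} = I_{0}(1-f)^{n}\) - the compound interest formula with a negative rate - and the two agree in the limit of many small steps.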


Saturday, December 10, 2016

Bismuth superconducts, and that's weird

Many elemental metals become superconductors at sufficiently low temperatures, but not all.  Ironically, some of the normal metal elements with the best electrical conductivity (gold, silver, copper) do not appear to do so.  Conventional superconductivity was explained by Bardeen, Cooper, and Schrieffer in 1957.  Oversimplifying, the idea is that electrons can interact with lattice vibrations (phonons) in such a way that there is a slight attractive interaction between the electrons.  Imagine a billiard ball rolling on a foam mattress - the ball leaves behind it a trailing deformation of the mattress that takes some finite time to rebound, and another nearby ball is "attracted" to that deformation.  This slight attraction is enough to cause pairing between charge carriers in the metal, and those pairs can then "condense" into a macroscopic quantum state with the superconducting properties we know.  The coinage metals apparently have comparatively weak electron-phonon coupling, and can't quite get enough attractive interaction to go superconducting.

Another way you could fail to get conventional BCS superconductivity would be just to have too few charge carriers!  In my ball-on-mattress analogy, if the rolling balls are very dilute, then pair formation doesn't really happen, because by the time the next ball rolls by where a previous ball had passed, the deformation is long since healed.  This is one reason why superconductivity usually doesn't happen in doped semiconductors.

Superconductivity with really dilute carriers is weird, and that's why the result published recently here by researchers at the Tata Institute is exciting.  They were working with bismuth, which is a semimetal in its usual crystal structure, meaning that it has both electrons and holes running around (see here for technical detail), and has a very low concentration of charge carriers, something like \(10^{17}/\mathrm{cm}^{3}\), meaning that the typical distance between carriers is on the order of 30 nm.  That's very far, so conventional BCS superconductivity isn't likely to work here.  However, at about 500 microKelvin (!), the experimenters see (via magnetic susceptibility and the Meissner effect) that single crystals of Bi go superconducting.  Very neat.
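
As a sanity check on that spacing (using the simplest \(n^{-1/3}\) estimate; the ~30 nm above presumably comes from a convention with a slightly different numerical factor):

```python
# Rough inter-carrier distance for bismuth's carrier density.
n_cm3 = 1e17                                  # carriers per cm^3
spacing_cm = n_cm3 ** (-1.0 / 3.0)            # typical spacing ~ n^(-1/3)
print(f"~{spacing_cm * 1e7:.0f} nm between carriers")   # ~22 nm, i.e. tens of nm
```

Either way, the carriers are tens of nanometers apart, enormously dilute compared with an ordinary metal, where the spacing is a fraction of a nanometer.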

They achieve these temperatures through a combination of a dilution refrigerator (possible because of the physics discussed here) and nuclear demagnetization cooling of copper, which is attached to a silver heat link that holds the Bi crystals.  This is old-school ultralow temperature physics, where they end up with several kg of copper getting as low as 100 microKelvin.  Sure, this particular result is very far from any practical application, but the point is that this work shows that there likely is some other pairing mechanism that can give superconductivity with very dilute carriers, and that could be important down the line.