
5.2: CHEM ATLAS_2


    How This Connects: Unit 2, Lectures 10-20


    Lecture 10: Shapes of Molecules

    Summary

This lecture focused on the Valence Shell Electron Pair Repulsion (VSEPR) model, which allows us to predict the shapes of molecules, a precursor to understanding their properties more fully. VSEPR is built on electron-pair repulsion: the most stable structure is the one that minimizes these repulsions. To find the VSEPR representation of a molecule, follow these steps:

    1. Write Lewis structure 
    2. Classify each electron pair as bonding or nonbonding 
    3. Maximize separation between domains 
    4. Give more space to non-bonding domains and to bonding domains with higher bond order

    A bonding pair (BP) of electrons is any two electrons that take part in a bond. A double bond is made up of 2 BPs, and a triple bond is made up of 3 BPs. A lone pair (LP) of electrons is any two electrons that are not part of a bond. The strength of repulsion between electron pairs, in ascending order, goes as follows: BP-BP, BP-LP, LP-LP. Also, the repulsion of a single BP is less than that of 2 BPs or 3 BPs.
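If you like to see the counting rules as an explicit procedure, the common cases from this lecture fit in a small lookup table. Here is a minimal Python sketch of that idea (my own illustration, not an exhaustive VSEPR implementation; the table only covers geometries discussed here):

```python
# Minimal sketch: map (electron domains, lone pairs) to the VSEPR shape name.
# Only the cases discussed in this lecture are tabulated.
SHAPES = {
    (2, 0): "linear",
    (3, 0): "trigonal planar",
    (3, 1): "bent",
    (4, 0): "tetrahedral",
    (4, 1): "trigonal pyramidal",
    (4, 2): "bent",
}

def vsepr_shape(bonded_atoms: int, lone_pairs: int) -> str:
    """Shape around a central atom. A double or triple bond counts as a single
    electron domain, so pass the number of bonded atoms, not bonding pairs."""
    domains = bonded_atoms + lone_pairs
    return SHAPES.get((domains, lone_pairs), "not tabulated here")

print(vsepr_shape(4, 0))  # carbon in CH4: tetrahedral
print(vsepr_shape(3, 1))  # N with three bonds and one lone pair: trigonal pyramidal
print(vsepr_shape(2, 2))  # O or S with two bonds and two lone pairs: bent
```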


    VSEPR Geometries by Boundless Chemistry. License: CC BY-SA. This content is excluded from our Creative Commons license. For information, see https://ocw.mit.edu/fairuse.

    Why this matters

    The average human can discriminate between 4,000 and 10,000 different odors. (Before we get excited about how awesome that is, consider that a dog can smell between 10 and 100 thousand times better than a human!). But what is smell, from a chemistry perspective? Taste and smell are related, and thinking about what’s behind them goes back to the ancient Greeks. None other than our friend Democritus (atomism!) speculated that the taste of a substance was due to the shape of its component particles. He thought that acidic particles would be sharp, since they felt like they attacked your mouth, while sweet stuff was made of nice cuddly soft shapes. While his reasoning was quite simple, the idea that taste and smell were governed by shape was amazingly prescient.


Fast forward 2500 years and we now know that the ability to taste and smell works through “receptor sites” in the tongue and nose. Here’s the tongue broken down, showing the receptor site on the right, and below that is a larger view of the receptor site with its various parts labeled. Note that the receptor site is also what we call a taste bud. Signals that get sent from this receptor site to the brain through the nerve fibers determine what you taste, and the signal is deeply dependent not just on the composition of the molecule itself, but also on its shape.


    Take a look at the two molecules below, glucose which tastes sweet, and quinine which tastes bitter. Their chemistries are different and the way the chemistry of the molecule bonds to the sensory cell is crucial, but the way the shape fits into the pore itself and impacts the orientation of the molecule on the sensory cell can be equally important. If we didn’t know the shapes of these molecules, we’d write them out as simple 2D Lewis structures, but it’s those 3D shapes that you see in the figure that distinguish their tastes!


    In the case of the carvone molecule, or \(\mathrm{C}_{10}\mathrm{H}_{14}\mathrm{O}\), we have an even more striking example of the role of shape on smell. This molecule forms two mirror images, denoted \(\mathrm{R}\) and \(\mathrm{S}\) in the figure below. The \(\mathrm{R}\) form smells like spearmint while the \(\mathrm{S}\) form smells like caraway seeds. Same exact chemistry, different shape, different smell. Many molecules can take on two forms like this with mirror symmetry but that aren’t the same, and such pairs are called enantiomers. Actually, beyond molecules, enantiomers can be anything. Your hands, for example, are enantiomers. If you hold them up facing one another you’ll see they have mirror symmetry, and if you try to rotate one around the other you’ll see you can’t superimpose them. For this reason, the property of having mirror symmetry but not being superimposable is called handedness, or chirality. The fact that the two enantiomers are perceived as smelling different shows that those receptor cells must contain chiral groups, allowing them to respond more strongly to one enantiomer than to the other. Thus, both for the molecule being smelled and for the molecules used to do the smelling, molecular shape holds the key!


    Why this employs

It’s about time we talk about enzymes. These are molecules, often proteins, folded into a specific (and complicated!) shape, that speed up chemical reactions in your body. Enzymes are absolutely essential for so many crucial functions of our body, including respiration, digestion, muscle and nerve function, and many more. In digestion, an enzyme molecule can speed up a needed reaction by a factor of a million! That lets you digest your dinner in hours rather than, well, a thousand years.


Enzymes work by binding to molecules in a specific way, and you may have already guessed that shape is crucial. In fact, way back in 1894 it was the Nobel Laureate Emil Fischer who came up with the “lock and key” model to explain how enzymes work. The idea in this model is pretty much how it sounds: an enzyme’s active site has a specific shape, and only the matching substrate will fit into it, like a lock and key. Here’s a cartoon to illustrate how the enzyme, because of its shape, fits its substrate perfectly, allowing the reaction to happen much faster (this speeding-up is called catalysis). The model has been updated over the past 100 years, for example to include the fact that the substrate and the enzyme itself are dynamic and can change shape when they interact, or that the effects of the surrounding solvent are important, but the idea that shape is crucial remains the foundational principle upon which enzymes operate.

As you may have noticed by just going to the grocery store, there’s a big market for new foods with new enzymes. There are so many enzymes involved in digestion, like lipases that help digest fats in the gut. Amylase helps change starches into sugars. Maltase breaks the sugar maltose into glucose (it’s in potatoes, pasta, and beer, for example). Trypsin breaks proteins down into amino acids. Lactase breaks down lactose, the sugar in milk, into glucose and galactose, and on and on. Which brings me to the job market: here, specifically the food industry. The synthesis and use of new enzymes in the preparation of food have seen tremendous growth, not only for digestion but also for the taste and texture of the food, as well as for possible economic benefits. There are many jobs related to food science (check out this article in the NatureJobs pages, which is a cool site in case you haven’t seen it: https://www.nature.com/naturejobs/science/articles/10.1038/nj7422-149a?WT.ecid=NATUREjobs-20121106). But jobs related to enzymes having to do with food also relate to the future of humanity itself. How will we feed our population in the future? Will all protein need to be plant-based? Or will we eat insect-meat? Or meat grown in a lab? Will food be 3D-printed, and will it be served by robots? All of these topics have recently received vast amounts of attention and research funding, led to start-ups, and drawn a lot of interest from larger companies, and all of that spells jobs. Specifically, jobs involving knowledge about enzymes, which in the end only function because of their shape (VSEPR!).

    Example Problems

    1. Draw the Lewis structure of \(\mathrm{H}_2 \mathrm{~N}-\mathrm{SH}\) and determine the VSEPR geometry around (a) the nitrogen atom, and (b) the sulfur atom.

    Answer


    a) Geometry around the nitrogen atom: trigonal pyramidal

    b) Geometry around the sulfur atom: bent

2. Determine the VSEPR geometry for each of the following and predict whether each will be polar or nonpolar.

a) \(\mathrm{BF}_3\)

    Answer

\(\mathrm{BF}_3\): Difference in B-F electronegativity: polar bonds. Trigonal planar: dipoles cancel! Nonpolar


    b) \(F O F\)

    Answer

\(\mathrm{FOF}\): Difference in \(\mathrm{O}-\mathrm{F}\) electronegativity gives polar bonds; the bent geometry means the dipoles do not cancel, so there is a net dipole moment: polar


    c) \(\mathrm{CCl}_4\)

    Answer

    \(\mathrm{CCl}_4\): Difference in \(\mathrm{C}-\mathrm{Cl}\) electronegativity: dipole moments; tetrahedral structure: cancel out, so nonpolar!


    Lecture 11: Molecular Orbitals

    Summary

    Molecular orbital theory is a tool used to predict the shape and behavior of electrons that are shared between atoms. Two or more atomic orbitals are added together to make a linear combination of atomic orbitals—the LCAO method—allowing for quick characterization of the kinds of bonds formed between two atoms.


When atomic orbitals are added, they can either constructively interfere, forming a bonding state, or destructively interfere, forming an antibonding state. In a molecule, the bonding state is always lower in energy than the corresponding atomic orbitals, while the antibonding state is always higher. Molecular orbital diagrams, or \(\mathrm{MO}\) diagrams, are a convenient visualization tool to see how electrons are distributed between two atoms: hydrogen is shown to the right as an example.

The two hydrogen \(1 \mathrm{~s}\) orbitals combine to form a \(\sigma_{1 s}\) bonding orbital, which is lower in energy and therefore more stable. If we had started with helium instead, there would have been four electrons to distribute: two would fill the \(\sigma_{1 s}\) bonding orbital, but the other two would go in the \(\sigma_{1 s}^{*}\) antibonding state, effectively cancelling out the bond.

    After placing electrons in the \(\mathrm{MO}\) diagram, bond order can be calculated:

\(BO=\dfrac{1}{2}\left(\text{number of } \mathrm{e}^{-} \text{ in bonding orbitals}-\text{number of } \mathrm{e}^{-} \text{ in anti-bonding orbitals}\right)\)

    Stronger bonds have higher bond orders, and the bond order must be \(>0\) for a bond to exist at all. For our example above, a hydrogen dimer has \(\mathrm{BO}=1\), but a helium dimer has \(\mathrm{BO}=0\). This explains why hydrogen gas exists as \(\mathrm{H}_2\), but helium gas consists of individual atoms!
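In code form, that bookkeeping is a one-liner; the sketch below (my own illustration in Python) simply reproduces the \(\mathrm{H}_2\) and \(\mathrm{He}_2\) electron counts from above:

```python
def bond_order(n_bonding: int, n_antibonding: int) -> float:
    """BO = 1/2 (electrons in bonding MOs - electrons in antibonding MOs)."""
    return 0.5 * (n_bonding - n_antibonding)

print(bond_order(2, 0))  # H2: both electrons in sigma_1s          -> 1.0
print(bond_order(2, 2))  # He2: sigma_1s and sigma_1s* both filled -> 0.0, no bond
```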


If an atom has p-orbitals as well as s-orbitals, additional bonds called \(\pi\) bonds can form. The three p-orbitals yield three kinds of bonds, one \(\sigma\) and two \(\pi\): \(\sigma_{n p z}\), \(\pi_{n p x}\), and \(\pi_{n p y}\), where \(\mathrm{n}\) refers to the particular energy level being considered. Note that a \(\sigma\) bond forms both from \(\mathrm{s}\)-orbital overlap and from one of the \(\mathrm{p}\)-orbital overlaps: the term \(\sigma\) bond means that the bond has cylindrical symmetry around the bond axis, something that is not the case for the \(\pi\) bonds. Anti-bonding \(\sigma^*\) and \(\pi^*\) states also form from \(\mathrm{p}\)-orbital \(\mathrm{MO}\)s. A generic \(\mathrm{MO}\) diagram for \(\mathrm{p}\)-orbitals is shown to the left.

\(\mathrm{MO}\)s can also be formed from heteronuclear dimers (two different atoms): the same rules apply. Since each of the atoms has distinct energy levels associated with its \(\mathrm{s}\)- and \(\mathrm{p}\)-orbitals, the \(\mathrm{MO}\) diagram formed using two different atoms is usually skewed, and the bonds that result are polar covalent bonds. The atomic orbitals of the more electronegative atom sit lower in energy. It can be helpful to count the number of electrons in the initial atomic orbitals and make sure that they are all used when filling up the \(\mathrm{MO}\)s. Remember that if a dimer is positively charged, it has lost an electron, and if it is negatively charged, it has gained an electron: these have to be accounted for as well! If one of the atoms has more electrons than the other, like a bond between \(\mathrm{H}\) and \(\mathrm{Cl}\), the excess electrons form nonbonding pairs - just like the lone pairs in the Lewis diagrams we drew before. If all of the electrons in the \(\mathrm{MO}\) diagram are paired, then the dimer is diamagnetic, but if there are any electrons left unpaired, it is paramagnetic. For low molecular weight dimers, the \(2 \mathrm{~s}\) and \(2 \mathrm{p}\) atomic orbitals are very close in energy, and they can interact with each other. For dimers with lower \(\mathrm{MW}\) than \(\mathrm{O}_2\), the \(\pi_{2 p x, y}\) and \(\sigma_{2 p z}\) orbitals switch, and the \(\pi_{2 p x, y}\) states are filled up first.
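To make the filling rules concrete, here is a rough Python sketch (my own illustration; the orbital labels are simplified and core \(1\mathrm{s}\) electrons are ignored since their bonding and antibonding contributions cancel) that fills the valence \(\mathrm{MO}\)s of a second-row homonuclear diatomic, including the \(\pi/\sigma\) ordering switch for dimers lighter than \(\mathrm{O}_2\), and reports the bond order and magnetism:

```python
def diatomic_mo(valence_electrons: int, light: bool) -> dict:
    """Fill valence MOs of a homonuclear diatomic; light=True uses the ordering
    for dimers lighter than O2 (pi_2p fills before sigma_2pz)."""
    if light:
        order = [("sigma_2s", 2), ("sigma_2s*", 2), ("pi_2p", 4),
                 ("sigma_2pz", 2), ("pi_2p*", 4), ("sigma_2pz*", 2)]
    else:
        order = [("sigma_2s", 2), ("sigma_2s*", 2), ("sigma_2pz", 2),
                 ("pi_2p", 4), ("pi_2p*", 4), ("sigma_2pz*", 2)]

    filling, remaining = {}, valence_electrons
    for name, capacity in order:
        filling[name] = min(capacity, remaining)
        remaining -= filling[name]

    def unpaired(capacity, n):
        orbitals = capacity // 2                      # degenerate spatial orbitals in the level
        return n if n <= orbitals else capacity - n   # Hund's rule first, then pairing

    bonding = sum(n for name, n in filling.items() if not name.endswith("*"))
    antibonding = sum(n for name, n in filling.items() if name.endswith("*"))
    n_unpaired = sum(unpaired(cap, filling[name]) for name, cap in order)
    return {"bond order": 0.5 * (bonding - antibonding),
            "magnetism": "paramagnetic" if n_unpaired else "diamagnetic"}

print(diatomic_mo(10, light=True))   # N2:  bond order 3.0, diamagnetic
print(diatomic_mo(12, light=False))  # O2:  bond order 2.0, paramagnetic
print(diatomic_mo(11, light=False))  # O2+: bond order 2.5, paramagnetic
```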

    Why this matters

    There are two ways to separate pasta from the boiling water once it’s cooked. One way is to pour the mixture through a colander, or a filter, to retain pasta on one side while letting the boiling water pass through. Another is to leave it on the stove and let all of the water boil off. Each of these separation techniques leads to the same outcome, but as you can imagine they are very different. I’m not talking about how the pasta would taste (let’s ignore that part), but rather how much energy it takes to carry out this separation task. Pouring through a filter is easy, fast, and efficient, while leaving the pot on the stove would take longer and require a lot more energy.


    Now shrink down from pasta to nano-pasta. In other words, the size scale of molecules. We separate molecules from one another all the time for a wide variety of industrial processes. In fact, if we look at the U.S. energy consumption, about 1/3 of it goes into what is vaguely labeled “Industry.” But did you know that 40 percent of this energy for industry serves just one process, separating molecules? The reason that number is so large is that we only use the slow and inefficient thermal approach, not the much more efficient filter approach, to perform the separations. That’s a whopping 12 percent of all energy used in the U.S. that goes into boiling one chemical off of another.

    So why is it that we don’t just pour these chemicals through a nano-filter, just like we do with pasta and colanders? If we did switch from thermal separation to a filter, we could save up to 90 percent of that energy! The reason we don’t, is the filter itself. Current filters aren’t yet up to standard. On the one hand, we’ve got filters made from polymers (materials we’ll learn about later in the semester) that can separate the tiniest of molecules very well, but they’re so delicate that they can’t be used in the harsh chemical and thermal environments of most industrial processes. On the other hand, we’ve got filters made from ceramics (materials with strong ionic and/or covalent bonds) that are super resilient and can handle the conditions, but they can’t get down to the sizes of the small molecules that need to be separated.

So it is that we come to our molecular orbitals, orbital filling, and the specific example of \(\mathrm{N}_2\) vs. \(\mathrm{O}_2\) covered in class. Because one of the big separations that we need to do on a massive scale involves exactly these two molecules. \(\mathrm{O}_2\) is plentiful in the air (unless you go into a closed room, light a candle, and wait 12 hours as we learned in lecture 2!), but for many applications we need \(\mathrm{O}_2\) in much higher concentrations than its naturally occurring 21 percent in air. Take combustion as an example: the 78 percent \(\mathrm{N}_2\) in the air has a negative impact on combustion processes since the nitrogen molecules get heated to very high temperatures during the reaction, which is not only inefficient but also produces toxic nitrogen oxide (\(\mathrm{NO}_x\)) gases. Increasing the amount of oxygen and decreasing the amount of nitrogen leads to much higher combustion efficiencies, lower harmful emissions, and higher processing temperatures. Now, you may be thinking that I'm only talking about fossil fuel processing, and yes that is certainly a prime example of our use of combustion as a society, but the need for purified \(\mathrm{O}_2\) molecules reaches far beyond, to applications ranging from the medical industry to sewage treatment to metal manufacturing, to name only a few.

The separation of \(\mathrm{O}_2\) from \(\mathrm{N}_2\) is in so much demand globally that it is performed at a quantity of 31 billion kilograms per year. Because the separation is done inefficiently using thermal processes, in this case going cryogenic, which is cooling instead of boiling but for the same purpose, 47 teraBtu (a British thermal unit is the amount of thermal energy needed to raise one pound of water by one degree Fahrenheit) of energy are used per year just to do the separation! Just to be clear, that's \(47,000,000,000,000\) Btu of energy, or if you prefer \(13,774,340,298,094\) Watt-hours. A typical household in the U.S. uses about 900,000 Watt-hours of energy per month, just for reference.

    How do we save up to 90 percent of the massive energy used to separate \(\mathrm{O}_2\) from \(\mathrm{N}_2\) by switching from thermal-based separation to a filter-based approach? The answer lies in those molecular orbitals! They tell us about the bonding of each molecule and the interactions of the molecules with yet-to-be-invented filter materials that could combine the best of both worlds from polymer to ceramic materials. The \(\mathrm{MO}\)s of \(\mathrm{N}_2\) and \(\mathrm{O}_2\) also tell us that these molecules respond differently to externally applied magnetic fields, which may in turn be useful in boosting the separation efficiency. There is a tremendous opportunity for new filters that take these two most abundant molecules in the air and put them into separate compartments, but it all has to start with knowing how the electrons in the molecules behave. And that, of course, we get from molecular orbital theory.

    Why this employs

The problem with fusion energy is that it's uncontrolled. Way back in the 1950s, Disney was making feel-good movies (grab some popcorn and check out "Our Friend the Atom") about how energy was very soon going to be "too cheap to meter." 70 years later, why is this not the case? Fusion is so attractive on so many levels: unlike fission, which is the stuff of current nuclear reactors, fusion has no radioactive waste or byproducts, with the only outcome of a fusion reaction being ridiculous amounts of energy and helium. Fusion is the engine of the stars, so you know this energy is serious. Let's break it down: remember the battery vs. gasoline comparison I made in Lecture 8? Roughly, the best Li-ion batteries today can store about \(1 \mathrm{~MJ}\) of energy per \(\mathrm{kg}\). Gasoline, on the other hand, can store \(45 \mathrm{~MJ}\) per kg. But let's keep going. The explosive TNT stores around \(4160 \mathrm{~MJ} / \mathrm{kg}\). Uranium when used in nuclear fission stores a whopping \(3,456,000 \mathrm{~MJ} / \mathrm{kg}\)! This incredible energy density is a strong argument for nuclear energy generation and is often invoked. But when we move to fusion, when we move to the stuff of the stars, all of these numbers feel tiny. The energy density of the fuel for fusion, which is a combination of tritium and deuterium, is an incredible \(576,000,000 \mathrm{~MJ} / \mathrm{kg}\). And that fuel is highly abundant and cheap. That's why the dream of fusion is still alive even after 70 years of trying, and in fact today there is a huge resurgence in fusion energy. For more information you don't need to go very far: check out the PSFC (Plasma Science and Fusion Center) right here at MIT, or the new MIT spin-out called Commonwealth Fusion Systems.

If nuclear fusion is to become a reality (in just 10 years according to some, but we've also been hearing that since the Disney movie so we need to approach with careful optimism), one of the single most important ingredients to making it work will be magnets. Lots and lots of magnets and very powerful ones. That's because one of the most likely ways to get fusion to work is to contain the massive energy released (which, by the way, gets to temperatures up to 100 million degrees) by confining it with magnetic fields. Now, in fusion reactor designs the magnetic fields are often generated with superconducting coils, so it's different from our unpaired electrons in the \(\mathrm{MO}\) diagram of \(\mathrm{O}_2\). But the general idea that a material can be responsive to an external magnetic field comes from its electron filling, and the more electrons that are unpaired like in the \(\mathrm{O}_2\) molecule, the more responsive it can be.

    So what are the jobs related to developing new magnets? You could go to work at a nuclear fusion start-up like Commonwealth Fusion, or to the large government-led fusion operations like the ITER program in France. But so many other industries need stronger, cheaper, lighter magnets that the job market extends far beyond just fusion. You could look for jobs in companies that manufacture magnets (there are many), or companies pursuing new ideas for recycling magnets (like Urban Mining Company), or as a scientist at a U.S. lab pushing the frontiers of magnets (like Florida's National High Magnetic Field Laboratory, or the one at Los Alamos), or at companies trying to make magnets that don't rely on rare-earth elements (like Toyota for example, among many).

    Extra practice

1. Consider \(\mathrm{O}_2\) and \(\mathrm{O}_2^{+}\).

    a) Draw the \(\mathrm{MO}\) diagram of each molecule.

    b) Find the bond order of each.

    c) Label each one as paramagnetic or diamagnetic.

    Answer


\(\mathrm{O}_2\) is paramagnetic because it has unpaired electrons; its bond order is 2.
\(\mathrm{O}_2^{+}\) is also paramagnetic, because it has an unpaired electron. Its bond order is \(2.5\).


    2. Draw the \(\mathrm{MO}\) diagram of \(\mathrm{HCl}\).

    Answer

In the \(\mathrm{MO}\) diagram of \(\mathrm{HCl}\), the \(\mathrm{H}\) \(1\mathrm{s}\) orbital combines with one \(\mathrm{Cl}\) \(3\mathrm{p}\) orbital to form a \(\sigma\) bonding orbital and a \(\sigma^{*}\) antibonding orbital, with the bonding level sitting closer in energy to the more electronegative \(\mathrm{Cl}\). The two shared electrons fill the \(\sigma\) orbital (bond order 1), and the remaining \(\mathrm{Cl}\) valence electrons occupy nonbonding levels, just like the lone pairs in the Lewis picture.

    Lecture 12: Hybridization in Molecular Orbitals

    Summary

If multiple atomic orbitals within an atom have similar energy levels, they can hybridize, combining to form equal orbitals that have a lower average energy. Consider methane, \(\mathrm{CH}_4\), as an example: the carbon atom has two \(2\mathrm{s}\) electrons and two \(2\mathrm{p}\) electrons. The \(2\mathrm{s}\) and \(2\mathrm{p}\) states hybridize, yielding four equal-energy, unpaired electrons that are ready to bond with hydrogen atoms. As hybridization occurs, carbon's four electrons are redistributed so as to be maximally spaced apart, lowering the energy of the system and yielding the most stable state: a tetrahedral methane molecule. This kind of hybridization is called \(\mathrm{sp}^3\), because one \(\mathrm{s}\)-orbital energy level combined with three \(\mathrm{p}\)-orbital energy levels.


For the case of ethene (ethylene), \(\mathrm{C}_2\mathrm{H}_4\), the hybridization process is slightly different, as shown here (this time as an energy level diagram). Each carbon forms only three \(\sigma\) bonds: two with hydrogen atoms, and one with the other carbon. The \(\mathrm{s}\)-orbital energy level combines with two \(\mathrm{p}\)-orbital energy levels to form three equal \(\mathrm{sp}^2\) hybrid orbitals; the remaining \(2\mathrm{p}\) electrons form a higher-energy \(\pi\) bond between the two carbon atoms, yielding a carbon-carbon double bond.


The next logical molecule to consider is acetylene, \(\mathrm{C}_2\mathrm{H}_2\). In this case, each carbon is only bonded to one hydrogen, so only one \(2\mathrm{p}\) energy level hybridizes with the \(2\mathrm{s}\) energy level, forming two \(\mathrm{sp}\) hybrid orbitals. The remaining \(2\mathrm{p}\) orbitals form two \(\pi\) bonds, yielding a triple bond between the carbon atoms.


    In summary, hybridization occurs to lower the overall energy of the system: atomic orbitals combine with each other to form mixed states with a lower average energy. Knowing the hybridization of the molecule is equivalent to knowing the molecular shape: VSEPR gives the geometric name corresponding to the specific combination of bond angles that minimize the overall energy of the system.
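A handy shortcut consistent with this summary: count the \(\sigma\) bonds plus lone pairs around an atom, and that number of electron domains sets the hybridization, while any \(\pi\) bonds use the leftover unhybridized \(\mathrm{p}\) orbitals. A minimal Python sketch of the counting (my own illustration, covering only \(\mathrm{sp}\), \(\mathrm{sp}^2\), and \(\mathrm{sp}^3\)):

```python
# Electron domains (sigma bonds + lone pairs) -> hybridization.
HYBRIDIZATION = {2: "sp", 3: "sp2", 4: "sp3"}

def hybridization(sigma_bonds: int, lone_pairs: int) -> str:
    return HYBRIDIZATION.get(sigma_bonds + lone_pairs, "beyond this lecture")

print(hybridization(4, 0))  # carbon in CH4: sp3
print(hybridization(3, 0))  # each carbon in C2H4 (ethene): sp2
print(hybridization(2, 0))  # each carbon in C2H2 (acetylene): sp
print(hybridization(2, 2))  # oxygen in H2O: sp3
```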

    Why this matters


More than 3 billion people on this planet live in water-stressed regions. 1.8 billion people drink fecally contaminated water. 600 million people boil their water to clean it. In places where water scarcity is a serious issue, by some estimates 70% of all disease and 30% of all death can be attributed to the lack of water or water quality. Fresh water makes up just 2.5% of all water on earth, but more than 2/3 of that is tied up in glaciers. This means that only 1% of all water on the planet is drinkable, and the balance between the supply of this precious resource and the demand for it is way off.

    Given the level of global crisis that access to freshwater has become, it makes a lot of sense to turn our attention to the other 97% of water on the planet, saltwater in the oceans. The problem, of course, being that it’s not drinkable (or useful in most agriculture), unless the salt is removed. The good news is that desalination is growing in terms of use and installed capacity, but the bad news is that it still costs far too much to become a ubiquitous substitute for groundwater.


    How can we work on lowering the cost and increasing efficiency in desalination? First of all, we need to know what the current cost breakdown is, and as you can see in the pie chart in this image, the major cost of desalination is in the energy it takes to pump water through the system. By “system” I mean a pretreatment facility where seawater is run through sand to filter out large impurities (shells, rocks, seaweed, etc.), followed by the actual “desal” part of desalination, where the water is run through a set of membranes (40,000 of them in the plant shown in the picture) that remove the salt and allow freshwater to pass. Pumping the saltwater through these membranes requires by far the biggest part of the plant’s energy consumption, so improving the membranes and making them more energy efficient is crucial. In fact, it’s not just that the membranes used today are energy inefficient, but it’s also that they get fouled up (bacteria and other organic matter grow on their pores) and are extremely delicate so can’t be cleaned very well. This means for most of the time the plant is paying higher energy costs than it needs to (by a factor of 2 or 3 sometimes!) because it has to pump water through membranes that are filthy and blocked and can’t be cleaned. Now that’s what I call a materials design opportunity! Make a better membrane for the salt removal step of desalination, and make the process cheaper. That’s how we come to our Why This Matters and connection to today’s lecture.

    We learned about the sigma and pi bonds that can form when the \(\mathrm{AO}\)s of carbon atoms hybridize. 


In the examples in class we had carbon bonded to hydrogen and sometimes also to itself. If we only have \(\mathrm{sp}^2\) hybridization and we only have \(\mathrm{C}\) atoms and no \(\mathrm{H}\) atoms, then we arrive at a very, very cool material: graphene. It's bonded together in a honeycomb lattice sheet (a 2D sheet) with \(\mathrm{sp}^2\) bonds, and those extra \(\mathrm{p}\) electrons form delocalized \(\pi\) bonds above and below the plane to give it a huge stability boost. It's a very cool material and its isolation from graphite (which is just stacks of graphene) won the Nobel Prize in 2010. (The scientists who discovered graphene were able to separate it from a chunk of graphite using only run-of-the-mill tape! So remember, sometimes all you need for a Nobel Prize is pencil lead, tape, and determination.) I can't possibly go into all the details for why graphene is cool, but if you search online you'll see right away. Another consequence of graphene is that it launched a whole field of "2D materials," where researchers have realized that so many other materials can be made into sheets that are only one or a few atoms thick. It's now even possible to make completely new stacks of 3D materials by mixing and matching 2D sheets (check out for example the paper by Geim and Grigorieva, "Van der Waals heterostructures," Nature volume 499, pages 419-425 (25 July 2013)). By the way, Geim was one of the two scientists who won the Nobel prize for graphene's discovery. The other is Novoselov, but Geim is a bit special as he is the only person to ever have won both the Nobel and the Ig Nobel prizes, the latter for levitating frogs. But I digress.

The point is that graphene may just be the ultimate membrane. It’s only 1 atom thick, so in terms of viscous loss it’s hard to beat. Plus, it’s much more resilient than today’s polymer membranes so it can be cleaned easily. It’s no wonder graphene’s been considered as a potential desalination membrane since 2012, even by people you might already know; see, for example, “Water Desalination across Nanoporous Graphene,” by Cohen-Tanugi and Grossman, Nano Letters volume 12, pages 3602-3608 (2012). Because of its massive potential in water desalination and also water treatment and purification in general, there are many research groups and even already a number of companies working towards commercializing graphene-based membranes (Via Separations, for example). This is all extremely exciting, but it’s also only possible because of the hybridization that occurs in carbon atoms, which combined with those pi bonds allows them to take the form of graphene.

    Why this employs

    This is a tough one since hybridization in chemistry is what enables so many molecules to exist at all, and this impacts nearly all job sectors, not to mention life itself. But since we covered graphene in Why This Matters let’s stick to graphene here in the Employment category too. There are companies directly manufacturing graphene and investing a lot of $$ into making it cheap, at large scales, and at high quality (meaning very few defects if possible), or with tailored functional chemistries. There are also different versions of graphene, from pure graphene to graphene-oxide to reduced graphene oxide and so on. Graphene Supermarket, ACS Material, or Graphenea are all examples of companies making graphene products and all of them have job openings for students who know about hybridization.

But graphene production has been embraced in a big way by large companies, too. Toshiba has invested over $50M in manufacturing plants for new carbon materials, from graphene to other \(\mathrm{sp}^2\) carbon nanostructures like carbon nanotubes and fullerenes. Other large chemical producers have joined the club, like Cemtrex, Mitsubishi Chemical, Cabot, or Aixtron, to name only a few examples. And these are all just companies thinking about making graphene, but then if we expand to ones that are using it to improve their technology, the list goes on and on. This is especially true in battery companies, both big and small, where the use of \(\mathrm{sp}^2\) carbon nanostructures like graphene has tremendous potential. Applications involving catalysis and electronics are also fantastic candidates for the use of graphene.

    Extra practice

    1. Look at a single carbon in the portion of a diamond lattice below. Convince yourself that the structure could keep growing outwards infinitely in all directions.


    a) What is the formal charge on any one carbon atom?

    Answer

    0, it’s forming 4 bonds

    b) What is the hybridization of any one bond? How do you know?

    Answer

\(\mathrm{C}\) is forming 4 sigma bonds, so it must have 4 equivalent hybridized orbitals: it must have used all three \(\mathrm{p}\)-orbitals and its \(\mathrm{s}\)-orbital: \(\mathrm{sp}^3\)

    c) Is this a resonant structure? How do you know?

    Answer

    No - the electrons are all forming sigma bonds, so there’s no other configuration of electrons allowed

    2. Look at a single carbon atom in the portion of a buckminsterfullerene molecule below (yes, it’s the logo for our class!)


    a) The formal charge on any one \(\mathrm{C}\) atom is 0. How many sigma and pi bonds must each carbon be forming?

    Answer

    There has to be a double bond associated with each carbon for it to be forming 4 bonds and have 0 formal charge, so each carbon is forming 3 sigma bonds and 1 pi bond

    b) Is VSEPR satisfied? Is it almost satisfied?

    Answer

Each carbon has three \(\sigma\) bonding domains (the \(\pi\) bond shares a domain with one of them), so VSEPR predicts trigonal planar geometry with roughly \(120^{\circ}\) angles. On the curved surface of the fullerene the carbons are pulled slightly out of plane and the angles deviate from \(120^{\circ}\), so VSEPR is almost, but not exactly, satisfied.

    c) What is the hybridization of any one sigma bond? How do you know?

    d) Is this a resonant structure? How do you know?

    Answer to c and d

c) Each carbon forms three sigma bonds that lie (nearly) in a plane, so each carbon must be \(\mathrm{sp}^2\) hybridized: one \(\mathrm{s}\)-orbital mixed with two \(\mathrm{p}\)-orbitals.

d) Yes - there are different configurations in which the pi-bonds can be made over the whole fullerene molecule, so there are resonances. Also, we infer that the \(\mathrm{p}\)-orbitals forming the pi-bonds can either add or subtract, giving delocalized electron molecular orbitals - pointing to resonance

    Lecture 13: Intermolecular interactions

    Summary

    This lecture focused on intermolecular forces (IMFs), which are interactions between molecules weaker than ionic or covalent bonds, but that on a larger scale take on an enormous role in giving materials their properties. First, we defined a dipole: a pair of charges, one positive and one negative, separated by a distance. We have already seen dipoles in this class—a covalent bond that involves a difference in electronegativities, for example if one atom is electropositive and the other electronegative, forms a dipole. The \(\mathrm{H} - \mathrm{Cl}\) molecule is a simple example shown here, where the arrow in the diagram points towards the more electronegative chlorine, and the \(\delta^+ / \delta^-\) indicate an excess of positive/negative charge. The arrow is the direction of the dipole. It is possible for a molecule with polar covalent bonds to have no net dipole, as shown for \(\mathrm{CO}_2\) below. This is because its two dipoles cancel. Water has a net dipole only in the y-direction. 
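To see the cancellation explicitly, here is a tiny Python sketch (my own illustration, using arbitrary unit magnitudes for the bond dipoles and the approximate \(104.5^{\circ}\) \(\mathrm{H-O-H}\) angle) that simply adds bond-dipole vectors:

```python
import math

def net_dipole(bond_dipoles):
    """Sum bond-dipole vectors given as (x, y) components; return the magnitude."""
    x = sum(d[0] for d in bond_dipoles)
    y = sum(d[1] for d in bond_dipoles)
    return math.hypot(x, y)

# CO2: two equal C=O bond dipoles pointing in opposite directions -> they cancel.
print(net_dipole([(1.0, 0.0), (-1.0, 0.0)]))             # 0.0 -> no net dipole

# H2O: two O-H bond dipoles at the ~104.5 degree bond angle; the x components
# cancel but the y components add, leaving a net dipole along y only.
half = math.radians(104.5 / 2)
print(net_dipole([(math.sin(half), math.cos(half)),
                  (-math.sin(half), math.cos(half))]))    # ~1.2 (arbitrary units)
```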


    The presence of a net dipole means that the dipoles of the molecule will feel attraction to opposite charges. This other charge could be an ion, or another dipole. The former is called an ion-dipole interaction, and the latter a dipole-dipole interaction. These are attractions between molecules rather than bonds within them.


    It is still possible for molecules with no net dipole to participate in intermolecular bonding, because even nonpolar molecules can experience temporary dipoles due to fluctuations in the electron cloud. These temporary dipoles can happen due to interactions with surrounding charges, in which case they are called induced dipoles. These can interact with ions or permanent dipoles or other induced dipoles. These fluctuations in the electron cloud can also be a result of temperature. When these temporary dipoles interact with each other, the attractive forces are called London dispersion forces (LDF). It is easier to temporarily deform the electron clouds of some molecules compared to others. They have differences in polarizability. For example, larger atoms are more polarizable than smaller atoms due to their outer electrons being less affected by nuclear pull. The more polarizable a molecule is, the stronger its LDF. The LDF also depends on the surface area of the molecule, since greater surface area means more electron cloud that can be temporarily induced to shift, which in turn means higher LDF. Finally, we discussed hydrogen bonds, which occur when a hydrogen atom attached to a highly electronegative element in a molecule (such as nitrogen, oxygen, or fluorine) is attracted to a negatively charged region, like a lone pair of electrons.
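As a rough study aid, the hierarchy above can be written as a tiny decision function. This is a simplification I am adding for illustration: the inputs are hand-assigned flags rather than the result of a real Lewis-structure analysis, and every molecule experiences London dispersion forces regardless of which interaction dominates.

```python
def strongest_imf(h_on_N_O_F: bool, net_dipole: bool) -> str:
    """Strongest intermolecular force available to a pure substance."""
    if h_on_N_O_F:          # H bonded to N, O, or F
        return "hydrogen bonding"
    if net_dipole:          # permanent net dipole
        return "dipole-dipole"
    return "London dispersion only"

print(strongest_imf(True, True))    # H2O            -> hydrogen bonding
print(strongest_imf(False, True))   # HCl            -> dipole-dipole
print(strongest_imf(False, False))  # CCl4, N2, CH4  -> London dispersion only
```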

    Why this matters

    https://www.americanscientist.org/article/how-gecko-toes-stick

    Weak bonds aren’t actually all that weak, especially when there are a lot of them. The gecko is a perfect example of this. It’s a remarkable animal that can walk up and down walls and even across the ceiling without breaking a sweat! The reason is that its toes are padded with microscopic hairs (called “setae”), and each of these in turn has hundreds of nanoscale branches. One gecko toe can have as many as a billion little hairs! These hairs hold the key to its seemingly magical adhesive abilities and the reason there are so many of them is precisely because of what we learned in this lecture: weak interactions like van der Waals get stronger with more surface area. A billion little nano-strands on each toe gives the gecko’s foot a whole lot of surface area, especially when those strands lie nearly flat against whatever surface the gecko is climbing or sticking to. When the strand runs parallel to a surface, it maximizes the amount of the strand that can engage in van der Waals attraction with that surface. When the strand is more perpendicular, the force is dramatically reduced.

    https://www.smh.com.au/technology/stanford-university-students-create-gecko-gloves-that-allow-humans-to-scale-glass-walls-20141226-12dx31.html

    Herein lies the secret of the Gecko, because it’s not enough for it to have a super-adhesive toe, but it also needs to be able to alternate between super-adhesive and not adhesive at all. Otherwise, it would stick to the wall and not be able to move! So, the Gecko clearly knows all about the dependence of van der Waals attraction on surface area contact, and each time it takes a step it adjusts toe adhesion by changing the angle of its billions of toe hairs.

    This matters not only because it’s such a cool illustration of how amazing nature can be, but also because of how relevant improved reversible adhesives would be for a broad range of applications. Yes, this includes scaling buildings like Tom Cruise in Mission Impossible 4, but it also would impact areas ranging from advanced manufacturing to treating wounds. Just think about what it could mean to have a tape that’s 1000 times stronger than current tape, and can be applied and removed thousands of times without loss of stickiness. Researchers and companies alike have been trying to mimic the Gecko’s adhesive abilities for decades, and still we’re not at a place where we can quite do it, although improved control over the chemistry and nanostructure of materials is getting us closer than ever. It all comes down to engineering the van der Waals attraction.

    Why this employs

    The last weak interaction we discussed in this chapter was that of the hydrogen bond. This bond is incredible in so many ways, not the least of which being the employment opportunities it provides. Hydrogen bonding is prevalent (and often dominant) in determining the way in which proteins fold and unfold, and in the way in which the helices of DNA hold together. It is the reason paper is possible, and the reason fish don’t freeze in the winter in a pond, to name only a few examples. But the example I want to focus on for this Why This Employs is that of surface coatings. The way a surface interacts with water can be engineered by modifying either the way in which the water hydrogen-bonds to the surface, the way in which water hydrogen-bonds to itself, or both.

    Coatings that repel water are needed all over the place, from keeping car windshields clear, to making clothes that stay dry in a rainstorm, to preventing condensation in power station turbines. The chemistry that goes into these coatings holds the key to controlling the hydrogen bonding, and there are a lot of companies interested in hiring people to work with such chemistry. This ranges from medium-sized materials companies like NEI Corporation, which makes a “superhydrophobic” coating called Nanomyte, or UltraTech, which makes spill-containment coatings, or Aculon, which states on their website, “Where there is a surface that has a problem, we like to think we can help solve that problem,” to larger companies like DuPont that makes specialized coatings for the Oil and Gas industry, to small start-ups like DropWise, founded by our very own Course 2 Professor Kripa Varanasi. These are only a handful of examples of the many companies building products around controlling hydrogen bonds.

    Extra practice

    1. At \(\mathrm{T}=25^{\circ} \mathrm{C}, \mathrm{F}_2\) and \(\mathrm{Cl}_2\) are gases, \(\mathrm{Br}_2\) is a liquid, and \(\mathrm{I}_2\) is a solid. This is because (choose one):

    A. London interactions increase with molecular size.
    B. Dipole-induced dipole interactions increase with molecular size.
    C. Dipole-dipole interactions increase with molecular size.
    D. Polarizability increases with molecular size.
    E. London interactions increase with molecular size and polarizability increases with molecular size

    Answer

    E

    2. Based on the following information:

    \(\mathrm{CF}_4\), Molecular Weight \(87.99\), Normal Boiling Point \(-182^{\circ} \mathrm{C}\)
    \(\mathrm{CCl}_4\), Molecular Weight \(153.8\), Normal Boiling Point \(-123^{\circ} \mathrm{C}\)

    The intermolecular forces of attraction in the above substances are described by which of the following (choose one):

    A. gravitational forces

    B. London forces

    C. ion-dipole forces

    D. dipole-dipole forces (permanent dipoles)

    E. repulsive forces

    Answer

    B

    3. Acrylic acid is pictured below.


    What is the dominant intermolecular force in a solution of acrylic acid? (also, what’s the hybridization around every atom?)

    Answer

\(\mathrm{H}\)-bonding (all the carbons and the carbonyl oxygen are \(\mathrm{sp}^2\); the \(\mathrm{O}\) of the \(\mathrm{O}-\mathrm{H}\) group is \(\mathrm{sp}^3\))

    Lecture 14: Phases

    Summary

Atmospheric pressure is the force per unit area exerted by the mass of the atmosphere onto everything it envelops, and other gases, liquids, and solids can exert pressure in a similar fashion. In a closed system, an equilibrium develops as molecules convert from one phase to another (like liquid to gas), and vice versa (gas back to liquid). The pressure that one phase exerts on another depends strongly on what intermolecular forces are present in the system. Strong IMFs lead to low vapor pressures, while weak IMFs lead to high vapor pressures.

    The vapor pressure of a material is different than the weight of a gas; rather, it’s related to how likely a liquid or solid is to lose molecules to the gas phase. If the vapor pressure is higher than atmospheric pressure, the weight of the atmosphere isn’t enough to trap the gas that is released, and the material begins to boil.

So far, boiling has been entirely described with respect to pressure, but temperature makes a difference too! The vapor pressure also depends strongly on temperature: as temperature rises, the vapor pressure increases exponentially. So as a material is heated up, its vapor pressure increases until it exceeds atmospheric pressure and it starts to boil. For example, if we start with water at \(25^{\circ} \mathrm{C}\), its vapor pressure is about \(0.03\) atm. By the time it reaches \(100^{\circ} \mathrm{C}\), the vapor pressure has increased more than 30-fold to reach \(1 \mathrm{~atm}\), at which point it begins to boil.

    We learned previously that a higher temperature yields higher kinetic energy and more molecular movement. However, when a material is "at" a certain temperature, not every molecule moves around the same. The distribution of kinetic energies associated with a given temperature can be very broad, with long tails. So temperature is used to describe the average kinetic energy, but some molecules will have more and some will have less. The Clausius-Clapeyron equation gives the exponential relationship between vapor pressure and temperature:

\(\ln P=-\dfrac{\Delta H_{\mathrm{vap}}}{R T}+C\)


    Diagram is in the public domain. Source: Wikimedia Commons.

where \(\Delta H_{\mathrm{vap}}\) is the enthalpy of vaporization, or the energy per mole required to convert a liquid molecule into a gas molecule. There is a latent heat or enthalpy involved in every kind of phase change: \(\Delta H_{\mathrm{vap}}\) for vaporization, \(\Delta H_{\mathrm{fus}}\) for fusion, and \(\Delta H_{\mathrm{sub}}\) for sublimation. The enthalpy of sublimation is always larger than the enthalpy of fusion, because the solid-state atoms/molecules are more strongly bonded than liquid atoms/molecules. Hess' law tells us that any path from one state to another is a valid path, because the total energy required to change states will be the sum of energy differences along the path.
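As a quick numerical illustration of the Clausius-Clapeyron relation (my own sketch, treating \(\Delta H_{\mathrm{vap}}\) of water as a constant \(40.7 \mathrm{~kJ/mol}\), which is only approximately true over this temperature range), the two-point form can estimate the room-temperature vapor pressure of water from its normal boiling point:

```python
import math

R = 8.314  # J/(mol K)

def vapor_pressure(T, T_ref, P_ref, dH_vap):
    """Two-point Clausius-Clapeyron: ln(P/P_ref) = -(dH_vap/R)(1/T - 1/T_ref)."""
    return P_ref * math.exp(-(dH_vap / R) * (1.0 / T - 1.0 / T_ref))

# Water: 1 atm at its normal boiling point of 373.15 K; estimate at 298.15 K.
p_25C = vapor_pressure(298.15, 373.15, 1.0, 40_700.0)
print(f"{p_25C:.3f} atm")  # ~0.03-0.04 atm, the same order as the value quoted above
```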

    Phase diagrams are helpful to know what the equilibrium phase of a material is at a given temperature and pressure. Phase boundaries are called coexistence curves: at these points, two phases exist simultaneously. If a phase boundary is crossed, a phase change will occur. Phase diagrams are like “maps” of a material.

    Why this matters


    Heat is an essential form of energy transfer for human beings, not only for maintaining body temperature but also for a range of primary and advanced activities, from cooking and sterilizing, to operating engines and generating electricity. Control of thermal energy to benefit humanity dates back to pre-historic times and heat remains the single most valuable energy currency of our existence. As illustrated in this figure, nearly 90% of all energy consumed globally is either generated by or consumed as thermal energy at some point during the supply/demand cycle. It is therefore surprising, even striking, that to date we have no viable means for storing thermal energy. Unlike mechanical energy, where material can be simply pumped up a potential energy hill and held there until needed as in the case of pumped hydro, or electrical energy where electrons can be pushed up an electrochemical hill and held there as in a battery, for the case of phonons – the carriers of thermal energy – there is no such stability. Instead, thermal energy rolls back down the energy hill and cannot be stopped. Heat always dissipates. It leaks and we cannot contain it no matter how hard we work to insulate a thermal reservoir.

The slow leaking of thermal energy out of a material that’s hotter than its surroundings is called “sensible heat.” Eventually, enough energy will be transferred to the environment to make the material the same temperature as the surroundings. In fact, if you take a thermodynamics class you’ll learn that the very definition of temperature is, “that which is equal when heat ceases to flow between systems in thermal contact.” So one way to store thermal energy is to heat something up; then that something will leak its heat out at some rate into some environment until its temperature is the same as that of the environment. The storage and release of sensible heat is used all the time, from a heating pack that you microwave and put on your sore neck, to focusing sunlight on a concrete slab as in a solar thermal power plant, to the old idea of heating an entire mountain for long-term energy generation as it slowly cools down. (That last one hasn’t been tried yet, in case you’re wondering... but it’s a neat concept!) But whether a heat pack or a mountain, the problem with sensible heat is that you get what you get with almost no control. Apart from how much insulation you package your material with, there aren’t many good ways to slow the flow of heat, and no ways at all to stop it. Because in this mode of thermal storage we heat something up to some temperature and it starts cooling right away, there’s also no control over the temperature you get out of the stored energy: it’s constantly changing as it cools.

    Ah, but there is in fact a way to "hold on" to thermal energy, and then release it back as heat when needed, much like electricity in batteries. And it gives you complete and total control over the release temperature. In fact, a simple bucket of wax is a thermal storage system. It works by taking advantage of its phase change: when wax melts it changes from solid to liquid, and the energy required to make this phase change happen is large (that would be the enthalpy of fusion, \(\Delta H_{f u s}\), as discussed in the lecture). When the wax cools back down it becomes a solid again, and all that phase change energy is given back out upon solidification. Materials that are used like this, where the enthalpy of one of its phase changes is used as a way to store and release thermal energy, are called "phase change materials" (PCMs). It's a bit of an odd name since all materials are phase change materials, but the PCM label has stuck as a way to refer to ones where we try to do something useful technologically or scientifically with the energy stored/released by the phase change.

PCMs are used in many applications, including the simple heating pack (which I'll get to in a moment), but also in high temperature applications. Take the molten salts used in solar thermal power plants as an example. Sunlight is focused onto a salt or mixture of salts (like sodium and potassium nitrate for example), which melts the salt into a liquid during the day. When it cools down at night the now-solidifying salt maintains a precise temperature at its phase change for a long period of time (which is good for a power plant), until it freezes completely back into a solid. There are many other PCM materials and many other uses of PCM materials, and knowing the phase diagram so that one can dial in exactly the right temperature of stored heat release is the first step to making it possible!


    And let me conclude this Why This Matters with one more striking fact. I started by explaining that 90% of all energy is generated/consumed as thermal energy. But a massive 60% of the heat generated to power our world is wasted! We could capture and recycle much of that energy. There’s a lot of interest and research in the development of new PCMs with higher storage densities (higher enthalpies of fusion), which means new chemistries. The ability to make a new PCM that possesses all of the key desirable metrics for a given application requires a deep understanding of the intermolecular interactions in the PCM which in turn lead to its phase diagram.

There’s also exciting new work towards making PCMs “triggerable,” so that the material stays stuck in the liquid state until some external trigger is applied. Then, when triggered, it solidifies and releases all of its phase change energy, ready to be “charged” again as a solid. This already happens in some cases, and it takes advantage of the phenomenon of supercooling that we discussed in the lecture. A pack of sodium acetate trihydrate (chemistry shown here), for example, is fairly easy to supercool: when the material heats up past its melting point of \(58^{\circ}\mathrm{C}\) it becomes a liquid, but it can be cooled back down to room temperature without solidifying right away. It stays metastable in a supercooled state, until some trigger is applied, like a little external mechanical force (or an ice cube would do it too). That trigger reminds the material that it actually wants to be a solid at this temperature, in other words it provides a nucleation site for the solid phase, and then since it’s a good \(30^{\circ}\mathrm{C}\) below the melting point it all ends up solidifying and giving off all its phase change (enthalpy of fusion) energy. There’s a lot of interest and work towards developing new ways to keep PCMs supercooled, which in turn could make thermal energy storage look much more like electrical energy storage: triggerable and distributable energy on demand. That, in turn, would revolutionize the energy consumption landscape!

    Why this employs

    How can a phase diagram get you a job? It's easier than you might have thought! Almost every company makes or uses materials, and in using materials, companies have needs. These needs could mean developing completely new materials, or slightly tweaking the properties of existing ones. Either way, the starting place for any materials design is the phase diagram of the material. And one place where that starts is in knowing how to calculate phase diagrams. There are companies you could work at that directly make and sell sophisticated software to compute phase diagrams (check out Thermo-Calc or FactSage, for example), and then there are the thousands of companies that post jobs where knowledge of how to use such software is required.

    Want to design new rocket engines? Lockheed has openings where deep knowledge of thermodynamics is required. Or how about the "thermodynamics engineer" posting at United Launch Alliance (which makes rocket engines too). Or if you want to make buildings more energy efficient you could join PassiveLogic, which is working on "developing the next generation control technology for the future of smart cities," and is currently seeking a "thermodynamics and simulation intern." Or you could join one of MIT Course 3's spin-outs, Boston Metal, which is working on greener metal production and has a current opening that requests applicants have a "strong understanding of thermodynamics/phase equilibrium." Of course, this list goes into the thousands because ultimately every form of engineering uses materials, and the form a material takes is governed by its phase diagram.

    Extra practice

    1. Rank the following molecules in ascending vapor pressure order:

    i) Hexane ii) Methanol iii) Isooctane
    [Figures: structures of hexane, methanol, and isooctane]
    Answer

    Methanol has the lowest vapor pressure because its \(\mathrm{OH}\) group allows it to hydrogen bond to other methanol molecules. Next is hexane: though it has no hydrogen bonding capability, it is more linear than isooctane and so experiences stronger London dispersion forces. Isooctane has the highest vapor pressure because it has no hydrogen bonding capability and is less stackable than hexane, so its London dispersion forces are weaker.

    2. Three ice cubes are used to chill a soda at \(20 \mathrm{C}\). The soda has a mass of \(0.25 \mathrm{~kg}\). The ice is at \(0 \mathrm{C}\) and each ice cube has mass \(6.0 \mathrm{~g}\). Find the temperature when all ice has melted.

    Answer

    The ice cubes are at the melting temperature of \(0 \mathrm{C}\). Heat transfers from the soda to the ice. Melting of the ice occurs in two steps: first the phase change occurs and solid ice transforms into liquid water at the melting temperature, then the temperature of this water rises. Melting yields water at \(0 \mathrm{C}\), so more heat is transferred from the soda to the water until the water plus soda system reaches thermal equilibrium.

    \[Q_{\text {ice }}=-Q_{\text {soda }} \nonumber\]

    The heat transferred to the ice, and the heat given up by the soda, are:

    \[Q_{\text {ice }}=m_{\text {ice }} \times L_f+m_{\text {ice }} \times c_W \times\left(T_f-0^{\circ}\mathrm{C}\right), \qquad Q_{\text {soda }}=m_{\text {soda }} \times c_W \times\left(T_f-20^{\circ}\mathrm{C}\right) \nonumber\]

    Bringing all terms involving \(T_f\) to the left-hand side and all other terms to the right-hand side, solve for the unknown quantity \(T_f\):

    \[T_f=\dfrac{m_{\text {soda }} \times c_W \times 20 C-m_{i c e} \times L_f}{\left(m_{\text {soda }}+m_{i c e}\right) \times c_W} \nonumber\]

    The resulting value is \(T_f \approx 13^{\circ}\mathrm{C}\).
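    As a quick numerical check, here is a minimal sketch of the same energy balance (the latent heat of fusion and specific heat of water are standard textbook values, not given in the problem):

```python
# Final temperature after three 6.0 g ice cubes (at 0 C) melt in 0.25 kg of soda at 20 C.
L_f = 334e3    # J/kg, enthalpy of fusion of ice (assumed standard value)
c_W = 4186.0   # J/(kg*K), specific heat of liquid water; the soda is treated as water
m_soda, T_soda = 0.25, 20.0   # kg, degrees C
m_ice = 3 * 6.0e-3            # kg, three 6.0 g cubes

# Energy balance: m_ice*L_f + m_ice*c_W*(T_f - 0) = m_soda*c_W*(T_soda - T_f)
T_f = (m_soda * c_W * T_soda - m_ice * L_f) / ((m_soda + m_ice) * c_W)
print(f"T_f = {T_f:.1f} C")   # about 13 C
```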

    3. Imagine a substance with the following points on the phase diagram: a triple point at \(0.5\) atm and \(-5^{\circ} \mathrm{C}\); a normal melting point at \(20^{\circ} \mathrm{C}\); a normal boiling point at \(150^{\circ} \mathrm{C}\); and a critical point at \(5 \mathrm{~atm}\) and \(1000^{\circ} \mathrm{C}\). The solid-liquid line is "normal" (meaning positively sloping).

    a) Roughly sketch the phase diagram, using units of atmosphere and Kelvin.

    Answer

    The sketch should show four regions, 1-solid, 2-liquid, 3-gas, and 4-supercritical fluid, with point \(\mathrm{O}\) the triple point at \(0.5\) atm and \(268 \mathrm{~K}\) and point \(\mathrm{C}\) the critical point at \(5\) atm and \(1273 \mathrm{~K}\). The solid-liquid line rises with positive slope from \(\mathrm{O}\), crossing \(1\) atm at \(293 \mathrm{~K}\); the liquid-gas line runs from \(\mathrm{O}\) through \(1\) atm at \(423 \mathrm{~K}\) and ends at \(\mathrm{C}\).

    b) Describe what one would see at pressures and temperatures above 5 atm and \(1000^{\circ} \mathrm{C}\).

    Answer

    One would see a supercritical fluid; when approaching the critical point, one would see the meniscus between the liquid and gas disappear.

    c) Describe what will happen to the substance when it begins in a vacuum at \(-15^{\circ} \mathrm{C}\) and is slowly pressurized.

    Answer

    The substance would begin as a gas and, as the pressure increases, it would compress and eventually solidify without ever liquefying, since the temperature is below the triple point temperature.

    Lecture 15: Electronic bands in solids

    Summary

    When two atoms come together, their atomic orbitals combine to form a set of bonding and antibonding molecular orbitals (\(\mathrm{MO}\)s). When \(10^{24}\) atoms come together to form a solid, their atomic orbitals combine to form a continuous band of electronic states. The bands are filled up with electrons: how full they get depends on how many electrons each constituent contributes and how closely spaced in energy the bands are.

    [Figure: band diagrams for Na, Mg, and Al]

    For example, here is the band energy diagram for \(\mathrm{Na}\), \(\mathrm{Mg}\), and \(\mathrm{Al}\). All of these elements have full \(1\mathrm{s}\), \(2\mathrm{s}\), and \(2 \mathrm{p}\) bands: these "core bands" are therefore completely filled with electrons. \(\mathrm{Na}\) has one additional \(3 \mathrm{~s}^1\) valence electron, which fills up \(1 / 8\) of the combined \(3 \mathrm{~s} / 3 \mathrm{p}\) band. Similarly, \(\mathrm{Mg}\) has valence electron occupation of \(3 \mathrm{~s}^2\) and \(\mathrm{Al}\) has \(3 \mathrm{~s}^2 3 \mathrm{p}^1\), which fill up the \(3 \mathrm{~s} / 3 \mathrm{p}\) band even more.

    Here, the \(3 \mathrm{~s}\) and \(3 \mathrm{p}\) bands are shown as one combined band: when bands are very close in energy, they can overlap, and essentially combine to form one continuous set of states. When there is an energy gap between the most energetic electrons that live in one band and the least energetic electrons that live in the next band, a band gap forms, like those shown here between \(1\mathrm{s}\) and \(2\mathrm{s}\), \(2\mathrm{s}\) and \(2 \mathrm{p}\), and \(2 \mathrm{p}\) and \(3 \mathrm{~s} / 3 \mathrm{p}\) bands. Note that the term band gap is reserved to describe only one of these band gaps, as discussed below.

    Knowing the band filling tells us a lot about the properties of the solid. If the highest energy band is partially filled, the material is a metal. If the highest filled band is completely full of electrons, then there is a gap between it and the next-highest band: this is the band gap of the material, and these solids are called semiconductors. In semiconductors, the highest energy band that is completely filled with electrons is called the valence band, and the next highest unoccupied band is called the conduction band. If the band gap is very large, say \(>4\) or \(5 \mathrm{eV}\), then the material is electrically insulating.

    The band gap has a property similar to that in the antibonding \(\mathrm{MO}\) or nodes in higher-energy \(\mathrm{AO}\)s: namely, all of the energies within the band gap are forbidden, so all of the electrons must be in a band, not in-between. Within a band, electrons can move around: if there is no gap and the material is a metal, it is easy for electrons to freely move around, and conductivity is high. It’s harder for electrons to move around in a semiconductor, but it’s possible for an electron to get an energy boost and transition from one band to another; often, these boosts come from light or heat. Of course, it takes less energy to cross a small gap between two states than a large one: it is much more likely for thermal energy (about \(0.025 \mathrm{eV}\)/atom at room temperature, \(300\mathrm{K}\)) to excite an electron into the conduction band of a semiconductor like \(\mathrm{Si}\) than an insulator like diamond (\(\mathrm{C}\)).

    Electrons can also be excited into the conduction band via light: semiconductors with band gaps in the range of about \(1.5-3 \mathrm{eV}\) can be excited by visible light (about \(400-800 \mathrm{~nm}\)). If a photon carries at least enough energy to match the bandgap, an electron can jump across the forbidden energy region. In this case, it jumps to the conduction band and becomes free to move around! With this in mind, it is possible to control the flow of electrons across the gap, and thus the conductivity of the semiconductor, with light. If the band gap is larger than the energy of even the shortest-wavelength (highest-energy) visible light, the material can't absorb visible photons at all, and it is transparent in the visible range.

    Why this matters

    [Figure]

    The world we live in uses a lot of light. And in case you haven’t noticed, more and more of that light comes from LEDs, or light emitting diodes. Whether it’s your phone or laptop or television or car or fridge or house or office or street, odds are the way it lights up is with LEDs. And those work because they are semiconductors. As discussed in lecture, a semiconductor forms when the bonding in a material together with its structural arrangement leads to the formation of a band gap of just the right size (a few \(\mathrm{eV}\)). From this we immediately have two well-known devices. First, electrons in the valence band maximum (VBM) can be excited by light into the conduction band minimum (CBM) where they conduct, a device otherwise known as a photo-detector. Second, we can have the opposite: put electrons into the CBM and as they cascade down to the VBM, photons are emitted, a device otherwise known as an LED.

    In lecture 3, I used television as the topic of Why This Matters and mentioned that the reason color TV took so long to develop was that stable, inexpensive red phosphors were a challenge. For LEDs it turns out it was also a particular color that delayed their ubiquity, but in this case it was blue, which sits at the high-energy end of the visible spectrum and so requires a larger band gap. For phosphors it was chemistry that held the key to inventing new molecules that could absorb electrons and shine red. Once again, 50 years later, it was chemistry that held the key to unlocking a new material, in this case a semiconductor that could take a current of electrons and convert them into blue light. The trick comes in being able to engineer the band gap of the solid.

    Check out this plot of band gap vs. the lattice constant, which is a measure of the spacing between atoms in a periodically repeating crystal. We’ll be digging into crystals in a few weeks, but for now you can consider the x-axis to be simply any structural feature of the solid. The point is that there is a strong dependence, and once this dependence was known and understood, then new alloys could be developed with just the right chemistry and therefore structure, to give just the right bandgap and therefore color.

    [Figure: band gap vs. lattice constant]

    Take a look at \(\mathrm{GaN}\) on the plot and notice that its bandgap is just a tad too high if we want blue to be emitted as electrons cascade from its CBM to VBM. On the other hand, other materials like InN have bandgaps that are far lower in energy than what is needed to emit blue. It turns out that alloying different materials together is one of the most effective ways to “tune the bandgap,” and that’s exactly how scientists got to blue. They already had red and green, so once they got blue they were able to combine them all together to get white LEDs, which have completely taken over the market since.

    The idea of purposefully modifying the bandgap of a material is called bandgap engineering, and it is the centerpiece of the semiconductor revolution. It reaches far beyond LEDs, as the bandgap is a crucial property in lasers, transistors, detectors, and so much more of our electronic world.

    Why this employs

    How can knowledge of the band structure of a solid lead to a job? This may, in fact, be the easiest Why This Employs section to write of the whole semester! Semiconductors are so ubiquitous in our world that it’s nearly impossible to move without interacting with them. They’re in almost any and every electronic device. They’re so important some would call this the age of the semiconductor. Moore’s law depends on making tinier devices out of them each year. They make electricity from the sun. They provide light from electricity. They communicate for us. They compute for us. I could go on.

    The semiconductor industry is massive and there are many jobs in many companies, and many MIT students go on to work in this industry. There are so many jobs in the semiconductor industry that there’s a website called semiconductorjobs.com. And many other similar websites. Even more interestingly, the industry is no longer just about making faster chips or specialized chips like for wireless communication. It’s gotten so much bigger because of all of the various needs for these materials and devices, from consumer electronics to smart cars to Hollywood to space exploration to artificial intelligence. There are jobs related to all of these areas, both in the traditional semiconductor industry (think, Intel, AMD, Semtech, TI, and dozens more), but also in the companies working on these applications (think, Bosch, Toyota, Dreamworks, Microsoft, Google, and hundreds, maybe thousands more), as well as companies doing both, like Samsung.

    Example Problems

    1. Suppose your LED is made of silicon and you want to make it absorb longer wavelength light. Should you alloy it with germanium or carbon? Explain your answer.

    Answer

    Absorbing longer wavelengths requires a smaller bandgap. Thus, alloy with \(\mathrm{Ge}\): a larger atom means a smaller bandgap.

    2. Below is a plot of sunlight intensity at Earth’s surface as a function of wavelength.

    [Figure: sunlight intensity at Earth's surface vs. wavelength]

    Calculate the band gap wavelength of each of these well-known semiconductors, and mark the range of wavelengths of light that each semiconductor could absorb. Which do you think would make the worst solar cell? Choose one:

    A. \(\mathrm{Si}\) (band gap \(1.14 \mathrm{eV}\))
    B. \(\mathrm{Ge}\) (band gap \(0.67 \mathrm{eV}\))
    C. \(\mathrm{GaAs}\) (band gap \(1.39 \mathrm{eV}\))
    D. \(\mathrm{InSb}\) (band gap \(0.16 \mathrm{eV}\))

    Answer

    Use the good old energy to wavelength conversion trick:

    \[E(e V)=\dfrac{1240}{\lambda(n m)} \nonumber\]

    A. \(\mathrm{Si}\)  (band gap \(1.14 \mathrm{eV})=1088 \mathrm{~nm}=1.088\) microns

    B. \(\mathrm{Ge}\) (band gap \(0.67 \mathrm{eV})=1851 \mathrm{~nm}=1.851\) microns

    C. \(\mathrm{GaAs}\) (band gap \(1.39 \mathrm{eV})=892 \mathrm{~nm}=0.892\) microns

    D. \(\mathrm{InSb}\) (band gap \(0.16 \mathrm{eV})=7750 \mathrm{~nm}=7.750\) microns

    Each material will absorb light with wavelengths shorter than this band gap wavelength (so everything to the left of that wavelength on the graph).

    The InSb absorption spectrum will extend well off the side of the graph (it absorbs EVERYTHING), but because the bandgap of InSb is so small, the electrons it excites don't carry very much energy (they lose the excess to heat), so InSb (choice D) would make the worst solar cell.
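    The conversion is easy to script; here is a minimal sketch using the band gap values listed above (the \(E(\mathrm{eV}) = 1240/\lambda(\mathrm{nm})\) rule of thumb is the same one used in this answer):

```python
# Band gap (eV) to absorption-edge wavelength (nm) via lambda = 1240 / E.
band_gaps = {"Si": 1.14, "Ge": 0.67, "GaAs": 1.39, "InSb": 0.16}  # eV

for name, Eg in band_gaps.items():
    edge = 1240 / Eg   # nm; only photons with shorter wavelengths are absorbed
    print(f"{name}: band gap {Eg} eV -> absorption edge at {edge:.0f} nm")
```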


    3. You have three different materials: \(\mathrm{AlP}\) (band gap \(2.45 \mathrm{eV}\)), \(\mathrm{SiC}\) (band gap \(3.0 \mathrm{eV}\)), and \(\mathrm{CdSe}\) (band gap \(1.74 \mathrm{eV}\)). Which of the following three geometric arrangements is likely to be the most efficient at converting sunlight to electricity? Explain your answer.

    [Figure: the three geometric arrangements of AlP, SiC, and CdSe]

    Answer

    Top left: all wavelengths are absorbed, and the least energy is lost as heat (thermalization).

    Lecture 16: Semiconductors and Doping

    Summary

    Semiconductors have this name because their electrical conductivity lies somewhere between that of insulators and metals. The band gap of a semiconductor, meaning the energy separation between its valence band and conduction band, is smaller than that of an insulator (and a metal has essentially none). Initially, the electrons are bound to the lattice (the arrangement of atoms in the solid) and cannot conduct. Their energy states are within the valence band. In order for electrons to conduct, they must obtain enough energy to leave the valence band and enter the conduction band, which means crossing the band gap (a range of forbidden energy states).

    [Figure]

    Two ways in which these initially bound electrons can achieve conduction are via thermal energy transfer and through doping, which involves introducing impurities (a relatively small number of foreign atoms compared to the number of atoms in the lattice) to increase the density of charge carriers (electrons or holes, an absence of an electron which essentially acts as a positive charge). (Note: there's a third way we discussed in the last lecture, by kicking electrons above the gap using photons). There are two types of doping, \(\mathrm{p}\)-type and \(\mathrm{n}\)-type. \(\mathrm{P}\)-type doping involves adding impurities with fewer electrons than the atoms in the undoped lattice (e.g., aluminum doped into silicon). \(\mathrm{P}\)-type doping creates holes and an acceptor energy level just above the valence band; the presence of these holes enables conduction in the lattice.

    [Figure]

    \(\mathrm{N}\)-type doping involves adding impurities with more electrons than the atoms in the undoped lattice (e.g., phosphorus doped into silicon). \(\mathrm{N}\)-type doping introduces additional electrons (negative charge carriers). These electrons have an energy corresponding to the donor level, and are able to conduct with a much smaller addition of energy compared to the electrons in the valence band.

    Why this matters

    [Figure]

    The entire semiconductor industry is built on ways to dope \(\mathrm{Si}\), \(\mathrm{Ge}\), \(\mathrm{GaAs}\), and other semiconductors with precise amounts of desired impurities, to make \(\mathrm{n}\)-type and \(\mathrm{p}\)-type semiconductors which then get put in the billions into a single chip in extremely complicated arrangements. This is how the very first transistor was made, by putting an \(\mathrm{n}\)-type and \(\mathrm{p}\)-type semiconductor in contact with one another. You can learn all about the very first transistor in so many great resources (like this short video explanation for example, https://www.youtube.com/watch?v=JBtEckh3L9Q). Here’s a picture of that very first transistor, which doesn’t look like what you’d see on a computer chip today–but it’s the first p-n junction and it opened the door to a revolution.

    Just to get a little perspective: In 2014 250 billion billion transistors were made, which amounts to a pace of 8 trillion transistors per second. Here’s a chart of what we call Moore’s law, which isn’t a law at all but rather simply what happens when a lot of incredibly smart scientists and engineers work on something for 60 years. 

    [Figure: Moore's law]

    Speaking of smart, how do we, as humans, do our natural computing? How do the computers we make stack up to the brains we use to make them? You see, Moore’s law looks, and is, impressive, but it’s a measure of computing power, not power consumption. Data centers alone currently account for 3% of global \(\mathrm{CO}_2\) emissions and estimates are that within 10 years over 20% of all electricity will be consumed by IT. Most of this energy goes into sending current through trillions and trillions of \(\mathrm{p}\)-type and \(\mathrm{n}\)-type semiconductors.

    Just for fun, let’s compare the power consumed by a computer that has the equivalent number of processors as the human brain. A typical adult human brain has about \(10^{11}\) neurons, or nerve cells, each one connected to about \(10,000\) other neurons via connectors called synapses, which are the superhighways of information processing in the brain. A brain has a total of around \(10^{14}\) synapses. With 1 transistor being equivalent to about \(10\) synapses, we can build a computer as powerful as the brain, or at least with as much computing capacity, with \(10^6\) computer chips (each chip packing \(10\) billion transistors). This would require around \(10\mathrm{MW}\) to power. That’s \(10,000,000\) Watts of power needed to run the artificial brain. In comparison, the human brain runs on about \(20\) Watts!

    The reason I’m making this comparison in our Why This Matters is that the \(\mathrm{p} - \mathrm{n}\) junction and the transistor brought forth an amazing revolution spanning many orders of magnitude. But we’re still short another 6 zeros on power consumption reduction, if we ever want to try to match the brain, for which another revolution is clearly needed.

    Why this employs

    Since I brought up the human brain and compared it to computer chips, let's focus this section on the brain-machine interface. This has been, of course, the subject of many science fiction movies (yes, The Matrix), but it's also been the focus of many research labs and government programs for at least the past 30 years. This is because of the massive potential of the brain-machine interface to help patients with a wide range of neurodegenerative diseases. And all of this work has brought us to quite an exciting time in efforts to merge the brain with machines, one in which many new commercial entities are forming.

    Elon Musk, for example, launched Neuralink a few years ago. This is a pretty exciting company not only because of its initial concept, but also because they have brought in highly talented and interdisciplinary teams of scientists, doctors, and engineers. Many of the people involved in such an effort will need to know about semiconductors and doping, and in this case the serious challenge of creating electronics that are "brain friendly" (take a look at the work of Professor John Rogers, as an example). A number of companies have invested in this area, like Boston Scientific, Abbott, Blackrock Microsystems, CorTec, or NeuroNexus, to name only a few. And government programs are also expanding their efforts, for example through DARPA or the NIH BRAIN Initiative.

    Example problems

    1. Doping levels in semiconductors typically range from \(10^{13} / \mathrm{cm}^3\) to \(10^{18} / \mathrm{cm}^3\), depending on the application. In some devices (like transistors) you need several \(\mathrm{n}\)- and \(\mathrm{p}\)-type materials with different doping concentrations.

    \(\mathrm{Si}\) has a density of \(2.328 \mathrm{~g} / \mathrm{cm}^3\). What is the ratio of \(\mathrm{Si}\) atoms to dopant atoms in \(10^{13} / \mathrm{cm}^3\) doping and \(10^{18} / \mathrm{cm}^3\) doping?

    Answer

    To find the ratio between \(\mathrm{Si}\) atoms and dopant atoms, we need to find the number of \(\mathrm{Si}\) atoms per cubic \(\mathrm{cm}\). This is a straightforward stoichiometry problem:

    \[\dfrac{2.328 \mathrm{~g} \text{ Si}}{1 \mathrm{~cm}^3 \text{ Si}} \times \dfrac{1 \mathrm{~mol} \text{ Si}}{28 \mathrm{~g} \text{ Si}} \times \dfrac{6.022 \times 10^{23} \text { atoms Si}}{1 \mathrm{~mol} \text{ Si}}=\dfrac{5 \times 10^{22} \text { atoms Si}}{\mathrm{cm}^3} \nonumber\]

    Now we take the ratio of \(\mathrm{Si}\) atoms to dopant atoms, obtaining:

    a) \(5 \times 10^{22}: 10^{13}=5,000,000,000: 1\)

    b) \(5 \times 10^{22}: 10^{18}=50,000: 1\)
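    A short sketch of the same arithmetic (the molar mass of Si, about 28.09 g/mol, and Avogadro's number are assumed standard values):

```python
# Ratio of Si atoms to dopant atoms at two doping levels.
N_A = 6.022e23          # atoms/mol
density_Si = 2.328      # g/cm^3
molar_mass_Si = 28.09   # g/mol (standard value)

n_Si = density_Si / molar_mass_Si * N_A    # Si atoms per cm^3, about 5e22
for n_dopant in (1e13, 1e18):              # dopant atoms per cm^3
    print(f"{n_dopant:.0e}/cm^3 doping -> {n_Si / n_dopant:.1e} Si atoms per dopant atom")
```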


    2. You dope \(\mathrm{Ge}\) with \(2.43 \mathrm{mg} \mathrm{Mg}\).

    a. What kind of doping is this? What charge carriers are introduced?

    Answer

    \(\mathrm{p}\)-type doping; holes are introduced

    b. How many carriers does each substitution yield?

    Answer

    Each substitution yields 2 carriers: \(\mathrm{Mg}\) has 2 valence electrons while \(\mathrm{Ge}\) has 4, so each substituted \(\mathrm{Mg}\) atom contributes 2 holes.

    c. Calculate the number of charge carriers created by the addition of \(2.43 \mathrm{mg} \mathrm{Mg}\).

    Answer

    \[\begin{aligned}
    2.43 \mathrm{~mg} \times 10^{-3} \frac{\mathrm{g}}{\mathrm{mg}} \times \frac{1 \mathrm{~mol}}{24.3 \mathrm{~g}} &= 10^{-4} \mathrm{~mol} \\
    \text{2 carriers per substitution: } & 2 \times 10^{-4} \mathrm{~mol} \text{ holes} \\
    2 \times 10^{-4} \times 6.022 \times 10^{23} &= 1.2 \times 10^{20} \text { holes}
    \end{aligned} \nonumber\]

    d. How much \(\mathrm{Ge}\) should you start with if you want a charge carrier density of \(10^{16} / \mathrm{cm}^3\)?

    Answer

    \[\begin{aligned}
    10^{16} \text{ carriers} / \mathrm{cm}^3 &= \frac{1.2 \times 10^{20} \text{ holes}}{x} \\
    x &= 1.2 \times 10^4 \mathrm{~cm}^3 \mathrm{~Ge}
    \end{aligned} \nonumber\]
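    Parts (c) and (d) can be checked numerically; a minimal sketch (the molar mass of Mg and Avogadro's number are standard values):

```python
# Holes created by 2.43 mg of Mg in Ge, and the Ge volume needed for a target carrier density.
N_A = 6.022e23              # atoms/mol
mol_Mg = 2.43e-3 / 24.3     # g -> mol, equals 1e-4 mol
holes = 2 * mol_Mg * N_A    # 2 holes per Mg substitution, about 1.2e20
volume_Ge = holes / 1e16    # cm^3 of Ge for a density of 1e16 holes/cm^3
print(f"holes = {holes:.2e}, Ge volume = {volume_Ge:.2e} cm^3")
```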

    3. For \(13 \mathrm{~cm}^3\) of \(\mathrm{Si}\), calculate the number of milligrams of \(\mathrm{B}\) atoms needed in order to have \(3.091 \times 10^{17}\) carriers/\(\mathrm{cm}^3\). Assume that the dopant only substitutionally incorporates into the \(\mathrm{Si}\).

    Answer

    \(\mathrm{p}\)-type: carriers are holes

    \(13 \mathrm{~cm}^3 \times \dfrac{3.091 \times 10^{17} \text { carriers}}{1 \mathrm{~cm}^3} \times \dfrac{1 \text { atom }}{1 \text { carrier }} \times \dfrac{1 \mathrm{~mol}}{6.022 \times 10^{23} \text { atoms }} \times \dfrac{10.81 \mathrm{~g}}{1 \mathrm{~mol}} \times \dfrac{1000 \mathrm{mg}}{1 \mathrm{~g}}=0.0721 \mathrm{mg}\)

    Lecture 17: Metallic Bonds and Properties of Metals

    Summary

    Metals are formed from atoms that have partially-filled electronic bands: the valence electrons are loosely bound, forming a sea of electrons throughout the solid. The liberated valence electrons are free to move around the fixed cations: they are shared by all of the atoms in the solid, rather than fixed to one specific atom. The bonds that form between metal atoms are called metallic bonds. Unlike ionic or covalent bonds, where the electrons involved in the bond are closely bound up with the ions, metallic bonds don’t belong to one ion or another.

    Metals are characterized by their high electrical conductivity, high thermal conductivity, high heat capacity, ductility, and luster. Each of these properties can be correlated to the electronic structure and the sea of electrons picture.

    Because the electronic bands in metals overlap (there is no band gap), electrons can move freely through the closely-spaced energy states. If an electron at the surface is perturbed, it is easy for the energy to transfer across the solid: the Drude model for conduction relates electron drift to the applied field, yielding the conductivity. The free electrons also carry thermal energy, so solid metals transfer heat efficiently. The thermal conductivity of metals is directly proportional to the electrical conductivity through the Wiedemann-Franz law. The properties of metals can also be controlled via alloying. For example, bronze is a metal alloy of \(\mathrm{Cu}\) and \(\mathrm{Sn}\) in a 4:1 ratio. Though the presence of \(\mathrm{Sn}\) lowers the overall conductivity to 10% of pure copper, bronze is much more resistant to corrosion. Brass is a metal alloy of \(\mathrm{Cu}\) and \(\mathrm{Zn}\) in a 2:1 ratio. Though its conductivity is 28% of pure copper, brass is much more malleable than pure copper.
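    As a rough illustration of the Wiedemann-Franz law, here is a minimal sketch estimating the thermal conductivity of copper from its electrical conductivity (the Lorenz number and the room-temperature conductivity of copper are handbook values, not taken from this text):

```python
# Wiedemann-Franz law: kappa / sigma = L * T
L_lorenz = 2.44e-8   # W*ohm/K^2, Lorenz number (standard value)
sigma_Cu = 5.96e7    # S/m, electrical conductivity of copper near room temperature (handbook value)
T = 300.0            # K

kappa = L_lorenz * sigma_Cu * T
print(f"Estimated thermal conductivity of Cu: {kappa:.0f} W/(m*K)")  # ~440; measured is ~400
```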

    Two mechanical properties are shared by most metals: malleability and ductility. Both are a measure of how much a material can deform without breaking. Malleable materials can be compressed a lot without breaking, while ductile metals can be stretched a lot without breaking. High malleability and ductility allow metallic solids to be drawn into wires, or hammered into complex shapes. For example, a platinum rod 1 cm in diameter and 10 cm long can be drawn into \(\approx 2777 \mathrm{~km}\) of wire just by pulling. Surprisingly, these mechanical properties can be directly correlated to the electronic structure. As the solid is deformed and ions move around, the sea of electrons can easily move around, allowing atoms to rearrange without breaking enough bonds to lead to fracture.

    Periodic trends in metallic bond strength are directly related to electronic structure. Metallic bonds are typically weakest for elements with nearly empty (\(\mathrm{Cs}\)) or nearly full (\(\mathrm{Hg}\)) valence subshells, and strongest for elements with half-filled valence shells (\(\mathrm{W}\)). When the valence band is half-filled with electrons, the ratio of filled bonding states to filled antibonding states is maximized. In \(\mathrm{MO}\) theory, atomic orbitals combine to form bonding and antibonding states. In the band picture, the bottom of the valence band represents \(100 \%\) bonding character and the top of the band represents \(100 \%\) antibonding character. Therefore, half-filled valence bands yield the strongest metallic bonds. For transition metals, the atomic orbitals are so close in energy that they often overlap, creating wide energy bands with shared characteristics. Metals in groups 6-9 are most likely to have \(\approx\) half-filled valence bands, so the metallic bonds they form are strongest.

    Why this matters

    Corrosion of metals is the worst. The global cost of corrosion is estimated to be $2.5 trillion. Yes, trillion! This is equivalent to 3.4% of the global GDP in 2013. We don’t think much about corrosion in our day-to-day lives. We think about it as the rusting of the spokes of a bike wheel or some old-looking gears. But if a pipe corrodes enough, it can crack. If a boat corrodes enough, it can sink. These are extremely serious problems that involve not only a dramatic loss of money, but also a serious safety risk and potential loss of life. If you dig into the topic of corrosion, you’ll find that many industries have adopted what they call, “corrosion management systems.” What this really means is that they are aware it’s a big problem, and they’re going to try to do something about it. But what can they do? The answer to that question is, of course, chemistry. And because of chemistry, $500 billion annually is saved by preventing, or at least dramatically slowing, corrosion. Much more work needs to be done.

    [Figure]

    The automotive industry is an excellent example of an industry that has moved from minimal corrosion control in the 1970’s when the life of a car was typically set by the corrosion of the body and frame, to state-of-the-art corrosion control through advanced painting/coating technology and the use of corrosion-resistant materials. In this chapter I mentioned the use of alloying as a way to make metals more resilient, with the example of bronze which is copper with a dash of tin. This is exactly how anti-corrosion coatings are developed and they can come in the form of paints or additional metal alloy layers or treatment of existing metal surfaces. Large structural metals like those used in bridges or wind turbines are often treated with zinc and aluminum-based coatings to provide long-term corrosion prevention, while many steel and iron fasteners are coated with a thin layer of cadmium to block hydrogen absorption, which can lead to stress cracking. Nickel-chromium and cobalt-chromium alloys are often used as corrosion prevention coatings because they have very low levels of porosity which makes them extremely resistant to moisture, which in turn inhibits rusting. Oxide ceramics like alumina (which remember we’ve already discussed when we talked about ionic bonds!) can make excellent and strong coatings.

    The chemistry of corrosion is a highly active and extremely important area of research, and deeper understanding of the chemistry of how it happens, as well as how chemistry can be used to prevent it, represents a continued massive challenge and enormous opportunity.

    Why this employs

    How can metal make a job? Let’s go with an example that’s a big part of our future, and is actually really cool: 3D printing. Of metals. In a few short years, 3D printing of metals has gone from concept to fringe to near-mainstream. Many companies can now 3D print metal parts and many innovations are still happening at these companies. What kind of people do these companies hire? Those who know about the electron sea, of course!

    There are many companies in this space, including one of our very own, an MIT-based spin-out called Desktop Metal that formed in 2017. One of the exciting aspects of 3D metal printing is that there’s no one standard, at least not yet. So many different companies are working on pushing the frontiers of different technologies, from Desktop Metal which uses something they call Bound Metal Deposition, to Concept Laser which sinters metals with a laser and is now owned by GE, to Arcam (also now owned by GE) which uses an electron beam to melt the metal, to Xact Metal which uses something called Powder Bed Fusion, to ExOne which uses something called binder jetting, to Vader Systems which utilizes magnetic fields, to Cytosurge which claims nanometer resolution, to many more companies and many more approaches to the printing process. All of these approaches require continued development of new chemistries (“inks”) and new ways to manipulate those into tailored, complex, and high-resolution shapes.

    Example Problems

    1. Metals vs. semiconductors

    a) Why are metals better electrical conductors than semiconductors?

    Answer

    Metals do not have a band gap, so their electrons can freely conduct, giving rise to the “sea of electrons” effect

    b) How often do doped semiconductors reach the electrical conductivity levels of metals?

    Answer

    Doping in semiconductors is never enough to reach the conductivity levels of metals. This is because the concentration of dopants (the impurities that contribute electrons or holes to the bulk material) is extremely small and is not enough to alter the material’s band structure.

    2. Why is silver (\(\mathrm{Ag}\)) more electrically conductive than platinum (\(\mathrm{Pt}\))?

    Answer

    \(\mathrm{Ag}\) is more electrically conductive than \(\mathrm{Pt}\) because of atomic radius: the larger atom size leads to larger atomic spacing, which makes it easier for electrons to move.

    3. Rank the following metals in order of increasing metallic bond strength, and explain your reasoning:

    A. \( \mathrm{Co}\)

    B. \(\mathrm{Cd}\)

    C. \(\mathrm{W}\)

    D. \(\mathrm{Ni}\)

    Answer

    Metallic bonds tend to be weakest for elements that have nearly empty or nearly full valence shells. They are strongest for those with half-filled valence shells. Therefore,

    \(\mathrm{Cd}\) (weakest) \(<\) \(\mathrm{Ni}\) \(<\) \(\mathrm{Co}\) \(<\) \(\mathrm{W}\) (strongest)

    Lecture 18: The Perfect Solid: Crystals

    Summary

    [Figure: candidate unit cells (a)-(d) and how they tile]

    Crystalline solids possess long-range order: the arrangement of the bonds plays an important role in determining the properties of crystals. Each crystal has a unit cell, a repeating unit like a stamp that can be tiled into a pattern. Often, there are many unit cells that could be chosen for a given crystal; it’s conventional to select the smallest unit cell that contains all of the necessary information. In the figure, (a), (b), and (c) all show valid unit cells and how they tile, but (d) isn’t valid because when tiled, it doesn’t cover all the space: there are gaps. In 3D space, unit cells comprised of various atomic arrangements give information about crystalline solids. The unit cell also gives information about how atoms pack together. The packing fraction is a measure of how much of the 2D area of the unit cell is filled up by atoms:

    Packing fraction = (area occupied by atoms)/(total area available)

    In 3D space, there are only 7 distinct unit cells that can fill all of space with no voids. Bravais figured out how to arrange atoms into unit cells; it turns out there are only 14 unique Bravais lattices. In 3.091, we’ll focus on cubic unit cells, which have all sides the same length and all angles = 90 degrees. There are three distinct cubic unit cells.

    [Figure: the three cubic unit cells]

    The top row indicates the position of atom centers for each of the cubic lattices. Next, the atoms are shown as filling up the maximum amount of volume: this helps to visualize how many atoms are in each unit cell. Finally, each of the unit cells are shown in the context of a bigger section of a crystal. Several key metrics help determine the macroscopic properties of a crystal, including the atomic packing fraction (APF) and the coordination number. The atomic packing fraction is calculated in 3D space:

    APF = (# atoms in unit cell)*(volume of one atom)/(volume of unit cell)

    The volume of an atom can be approximated as the volume of a hard sphere: \(V=4 / 3 * \pi * r^3\). For each of the cubic lattices, the radius of the appropriate hard sphere can be related to the lattice constant (width of the unit cell). Finally, the coordination number, or the number of nearest neighbors, gives information about how many atoms are available to form bonds. The number of atoms in the unit cell is the sum of the parts of atoms shared between neighboring unit cells. For example, the body-centered cubic (BCC) unit cell has a full atom at the center and also an atom at each corner that is shared equally between 8 cells. Therefore, the total number of atoms in a BCC unit cell is 8*1/8+1=2 atoms/unit cell.

    Why this matters

    [Figure: solar spectrum at Earth's surface]

    Let’s go back to an application of semiconductors that I mentioned a few lectures ago but didn’t delve into. It’s about using light energy to excite electrons above the bandgap, and the fact that this means you can harness light as a form of electricity with the right materials. That’s called a solar cell, or a photovoltaic device. Here’s the energy we receive on planet earth from the sun. We saw this plot when we gave the example of the ozone molecule as a key reason we don’t receive much UV on the planet (since ozone absorbs it high up in the atmosphere). This time, I’m using the plot to show that not only does the intensity strongly depend on the wavelength of light, but much of the energy we receive comes either in the visible or the infrared parts of the spectrum. Naively, you may think the best solar cell would simply absorb everything; in the language of our bandgap and semiconductor lectures, that means it would have a very small bandgap, since the smaller the gap, the lower the energy of the photon can be to excite an electron above the gap. In a solar cell, once an electron is excited above the gap, that negative charge then gets extracted out of the device from the conduction band, while the positive hole gets extracted from the valence band. Those extracted charges can then do work (like charge your phone).

    [Figure: energy diagram of photon absorption and thermalization]

    Ah, but were it only so easy! The challenge is that once the electron is excited it very quickly “thermalizes” down all the way to the bottom of the conduction band. It’s a process that happens so quickly that there’s very little we can do about it (although there is an active area of research to extract “hot electrons” out of materials, before they can thermalize, but no working devices at this point). This thermalization means that the amount of energy the electron can have when it is ready to do work, no matter how high in energy the photon was that excited it, is equal to the energy of the band gap. That’s shown here in this energy diagram as the blue squiggly line. A photon of this blue energy would be absorbed by the material, and the excited electron wouldn’t lose any energy since it didn’t go higher than the bandgap. But if a higher frequency light is absorbed, like that purple squiggle, then the photon still creates an excited electron in the material, although any excess energy above the bandgap is lost, since the electron quickly drops down to the conduction band minimum before it can be extracted.

    On the other hand, if the photon that comes in has an energy lower than the bandgap, say that green squiggle in the diagram, then it won’t be absorbed at all (no states are present to be excited into). And here is why understanding the chemistry of solar cells is so important, for we have competing drivers: the lower the gap, the more of that spectrum can be absorbed, but the less energy the electron has to do work and the more energy lost to thermalization, while the higher the gap the more energy the electron has in that CBM, but fewer photons can be absorbed. This is because, as we learned, the semiconductor can only absorb light equal or greater in energy to its band gap. This trade-off means there’s an optimal bandgap for a solar cell made from one single material, and that optimal gap is right around 1.3 \(\mathrm{eV}\). That’s quite close to the bandgap of silicon, which is 1.1\(\mathrm{eV}\).

    Yes, but that is only if we have a perfect crystal! In fact, silicon could have a much lower bandgap, depending on how good of a crystal it is. If it has defects (these are interruptions in the lattice and stay tuned, that topic is coming soon!), then the bandgap can go way down, with the limit being that silicon could even become metallic if the crystallinity is messed up too much. This explains part of the reason why silicon has been expensive and also has to go onto glass as opposed to plastic substrates: namely, because a silicon solar cell has to be made very thick (100’s of microns).

    The truth of the matter is that silicon is a terrible light absorber. That's right, the solar cell material that dominates the world market at \(> 85\%\) is horrible at doing one of the most important things it needs to do: absorb sunlight. Silicon is still capable of absorbing, of course, but it's just inefficient, so it has to be made very thick in order to capture all of the light. Compare that to other materials that can absorb light efficiently, like a dye molecule, which could absorb all the sun's energy in 1000x thinner layers. The thickness of silicon is a double-whammy: first, it means the solar cell is brittle, cannot bend, and must go on glass, which makes the whole device heavy; and second, it means the excited charges have a much further distance to travel in the material before they can reach an electrode and be extracted for work. And that means high cost, since it's a lot of material that needs to be made as perfectly crystalline as possible. It can be (almost) done, but it's difficult, expensive, and requires huge fabrication plants. But getting that quality, that crystallinity over large distances (yes, microns are very large distances for an electron or an atom!), is worth it because crystallinity holds the key to the semiconductor properties. In this case, it's the electron transport we're talking about, and the more ordered and crystalline the lattice is, the better the charges can move around.

    Why this employs

    Since this lecture is all about crystallinity, let’s take a look at the employment opportunities in making crystals. It is true that you can dig up rocks from the ground and they may already have crystalline order in them, like the inside of this rock shown here. But most of the time these crystals are useless for technology since they contain all sorts of orientations all jumbled up together, they often have a lot of defects in them, and also they’re usually not pure elements but rather minerals containing the element of interest along with impurities (like oxygen). So the question is: how does one make as pure of a crystal as possible out of a single element? Let’s take silicon as an example. 

    [Figure: steps from sand to crystalline silicon]

    Here’s a figure of all the steps that it takes to turn sand into crystalline silicon. As you can see, there are a lot of them, and many involve high temperatures or chemical mixing. A lot of companies operate at different stages of this chart. For example, getting from sand to “metallurgical grade silicon” is something a lot of smaller companies do, like Mississippi Silicon, Elkem, or Silicon Materials, Inc., to name only a few. These are often companies with a few hundred employees so they may not have a large number of job postings, but there are many companies so if you do a deep dive into the full landscape of silicon producers you’ll likely find some good opportunities.

    Once a company has made high enough purity silicon, they can then mold it into an ingot. This also requires further purification and also again high temperatures. Most of the companies that make high-purity ingots are currently based in China, although there are others scattered around the world including in places like Japan, Germany, and Korea. And once there’s an ingot, it can be sliced into wafers that can then be made into devices. Here we have a lot of large companies specializing in this process, like Applied Materials or Lam Research, among many others.

    Example problems

    1. Calculate the atomic packing fraction (\(\mathrm{APF}\)) of a simple cubic unit cell.

    Answer

    Volume of unit cell \(=a^3=8 r^3\) (because lattice constant, \(\mathrm{a}=2 \mathrm{r}\))

    Volume of one atom \(=\frac{4}{3} \pi r^3\)

    Number of atoms in simple cubic unit cell \(=1\)

    Lattice constant, \(\mathrm{a}=2 \mathrm{r}\)

    \(\mathrm{APF}=(\) volume of one atom ) /( volume of the unit cell ) \(\mathrm{x}\) (number of atoms in a cell)

    \[\dfrac{\frac{4}{3} \pi r^3}{8 r^3} \times 1=\dfrac{\pi}{6}=52.4 \% \nonumber\]
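    The same calculation extends to the other two cubic cells; here is a short sketch (the BCC and FCC relations between atomic radius and lattice constant, \(\sqrt{3}a = 4r\) and \(\sqrt{2}a = 4r\), are the standard hard-sphere results):

```python
import math

# Atomic packing fraction for the three cubic lattices (hard-sphere model, r = 1).
# Each entry: (atoms per unit cell, lattice constant a in units of the atomic radius r)
cells = {
    "simple cubic":        (1, 2.0),                 # a = 2r
    "body-centered cubic": (2, 4.0 / math.sqrt(3)),  # sqrt(3)*a = 4r
    "face-centered cubic": (4, 4.0 / math.sqrt(2)),  # sqrt(2)*a = 4r
}

for name, (n_atoms, a) in cells.items():
    apf = n_atoms * (4.0 / 3.0) * math.pi / a**3     # n * (4/3 pi r^3) / a^3 with r = 1
    print(f"{name}: APF = {apf:.3f}")                # 0.524, 0.680, 0.740
```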

    2. Which of the three cubic structures do you think is more likely to form at high pressure? Why?

    Answer

    FCC is more likely than BCC, which is more likely than SC. A higher packing fraction means the atoms are packed closer together, which is favored at high pressure.

    3. \(\mathrm{CsCl}\) forms an interpenetrating SC lattice, which looks like a BCC lattice but with 2 different types of atoms. For this problem, we'll use a BCC lattice and assume that the \(\mathrm{Cl}^-\) ions are at the corners.

    [Figure: the CsCl unit cell]

    a) How many of each type of atom are in the unit cell?

    Answer

    1 \(\mathrm{Cl}^-\) ion and 1 \(\mathrm{Cs}^+\) ion

    b) What is the lattice constant, a (the length of one side of the unit cell), in terms of the atomic radii \(r_{C s}\) and \(r_{C l}\)? What is the value of a in \(\AA\)?

    Answer

    Close packed direction = body diagonal

    \[\begin{gathered}
    \sqrt{3} a=2 r_{Cs}+2 r_{Cl} \\
    \sqrt{3} a=2(2.38~\AA)+2(1.00~\AA)=6.76~\AA \\
    a=\frac{6.76~\AA}{\sqrt{3}}=3.90~\AA
    \end{gathered} \nonumber\]
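    A one-line check of that geometry, using the ionic radii given in the problem:

```python
import math

r_Cs, r_Cl = 2.38, 1.00                    # ionic radii in angstroms (given)
a = (2 * r_Cs + 2 * r_Cl) / math.sqrt(3)   # ions touch along the body diagonal
print(f"a = {a:.2f} angstroms")            # about 3.90
```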

    Lecture 19: Slicing a Crystal: the Miller Planes

    Summary

    A crystal lattice is a map that indicates how identical points are arranged in space. In the last lecture, we placed a single atom at each lattice site, and from there we could calculate atomic packing fraction. However, anything could be placed at the lattice sites, as long as the same thing is put at every lattice site. The lattice is “how” to repeat; “what” to repeat is called the basis. The unit cell serves as a way to think about where we are in the crystal.

    [Figure]

    For points in the lattice, Cartesian coordinates \((\mathrm{h}, \mathrm{k}, \mathrm{l})\) are used: conventionally, they're scaled by the lattice constant. A direction within the crystal is described by a vector: these vectors always start at the origin, so directions are completely described by the end point of the vector, which lies on a face of the unit cell. The notation is slightly different for crystallographic vectors: the end points are scaled to all be integers, and they're enclosed by brackets with no spaces, [\(\mathrm{hkl}\)]. For example, the vector from \((0,0,0)\) to \((1 / 2,1,0)\) in the diagram is the crystallographic direction [120]. This notation is called Miller Indices. A negative direction is indicated with an overbar: the vector from \((0,0,0)\) to \((-1,0,0)\) is written as \([\overline{1} 00]\). In a cubic system, all directions with the same set of Miller Indices are equivalent, regardless of order: these can be grouped into families of directions: these are indicated with angle brackets, \(\langle h k l\rangle .\)

    Miller indices can also be used to describe crystallographic planes, which can be determined by following 4 steps:

    1. Read off the points at which the plane intercepts the axes in terms of fraction of the unit cell length

    2. Take reciprocals of the intercepts

    3. Clear fractions (or divide by a common factor) to yield the smallest set of integer values

    4. Enclose in parentheses, with no commas (\(\mathrm{hkl}\))

    [Figure: the (110) plane]

    For example, this plane intercepts the \(x\)-axis at \(x=1\), the \(y\)-axis at \(y=1\), and it never crosses the \(z\)-axis, so the \(z\) intercept is \(z=\infty\). The reciprocals of these values are 1, 1, and 0, so this is the (110) plane.
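    The four-step recipe above is easy to automate; here is a minimal sketch (the helper name miller_indices is just for illustration):

```python
from fractions import Fraction
from functools import reduce
from math import gcd, inf

def miller_indices(intercepts):
    """Axis intercepts (in units of the lattice constant) -> Miller indices (hkl).
    Use math.inf for an axis the plane never crosses."""
    # Step 2: take reciprocals (the reciprocal of infinity is 0)
    recips = [Fraction(0) if x == inf else 1 / Fraction(x) for x in intercepts]
    # Step 3: clear fractions, then remove any common factor
    lcm = reduce(lambda a, b: a * b // gcd(a, b), [f.denominator for f in recips])
    ints = [int(f * lcm) for f in recips]
    common = reduce(gcd, [abs(i) for i in ints]) or 1
    return tuple(i // common for i in ints)

print(miller_indices((1, 1, inf)))               # (1, 1, 0): the (110) plane from the example
print(miller_indices((1, Fraction(1, 2), inf)))  # (1, 2, 0)
```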

    A family of planes in the cubic system contains all of the planes with the same set of Miller Indices but in any order. Families of planes are indicated by curly brackets, \(\{h k l\}\).

    The packing density in a given plane is calculated as

    Planar packing density = (# atoms in the plane)/(area of the plane) [atoms/area]

    Note that the planar packing density is not the same as the planar packing fraction, which is unitless! Finally, the distance between planes described by the same set of Miller indices, \(\mathrm{d}\), can be determined in terms of the lattice constant, a:

    \(d=\dfrac{a}{\sqrt{\left(h^2+k^2+l^2\right)}}\)
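    A quick sketch of that formula (the lattice constant here is just an illustrative number):

```python
import math

def d_spacing(a, h, k, l):
    """Interplanar spacing for (hkl) planes in a cubic lattice with lattice constant a."""
    return a / math.sqrt(h**2 + k**2 + l**2)

a = 3.90  # angstroms, illustrative value
for hkl in [(1, 0, 0), (1, 1, 0), (1, 1, 1)]:
    print(hkl, f"d = {d_spacing(a, *hkl):.2f} angstroms")
```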

    Why this matters

    [Figure]

    One of the key properties that depends on the density in a plane is the energy to cleave a crystal. This turns out to be extremely important since cleaving is just what it sounds like: cutting, or in other words, breaking. If a crystal can be cleaved in a certain crystallographic direction more easily than another, then that is the direction in which the crystal will break most easily. In fact, this is why crystals are so often “faceted,” even when they’re just dug right out of the ground. The Why This Matters for today isn’t about how a crystal looks, but rather how it breaks. That planar packing density can tell us which plane gives in to a strain put on a crystal, and therefore how the crystal winds up deforming under strain.

    [Figure]

    Take a look at the crystal being pulled here in this picture, before a tensile strain is applied and afterwards. Notice that when it's stretched, it has those tilted disc-like shapes. That's because it breaks along the Miller planes that have the lowest packing density, since those are most weakly bonded. Not only does this inform us as to when a crystalline material will break, but it gives us the atomic-scale picture of how it breaks and what shape it takes as it breaks. Now, a lot of solids may not have large differences in the energy required to break different Miller planes, so this type of phenomenon may not occur when they're under tensile strain. For many metals the bonds are not so directional and the material can simply deform easily because of that sea of electrons accommodating the stretch (remember that wire drawing example from Lecture 17?). Then there's the other end of the spectrum, where a material is so anisotropic that in one direction the planar density and bonding type are completely different than in the other directions. Graphite (pictured here) is a beautiful example of this.

    Thin long wires can be made out of graphite, too. In fact, carbon fibers, as they're called, make up a $1.7B market today, and it's a market that's growing strong. If we could just get the cost down by about a factor of 10 or so, then carbon fibers would be able to compete with materials like steel on cost while being stronger, lighter, and more resilient. Just for comparison, today's carbon fibers are 10x stronger than steel while being 5x lighter! If the cost of carbon fibers could be lowered, then we'd replace everything we could with them because they're just that good. Carbon fibers are not graphite, but they are like graphite, in the sense that they have large \(\mathrm{sp}^2\)-bonded planes of carbon that are aligned together. But in order to give them added strength, either the planes are somewhat crumpled, or they're bonded to one another through covalent links. Either way, it's the ability to engineer the chemistry of the Miller planes that makes all the difference.

    Why this employs

    Let’s keep going with the carbon fibers concept from the Why This Matter section above. As I mentioned, carbon fibers could completely revolutionize structural materials and make so many aspects of our lives lighter, stronger, and more efficient. But a lot of work is yet to be done on the fibers themselves to lower their cost, as well as to integrate them into other products, from fabrics to plastics to concrete to metal. A lot of companies are doing extremely interesting work related to these problems. And yes, it all starts from a solid understanding of those Miller planes and crystallographic directions.

    Hexcel has annual revenue over $2B and their flagship product is carbon fiber and composites. Note that both their name as well as logo are all about those hexagons in graphite! Hexcel is also cool because on their website they directly list student internship opportunities. Another company in this space is Solvay which makes thousands of products, many of which involve crystal planes of graphite. There’s Toray, with its origins in race cars, which lists carbon fibers as one of their 5 core businesses, and there’s Nippon Steel which has a whole subsidiary devoted to Granoc, a light-weight fabric or yarn made from carbon fibers. Its proprietary carbon fiber chemistry, they promise, is, “spinning the future.”

    Example Problems

    1. Give the Miller indices of the shaded plane:

    [Figure: shaded plane in a cubic unit cell]

    Answer

    [Figure: worked solution]

    2. Draw the following in a cubic unit cell:

    a) (\(\bar{2}10\))

    Answer

    [Figure: sketch of the \((\bar{2}10)\) plane]

    b) [\(1\bar{2}1\)]

    Answer

    [Figure: sketch of the \([1\bar{2}1]\) direction]

    Lecture 20: X-ray Generation

    Summary

    Atoms are too small to see with visible light, but x-rays have just the right range of wavelengths to image at the atomic scale. Wilhelm Rontgen first discovered x-rays by observing electrons striking a metal target in a cathode ray tube. Electrons were emitted from a heated filament and accelerated by an electric field inside the tube to very high velocities. When the high-speed electrons strike metal atoms, two kinds of x-rays are generated. Continuous x-ray radiation is caused by the deflection of an incoming electron as it interacts with a metal target atom: as the name suggests, the allowed wavelengths for the Bremsstrahlung (which translates to "braking radiation") are a continuous spectrum above a lower bound. The lower bound on wavelength (upper bound on energy) is set by the energy of the incoming electron, because the energy of the emitted x-ray cannot be greater.
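    To get a feel for that lower bound, here is a minimal sketch using the same energy-to-wavelength conversion that appears earlier in these notes (the 30 kV accelerating voltage is just an example value):

```python
# Shortest Bremsstrahlung wavelength: all of the incoming electron's kinetic energy
# goes into a single emitted photon.
V = 30e3                       # accelerating voltage in volts (example value)
E_eV = V                       # electron kinetic energy in eV (charge of 1 e)
lambda_min_nm = 1240 / E_eV    # E(eV) = 1240 / lambda(nm)
print(f"lambda_min = {lambda_min_nm * 1000:.1f} pm")   # about 41 pm
```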

    [Figure]

    The second type of x-ray that can be generated is a product of the electronic structure of the metal target atoms. When a high-energy electron is incident on a metal atom, it can knock out an electron in the inner core of the target atom, allowing a higher energy core electron to cascade down. If the electron cascades down from the \(n=2\) to the \(n=1\) energy level, the photon that’s emitted is called an alpha photon. These x-rays are discrete: they occur at a single wavelength that corresponds to the difference in energy levels.

    Crystallographers use a slightly different notation for the energy levels within an atom: the ground state (\(n=1\)) is called the \(\mathrm{K}\) shell, \(n=2\) is the \(\mathrm{L}\) shell, and \(n=3\) is the \(\mathrm{M}\) shell. Using this notation, a photon that is emitted via the process of an electron falling from \(n=2\) to \(n=1\) is called a \(\mathrm{K}\)-alpha photon. The energy difference between inner-shell electrons is huge; that's why the photons that are emitted are x-rays.

    [Figure]

Heavier elements have bigger differences between the energy levels, so the \(\mathrm{K}\)-alpha wavelength is smaller. It is therefore possible to determine which kind of atom the x-ray was emitted from just by knowing the wavelength: these x-rays are called characteristic x-rays.
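One way to see this trend quantitatively is a Moseley-style estimate: treat the target atom as hydrogen-like with an effective charge of \(Z-1\) (one core electron screens the nucleus), so \(E_{K\alpha} \approx 13.6\ \mathrm{eV}\,(Z-1)^2\left(\dfrac{1}{1^2}-\dfrac{1}{2^2}\right)\). The short sketch below applies this approximation to a few common target metals (the element list is just an illustration) and shows how the characteristic wavelength identifies the element:

```python
# Moseley-style estimate of K-alpha: hydrogen-like energy levels with an
# effective nuclear charge (Z - 1) to account for screening by the other
# 1s electron.
HC_EV_ANGSTROM = 12398.4  # h*c expressed in eV*angstrom

def k_alpha_wavelength(Z):
    """Approximate K-alpha wavelength (angstroms) for a target of atomic number Z."""
    energy_eV = 13.6 * (Z - 1) ** 2 * (1 / 1**2 - 1 / 2**2)
    return HC_EV_ANGSTROM / energy_eV

# Heavier target -> bigger level spacing -> shorter characteristic wavelength.
for name, Z in [("Cr", 24), ("Cu", 29), ("Mo", 42)]:
    print(f"{name} (Z={Z}): {k_alpha_wavelength(Z):.2f} angstroms")
# Roughly 2.30, 1.55, and 0.72 angstroms, close to the tabulated K-alpha lines.
```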

In this plot, the Bremsstrahlung and the characteristic x-rays are shown together, as they’d be measured. The first two peaks are the \(\mathrm{K}\)-beta (M to K) and \(\mathrm{K}\)-alpha (L to K) lines, since those correspond to the biggest energy transitions within the atom. Then come the \(\mathrm{L}\)-beta (N to L) and \(\mathrm{L}\)-alpha (M to L) lines, which correspond to much smaller energy gaps. The intensity can be changed by changing the number of electrons that are incident on the target. The lower-bound Bremsstrahlung wavelength can be changed by adjusting the kinetic energy of the incident electrons, though this wouldn’t change the location of the characteristic x-ray peaks at all. The only way to shift the characteristic x-rays is to change the target metal used to generate them. If the energy of the incident electrons is too low, though, no characteristic x-rays appear: the incident electrons must have enough energy to knock out a core electron to kick off the cascade.

    Why this matters

The discovery of x-rays, and the subsequent ability to fully control what wavelength of x-ray is generated, has had so much impact on modern life that it’s really tough to choose only one example for Why This Matters. But since I have to pick, I may as well go back to the roots of Röntgen himself and one of the very first ways he used his new rays: to take a picture of his wife! More precisely, of her hand. It’s the first x-ray picture ever taken, and it started a revolution (the black circle on the second finger from the left is her ring, by the way).

[Figure: the first x-ray image, the hand of Röntgen’s wife]

The key thing about x-rays, which Röntgen noticed right away, is that they are able to pass through soft body tissue like skin and flesh, but they are absorbed by denser material like bone. This means that if you shine x-rays on a body part, the bones inside cast shadows, which can be captured on a photographic plate. This was an incredible advance for medicine because, for the first time, a doctor could see inside a living patient without having to cut the patient open. Some of the earliest users of x-rays in medicine were military doctors, who could use them to locate bullets in wounded soldiers and guide their removal. As doctors saw how useful they could be, x-rays became central to diagnosis, and by the 1930s they had given rise to the specialized field of radiology.

[Figure: early medical use of x-rays]

Not only are x-rays routinely used in many aspects of medicine today, but the technology of x-ray imaging continues to improve dramatically. Beyond improved contrast in the images, one very recent breakthrough applies techniques originally developed for the Large Hadron Collider in Geneva, which is still the world’s largest particle accelerator at 27 km in circumference. With a combination of advanced shutters, cameras, and software, a completely new type of x-ray imaging is possible, with 3D and color images. Take a look at this very first x-ray picture of teeth, taken in 1896 with a 9-minute exposure time (!). Now look at a modern-day x-ray picture of teeth next to it. That’s an incredible improvement, and even more so when you realize that the picture on the right is fully 3D and a dentist can spin it around to see all angles.

[Figure: an 1896 dental x-ray next to a modern 3D dental x-ray]

    Why this employs

There’s a whole lot of X-ray generation that goes on in the world, and the medical imaging example I gave above in Why This Matters is only one part of it. X-rays are also used as scanners at airports, for quality control of industrial parts, to analyze the degradation of valuable paintings, and in the field now called “X-ray astronomy,” to name only a few examples. X-rays are used not only to see an illness like cancer, but also to treat it (“radiation therapy”). The medical market for X-rays alone is set to surpass $16.5B globally by 2025. And all of these uses for X-rays spell jobs.

There are many companies out there that make X-ray equipment. Siemens, for example, sells 6 different types of “digital X-ray equipment,” but that’s just one category: they’ve also got a robotic X-ray machine, two types of CT scanners (yes, those also use X-rays), and a fluoroscopy machine, which uses X-rays to examine movement. And that’s just one company! Siemens is a very large company, with the subsidiary “Siemens Healthineers” making this type of equipment; 41% of its US workforce of 13K people works in R&D, engineering, IT, or tech support. And there are many others: Philips, GE Healthcare, Hitachi Healthcare America, Medtronic, Samsung Medical Devices, Eizo, Xoran Technologies, United Imaging, Canon Medical Systems, and I’m still just getting started. All of these companies sell wide ranges of X-ray equipment, and they’re all competing with one another for customers, which means they’re constantly trying to improve their technology and differentiate what they’re selling from their competitors’ offerings. And beyond these large companies, there are many small companies looking to innovate in this field. For example, the company MARS Bioimaging, based in New Zealand, is commercializing the 3D color X-ray technology that came out of the Hadron Collider work.

    And that’s just the equipment side. There are also all the companies working on new software to analyze the images produced by all this X-ray equipment. For example, there are many start-ups applying AI to analyze the X-ray image data and in some cases AI has been shown to perform better than human doctors!

    Example problems

1. Look at the x-ray spectrum below. You can see that \(\lambda_{K\beta}\) is shorter than \(\lambda_{K\alpha}\). What about \(\lambda_{L\alpha}\)? Draw this peak on the spectrum.

[Figure: x-ray spectrum showing the \(K_\beta\) and \(K_\alpha\) peaks]

    Answer

\begin{aligned}
E_{K\alpha}&=13.6(6-1)^2\left(\dfrac{1}{1^2}-\dfrac{1}{2^2}\right)=255\ \mathrm{eV} \\
L_\alpha&: n_i=3,\ n_f=2 \\
E_{L\alpha}&=13.6(6-7.4)^2\left(\dfrac{1}{2^2}-\dfrac{1}{3^2}\right) \approx 3.7\ \mathrm{eV} \\
&E_{K\alpha}>E_{L\alpha} \quad\Rightarrow\quad \lambda_{K\alpha}<\lambda_{L\alpha}
\end{aligned}

So the \(L_\alpha\) peak sits at a longer wavelength than \(K_\alpha\), farther to the right on the spectrum.

2. What is the wavelength of \(K_\alpha\) x-rays produced by a copper source?

    Answer

\begin{gathered}
E=\dfrac{h c}{\lambda}=13.6\ \mathrm{eV}\,(Z-1)^2\left(\dfrac{1}{n_f^2}-\dfrac{1}{n_i^2}\right) \\
Z=29,\ n_f=1,\ n_i=2 \;\Rightarrow\; E \approx 8.0\ \mathrm{keV}, \quad \lambda=\dfrac{h c}{E} \approx 1.54\ \AA
\end{gathered}
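A quick numerical check of this answer (a minimal sketch using the same screened, hydrogen-like formula):

```python
# Example problem 2 check: Cu K-alpha wavelength.
HC_EV_ANGSTROM = 12398.4              # h*c in eV*angstrom

Z, n_f, n_i = 29, 1, 2                # copper target, L -> K (n=2 -> n=1) transition
E_eV = 13.6 * (Z - 1) ** 2 * (1 / n_f**2 - 1 / n_i**2)

print(f"E = {E_eV:.0f} eV")                               # about 8.0 keV
print(f"lambda = {HC_EV_ANGSTROM / E_eV:.2f} angstroms")  # about 1.55, i.e. ~1.54 angstroms
```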

     

    Further Reading

    Lecture 11: Molecular Orbitals

    • Video breaking down Molecular Orbital Theory:

    https://www.youtube.com/watch?v=P21OjJ9lDcs

    • Cool tool for visualizing molecular orbitals of water:

http://www.bcbp.gu.se/~orjan/qc/h2o/index.html

     

    Lecture 12: Hybridization in Molecular Orbitals

    • More detailed reading on hybridization:

    https://opentextbc.ca/chemistry/chapter/8-2-hybrid-atomic-orbital

     

    Lecture 13: Intermolecular interactions

    • Reviewing intra- vs. inter-molecular interactions:

https://www.khanacademy.org/test-prep/mcat/chemical-processes/covalent-bonds/a/intramolecular-and-intermolecular-forces

    • Hydrogen bonds–essential for life:

    https://www.cbsnews.com/news/nature-up-close-water-and-life-as-we-know

    • How Geckos Stick to Der Waals:

    https://www.sciencemag.org/news/2002/08/how-geckos-stick-der-waals

     

    Lecture 14: Phase diagrams

    • Tracing a phase diagram:

http://chemed.chem.purdue.edu/genchem/topicreview/bp/ch14/phase.php

    • More on phase changes, including interactive boiling point example:

https://courses.lumenlearning.com/boundless-chemistry/chapter/phase-changes/

    • On the thermodynamics of tempering chocolate:

https://acselementsofchocolate.typepad.com/elements_of_chocolate/TEMPERINGCHOCOLATE.html

     

    Lecture 15: Electronic bands in solids

    • More on band theory:

    http://hyperphysics.phy-astr.gsu.edu/hbase/Solids/band.html

    • More on semiconductor materials:

    https://www.pveducation.org/pvcdrom/pn-junctions/semiconductor-materia

    • Britney Spears’ Guide to Semiconductor Physics (one of the all-time best websites on the Internet):

    http://britneyspears.ac/lasers.htm

     

    Lecture 19: Slicing a crystal: the Miller Planes

    • Step-by-step Miller:

    https://www.youtube.com/watch?v=9-us_oENGoM

    • Beyond 3.091–crystallography and reciprocal space:

    https://www.youtube.com/watch?v=DFFU39A3fPY

     

    Lecture 20: X-ray generation

    • More details on x-ray generation:

    http://xrayweb.chem.ou.edu/notes/xray.html

    • X-rays in medical imaging:

https://www.hopkinsmedicine.org/health/treatment-tests-and-therapies/xrays


This page titled 5.2: CHEM ATLAS_2 is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Donald Sadoway (MIT OpenCourseWare).
