
Timber Wind: America’s Return to Nuclear Thermal Rockets

Hello, and welcome to Beyond NERVA! Today, we’re continuing to look at the pebble bed nuclear thermal rocket (check out the most recent blog post on the origins of the PBR nuclear thermal rocket here)!

Sorry it took so long to get this out… between the huge amount of time it took just to find the minimal references I was able to get my hands on, the ongoing COVID pandemic, and several IRL challenges, this took me far longer than I wanted – but now it’s here!

Today is special because it is covering one of the cult classics of astronuclear engineering, Project Timber Wind, part of the Strategic Defense Initiative (better known colloquially as “Star Wars”). This was the first time since Project Rover that the US put significant resources into developing a nuclear thermal rocket (NTR). For a number of reasons, Timber Wind has a legendary status among people familiar with NTRs, but isn’t well reported on, and a lot of confusion has built up around the project. It’s also interesting in that it was an incredibly (and according to the US Office of the Inspector General, overly) classified program, which means that there’s still a lot we don’t know about this program 30 years later. However, as one of the most requested topics I hear about, I’m looking forward to sharing what I’ve discovered with you… and honestly I’m kinda blown away with this concept.

Timber Wind was an effort to build a second stage for a booster rocket, to replace the second (and sometimes third) stage of anything from an MX ballistic missile to an Atlas or Delta booster. This could be used for a couple of different purposes: it could be used similarly to an advanced upper stage, increasing the payload capacity of the rocket and the range of orbits that the payload could be placed in; alternatively it could be used to accelerate a kinetic kill vehicle (basically a self-guided orbital bullet) to intercept an incoming enemy intercontinental ballistic missile before it deploys its warheads. Both options were explored, with much of the initial funding coming from the second concept, before the kill vehicle concept was dropped and the slightly more traditional upper stage took precedence.

Initially, I planned on covering both Timber Wind and the Space Nuclear Thermal Propulsion program (which it morphed into) in a single post, but the mission requirements, and even architectures, were too different to incorporate into a single blog post. So, this will end up being a two-parter, with this post focusing on the three early mission proposals for the Department of Defense (DOD) and Strategic Defense Initiative Organization (SDIO): a second stage of an ICBM to launch an anti-booster kinetic kill vehicle, an orbital transfer vehicle (basically a fancy, restartable second stage for a booster), and a multi-megawatt orbital nuclear power plant. The next post will cover when the program became more open, testing became more prevalent, and grander plans were laid out – and some key restrictions on operating parameters eliminated the first and third missions on this list.

Ah, Nomenclature, Let’s Deal with That

So, there’s a couple things to get out of the way before we begin.

The first is the name. If you do a Google/Yandex/etc. search for “Timber Wind,” you aren’t going to find much compared to “Timberwind,” but from what I’ve seen in official reporting it should be the other way around. The official name of this program is Project Timber Wind (two words), which according to the information I’ve been able to find is not unusual. The anecdotal evidence I have (and if you know more, please feel free to leave a comment below!) is that programs classified Top Secret: Special Access (as this was) had a name assigned by picking two random words via computer, whereas other Top Secret (or Q, or equivalent) programs didn’t necessarily follow this protocol.

However, when I look for information about this program, I constantly see “Timberwind,” not the original “Timber Wind.” I don’t know when this shift happened – it appears only rarely in official documentation, even in the post-cancellation reporting, but public reporting almost always uses the single-word variation. My personal head-canon is that it comes from readers used to digitally written documents working from typewritten reports – but that’s all that explanation is: my guess, which makes sense to me.

So there’s a disconnect between what most easily accessible sources use (one word) and the official reporting (two words). I’m going to use the original, because the only reason I’ve gotten as far as I have is by being weird about minor details in esoteric reports, so I’m not planning on stopping now (I will tag the single-word version on the blog, just so people can find this, but that’s as far as I’m going)!

The second is in nuclear reactor geometry definitions.

Reactors with discrete, generally small fuel elements fall into two categories: particle beds and pebble beds. Particles are small, pebbles are big, and where the line falls seems to be fuzzy. In modern contexts, the line seems to fall around the 1 cm diameter mark, although a formal definition has so far eluded me. However, “pebble bed” is also used more colloquially than “particle bed”: in common usage a particle bed is a type of pebble bed, but not vice versa.

In this context, the RBR and Timber Wind are both particle bed reactors, and I’ll call them such, but if a source calls the reactor a pebble bed (as many do), I may end up slipping and using that term.

OK, nomenclature lesson done. Back to the reactor!

Project Timber Wind: Back to the Future

For those in the know, Timber Wind is legendary. This was the first time after Project Rover that the US put its economic and industrial might behind an NTR program. While there had been programs in nuclear electric propulsion (poorly funded, admittedly, and mostly carried through creative accounting in NASA and the DOE), nuclear thermal propulsion had taken a back seat since 1972, when Project Rover’s continued funding was canceled, along with plans for a crewed Mars mission, a crewed base on the Moon, and a whole lot of other dreams that the Apollo generation grew up on.

There was another difference, as well. Timber Wind wasn’t a NASA program. Despite all the conspiracy theories, the assumptions based on the number of astronauts with military service records, and the number of classified government payloads that NASA has handled, it remains a civilian organization, with the goal of peacefully exploring the solar system in an open and transparent manner. The Department of Defense, on the other hand, is a much more secretive organization, and as such many of the design details of this reactor were more highly classified than is typical in astronuclear engineering as they deal with military systems. However, in recent years, many details have become available on this system, which we’ll cover in brief today – and I will be linking not only my direct sources but all the other information I’ve found below.

Also unlike NTR designs since the earliest days of Rover, Timber Wind was meant to act as a rocket stage during booster flight. Most NTR designs are in-space only: the reactor is launched into a stable, “nuclear-safe” (i.e. a long-term stable orbit with minimal collision risk with other satellites and debris) orbit, then after being mated to the spacecraft is brought to criticality and used for in-space orbital transfers, interplanetary trajectories, and the like. (Interesting aside, this program’s successor seems to be the first time that now-common term was used in American literature on the subject.)

Timber Wind was meant to support the Strategic Defense Initiative (SDI), popularly known as Star Wars. Started in 1983, this extensive program was meant to provide a ballistic missile shield (among other things) for the US, and was given a high priority and funding level across a number of programs. One of these, the Boost Phase Intercept vehicle, was meant to destroy an intercontinental ballistic missile during its boost phase using a kinetic impactor, which would either be launched from the ground or be pre-deployed in space. A kinetic kill vehicle is basically a set of reaction control thrusters designed to guide a small autonomous spacecraft into its target at high velocity and destroy it. They are typically small, very nimble, and limited only by the sensors and autonomous guidance software available to them.

In order to do this, the NTR would need to function as the second stage of a rocket, meaning that while the engine would be fired only after it had entered the lower reaches of space or the upper reaches of the atmosphere (minimizing the radiation risk from the launch), it would still very much be in a sub-orbital flight path at the time, and would have much higher thrust-to-weight ratio requirements as a result.

The engine that was selected was based on a design by James Powell at Brookhaven National Laboratory (BNL) in the late 1970s. He presented the design to Grumman in 1982, and from there it came to the attention of the Strategic Defense Initiative Organization (SDIO), the organization responsible for all SDI activities.

Haslett 1994

SDIO proceeded to break the program up into three phases:

  • Phase I (November 1987 to September 1989): verify that the particle bed reactor concept would meet the requirements of the upper stage of the Boost Phase Intercept vehicle, including the Preliminary Design Review of both the stage and the whole vehicle (an MX first stage, with the PBR second stage exceeding Earth escape velocity after being ignited outside the atmosphere)
  • Phase II (September 1989-October 1991 under SDIO; October 1991-January 1994, when it was canceled, under the US Air Force; scheduled completion 1999): perform all tests to support the ground test of a full PBR NTR system in preparation for a flight test, including fuel testing, final design of the reactor, design and construction of testing facilities, etc. Phase II was to conclude with a successful ground hot-fire test of the PBR NTR; however, the program was canceled before the ground test could be conducted.
    • Once the program was transferred to the US Air Force (USAF), the mission envelope expanded from an impactor’s upper stage to a more flexible, multi-mission on-orbit role, requiring a redesign and re-optimization. This is also when NASA became involved in the program.
    • Another change was that the program name shifted from Timber Wind to the Space Nuclear Thermal Propulsion program (SNTP), reflecting both the change in management as well as the change in the mission design requirements.
  • Phase III (never conducted, planned for 2000): Flight test of the SNTP upper stage using an Atlas II launch vehicle to place the NTR into a nuclear-safe orbit. Once on orbit, a number of on-orbit tests would be conducted on the engine, but those were not specified to any degree due to the relatively early cancellation of the program.

While the program offered promise, many factors combined to ensure it would not be completed: first, the hot-fire testing facilities required (two were proposed, one at San Tan and one at the Nevada National Security Site) would be incredibly expensive to construct; second, the Space Exploration Initiative was heavily criticized on cost grounds (a common problem for early-90s programs); and third, the Clinton administration cut many nuclear research programs across the federal departments in a very short period of time (the Integral Fast Reactor at Argonne National Laboratory was another program cut at about the same time).

The program was transferred to a combined USAF and NASA effort in 1991 and ended in 1994 under those auspices, with many hurdles successfully overcome. It remains an attractive design – one which has become a benchmark for pebble bed nuclear thermal rockets, and a favorite of the astronuclear community for speculating about what would be possible with this incredible engine.

To understand why it was so attractive, we need to go back to the beginning, in the late 1970s at Brookhaven National Laboratory in the aftermath of the Rotating Fluidized Bed Reactor (RBR, covered in our last post here).

The Beginning of Timber Wind

When we last left particle bed NTRs, the Rotating Fluidized Bed Reactor program had made a lot of progress on many of the fundamental challenges of the particle bed concept, but still faced many more. However, the team, including Dr. James Powell, remained very enthusiastic about the promise it offered – and conscious of the limitations of the system.

Dr. Powell continued to search for funding for a particle bed reactor (PBR) NTR program, and interest in NTR was growing again in both industry and government circles, but there were no major programs and funding was scarce. In 1982, eight years after the conclusion of the RBR, he had a meeting with executives in the Grumman Corporation, where he made a pitch for the PBR NTR concept. They were interested in the promise of higher specific impulse and greater thrust to weight ratios compared to what had become the legacy NERVA architecture, but there wasn’t really a currently funded niche for the project. However, they remained interested enough to start building a team of contractors willing to work on the concept, in case the US government revived its NTR program. The companies included other major aerospace companies (such as Garrett Corp and Aerojet) and nuclear contractors (such as Babcock and Wilcox), as well as subcontractors for many components.

At the same time, they tried to sell the concept of astronuclear PBR designs to potentially interested organizations: a 1985 briefing to the Air Force Space Division on using the PBR as a boost phase interceptor was an early, but major, presentation that would end up forming a major part of the initial Timber Wind architecture, and the next year the Air Force Astronautics Laboratory issued a design study contract for a PBR-based Orbital Transfer Vehicle (OTV, a kind of advanced upper stage for an already-existing booster). While neither of these contracts was big enough to fund a complete development program, they provided enough money to continue advancing the design of the PBR, which by now was showing two distinct applications: the boost phase interceptor and the OTV. There was also a brief flirtation with using the PBR architecture from Timber Wind as a nuclear electric power source, which we’ll examine as well, but this was never particularly well focused or funded, so it remains a footnote in the program.

Reactor Geometry

From Atomic Power in Space, INL 2015

Timber Wind was a static particle bed reactor, in the general form of a cylinder 50 cm long by 50 cm in diameter, using 19 fuel elements to heat the propellant in a folded flow path. Each fuel element was roughly cylindrical, with a 6.4 cm diameter, consisting of a cold frit (a perforated cylinder) made of stainless steel and a hot frit made of zirconium carbide (ZrC – although rhenium (Re) cladding would also meet thermal and reactivity requirements) coated carbon-carbon composite. Between the frits sat a total of 125 kg (15 liters) of 500-micron-diameter uranium/zirconium carbide fuel particles, each clad in two layers of different carbon compositions followed by ZrC cladding. These would be held in place mechanically, rather than by centrifugal force as in the RBR, reducing the mass of the system at the (materially quite significant) cost of developing a hot frit to mechanically contain the fuel. This is something we’ll cover more in depth in the next post.

From Atomic Power in Space, INL 2015

The propellant would then pass into a truncated-cone central void, which widens from the spacecraft end toward the nozzle end; this taper is called orificing. An interesting challenge with nuclear reactors is that the distribution of energy generation changes based on location within the reactor, called radial/axial power peaking (something that occurs in individual fuel elements, both in isolation and depending on their location in the core, and part of why refueling a nuclear reactor is such a complex process). In this design it was dealt with in a number of ways, one of the primary ones being individually adjusting the orificing of each fuel element to match its power generation and propellant flow rate.

Along these lines, another advantage of this type of core is the ability to precisely control the amount of fissile fuel in each fuel element along the length of the reactor, and along the radius of the fuel element. Since the fuel particles are so small, and each batch is manufactured as a small-batch process (even fueling a hundred of these fuel elements would only take 1,500 liters of particles, with the fissile component being a small percentage of that volume), a variety of fuel loading options were inherently available, and adjustments to power distribution were reasonably easy to achieve from reactor to reactor. This power leveling is done in virtually every fuel element in every reactor – it homogenizes power distribution in some reactors, and increases local power in other, more specialized ones (like some types of NTRs); here, an even power distribution along the length of the fuel element was desired – but it is a difficult and complex process with large fuel elements, due to the need to change how much uranium is in each portion of the fuel element. With a particle bed reactor, on the other hand, the uranium content doesn’t need to vary within each individual fuel particle: fueled and unfueled particles can be mixed in specific regions of the fuel element to achieve the desired power balance within the element. There was actually a region of unfueled particles in the last centimeter of the particle bed in each fuel element to maximize the efficiency of power deposition into the propellant, and the enrichment of the 235U fuel was varied from 70% to 93.5% throughout the fueled portions. This resulted in an incredibly flat power profile, with a ratio of only 1.01:1 from peak power density to average power density.
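
To make the power-leveling trick concrete, here's a minimal sketch of the idea – entirely my own illustration, with an assumed chopped-cosine flux shape and made-up zone counts, not the actual Timber Wind neutronics: if the fueled-particle fraction in each axial zone is made inversely proportional to the local flux, the product (local power) flattens out.

```python
# Hypothetical sketch: flattening axial power in a particle bed fuel element
# by diluting fueled particles with unfueled ones. All numbers are assumptions
# for illustration, not from the Timber Wind design.
import math

N = 10  # axial zones along the fuel element
# Assume a chopped-cosine axial flux shape, typical of a finite cylindrical core.
flux = [math.cos(math.pi * (i + 0.5 - N / 2) / (N + 2)) for i in range(N)]

# Uniform loading: local power ~ flux * (constant fuel fraction)
uniform = [f * 1.0 for f in flux]
peak_uniform = max(uniform) / (sum(uniform) / N)

# Zoned loading: fueled-particle fraction inversely proportional to local flux,
# so power = flux * fraction is nearly constant along the element.
fraction = [min(1.0, min(flux) / f) for f in flux]
zoned = [f * x for f, x in zip(flux, fraction)]
peak_zoned = max(zoned) / (sum(zoned) / N)

print(f"peaking factor, uniform loading: {peak_uniform:.2f}")
print(f"peaking factor, zoned loading:   {peak_zoned:.2f}")
```

In this toy model the zoned loading drives the peak-to-average ratio to essentially 1.0, which is the same effect the varied enrichment and unfueled-particle regions achieved in the real reactor.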

Since the propellant would pass from the outside of each fuel element to the inside, cooling the reactor was far easier, and lower-mass (or higher-efficiency) options for components such as the moderator became available. This is a benefit of what’s called a folded-flow propellant path, something that we’ve discussed before in some depth in our post on Dumbo, the first folded flow NTR concept [insert link]. In short, instead of heating the propellant as it passes down the length of the reactor, as in Rover, a folded flow injects the cold propellant laterally into the fuel element, heats it over a very short distance, and ejects it through a central void in the fuel element. This has the advantage of keeping the vast majority of the reactor very cool, eliminating many of the thermal-structural problems that Rover experienced, at the cost of more complex gasdynamic behavior. It also allowed lighter-weight materials, such as an aluminum pressure vessel and structural members, to further reduce the mass of the reactor.

Interestingly, many of these lower-mass options, such as a lithium-7 hydride (7LiH) moderator, were never explored, since the mass of the reactor came in at only about 0.6 tons, a very small number compared to the 10 ton payload, so it just wasn’t seen as a big enough issue to keep working on at that point.

Finally, because of the low (~1 hr) operating time of the reactor, radiation problems were minimized. With a reactor only shielded by propellant, tankage, and the structures of the NTR itself, it’s estimated that the NOTV would subject its payload to a total of 100 Gy of gamma radiation and a neutron fluence of less than 10^14 n/cm^2. Obviously, reducing this for a crewed mission would be necessary, but depending on the robotic mission payload, additional shielding may not be necessary. The residual radiation would also be minimal due to the short burn time, although if the reactor was reused this would grow over time.

In 1987, the estimated cost per unit (not including development and testing) was about $4 million – a surprisingly low number, due to the ease of construction, low thermal stresses requiring fewer exotic materials and solutions, and low uranium load requirements.

This design would continue to evolve throughout Timber Wind and into SNTP as mission requirements changed (this description is based on a 1987 paper linked below), and we’ll look at the final design in the next post.

For now, let’s move on to how this reactor would be used.

Nuclear Thermal Kinetic Kill Vehicle

The true break for the project came in the same year: 1987. This is when the SDIO picked the Brookhaven (and now Grumman) concept as their best option for a nuclear-enhanced booster for their proposed ground deployed boost phase interceptor.

I don’t do nuclear weapons coverage, in fact that’s a large part of why I’ve never covered systems like Pluto here, but it is something that I’ve gained some knowledge of through osmosis through interactions with many intelligent and well-educated people on social media and in real life… but this time I’m going to make a slight exception for strategic ballistic missile shield technology, because an NTR powered booster is… extremely rare. I can think of four American proposals that continued to be pursued after the 1950s, one early (apocryphal) Soviet design in the early 1950s, one modern Chinese concept, and that’s it! I get asked about it relatively frequently, and my answer is basically always the same: unless something significant changes, it’s not a great idea, but in certain contexts it may work. I leave it up to the reader to decide if this is a good context. (The list I can think of is the Reactor In-Flight Test, or RIFT, which was the first major casualty of Rover/NERVA cutbacks; Timber Wind; and for private proposals the Liberty Ship nuclear lightbulb booster and the Nuclear Thermal Turbo Rocket single stage to orbit concept).

So, the idea behind boost stage interception is that it targets an intercontinental ballistic missile and destroys the vehicle while it’s still gaining velocity – the earlier the interception that can destroy the vehicle, the better. There were many ideas on how to do this, including high powered lasers, but the simplest idea (in theory, not in execution) was the kinetic impactor: basically a self-guided projectile would hit the very thin fuel or oxidizer tanks of the ICBM, and… boom, no more ICBM. This was especially attractive since, by this time, missiles could carry over a dozen warheads, and this would take care of all of them at once, rather than a terminal phase interceptor, which would have to deal with each warhead individually.

The general idea behind Timber Wind was that a three-stage weapon would deliver a boost-phase kinetic kill vehicle. The first stage was based on that of the LGM-118 Peacekeeper (“MX,” or Missile – Experimental), which had been deployed just two years earlier. This solid-fueled ICBM first stage normally used a 500,000 lbf (2.2 MN) SR118 solid rocket motor, although it’s not clear whether this engine was modified in any way for Timber Wind. The second stage would be the PBR NTR Timber Wind stage, which would achieve Earth escape velocity to prevent reactor re-entry, and the third stage was the kinetic kill vehicle itself (about which I have not been able to find information).

Here’s a recent Lockheed Martin KKV undergoing testing, so you can get an idea of what this “bullet” looks and behaves like: https://www.youtube.com/watch?v=KBMU6l6GsdM

Needless to say, this would be a very interesting launch profile, and one that I have not seen detailed anywhere online. It would also be incredibly challenging to

  1. detect the launch of an ICBM;
  2. counter-launch even as rapid-fire-capable a missile as a Peacekeeper;
  3. provide sufficient guidance to the missile in real-time to guide the entire stack to interception;
  4. go through three staging events (two of which would occur at greater than Earth escape velocity!);
  5. guide the kinetic kill vehicle to the target with sufficient maneuvering capability to intercept the target;
  6. and finally have a reasonably high chance of mission success, which required both that the reactor fly off into a heliocentric orbit and that the kinetic kill vehicle impact the target booster

all before the second (or third) staging event for the target ICBM (i.e. before warhead deployment).

This presents a number of challenges to the designers: thrust-to-weight ratio is key to a booster stage, something that to this point (and even today) NTRs struggle with – mostly due to shielding requirements for the payload.

There simply isn’t a way to mitigate gamma radiation without high-atomic-number nuclei to absorb and re-emit those high-energy photons enough times that lighter shielding material can stop or deflect the great-great-great-…-great-grand-daughter photons before they reach sensitive payloads, whether crew or electronics. However, electronics are far less sensitive than humans to this sort of irradiation, so right off the bat this program had an advantage over Rover: there weren’t any people on board, so shielding mass could be minimized.

Ed. Note: figuring out shielded T/W ratio in this field is… interesting to say the least. It’s an open question whether reported T/W includes anything but the thrust structure (i.e. no turbopumps and associated hardware, generally called the “power pack” in rocket engineering), much less whether it includes shielding – and the amount of necessary shielding is another complex question which changes with time. Considering the age of many of these studies, and the advances in computational capability to model not only the radiation being emitted from the reactor vessel but the shielding ability of many different materials, every estimate of required shielding must be taken with 2-3 dump trucks of salt!!! Given that shielding is an integral part of the reactor system, this makes pretty much every T/W estimate questionable.

One of the major challenges of the program, apparently, was to ensure that the reactor would not re-enter the atmosphere, meaning that it had to achieve Earth orbit escape velocity, while still able to deploy the third stage kinetic kill vehicle. I’ve been trying to figure out this staging event for a while now, and have come to the conclusion that my orbital mechanics capabilities simply aren’t good enough to assess how difficult this is beyond “exceptionally difficult.”

However, details of this portion of the program were more highly classified than even the already-highly-classified program, and incredibly few details are available about this portion in specific. We do know that by 1991, the beginning of Phase II of Timber Wind, this portion of the program had been de-emphasized, so apparently the program managers also found it either impractical or no longer necessary, focusing instead on the Nuclear Orbital Transfer Vehicle, or NOTV.

PBR-NOTV: Advanced Upper Stage Flexibility

NOTV Mockup, Powell 1987

At the same time as Timber Wind was gaining steam, the OTV concept was going through a major evolution into the PBR-NOTV (Particle Bed Reactor – Nuclear Orbital Transfer Vehicle). This was another interesting concept, and one which played around with many concepts that are often discussed in the astronuclear field (some related to pebble bed reactors, some related to NTRs), but are almost never realized.

The goals were… modest…

  1. ~1000 s isp
  2. multi-meganewton thrust
  3. ~50% payload mass fraction from LEO to GEO
  4. LEO to GEO transfer time measured in hours, burn time measured in minutes
  5. Customizable propellant usage to change thrust level from same reactor (H2, NH3, and mixtures of the two)
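
It's worth sanity-checking that goal list with the ideal rocket equation. This is my own back-of-envelope arithmetic, not a calculation from the program documents: I'm assuming a one-way LEO-to-GEO delta-v of roughly 4.2 km/s and a dry-stage fraction of 15% purely for illustration.

```python
# Quick sanity check of the ~50% LEO-to-GEO payload mass fraction goal,
# using the ideal rocket equation. The delta-v and dry-mass fraction are
# my assumptions, not program figures.
import math

g0 = 9.80665     # m/s^2, standard gravity
isp = 1000.0     # s, the stated Isp goal
dv = 4200.0      # m/s, rough one-way LEO-to-GEO delta-v (assumed)

mass_ratio = math.exp(-dv / (isp * g0))  # final mass / initial mass
print(f"final/initial mass: {mass_ratio:.2f}")

# If the dry stage (reactor, tanks, structure) is ~15% of initial mass
# (an assumption for illustration), the remaining payload fraction is:
dry_fraction = 0.15
payload_fraction = mass_ratio - dry_fraction
print(f"payload fraction: {payload_fraction:.2f}")
```

With those assumptions the stage keeps about 65% of its initial mass after the burn, leaving roughly half the initial mass for payload – so the ~50% goal is aggressive but internally consistent with a ~1000 s engine.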

These NOTVs were designed to be the second stage of a booster, similar to the KKV concept we discussed above, but rather than deliver a small kinetic impactor and then leave the cislunar system, these would be designed to place payloads into specific orbits (low Earth orbit, or LEO, mid-Earth orbit, or MEO, and geostationary orbit, GEO, as well as polar and retrograde orbits) using rockets which would normally be far too small to achieve these mission goals. Since the reactor and nozzle were quite small, it was envisioned that a variety of launch vehicles could be used as a first stage, and the tanks for the NTR could be adjusted in size to meet both mission requirements and launch vehicle dimensions. By 1987, there was even discussion of launching it in the Space Shuttle cargo bay, since (until it was taken critical) the level of danger to the crew was negligible due to the lack of oxidizer on board (a major problem facing the Shuttle-launched Centaur with its chemical engine).

There were a variety of missions that the NOTV was designed around, including single-use missions which would go to LEO/MEO/GEO, drop off the payload, and then go into a graveyard orbit for disposal, as well as two way space tug missions. The possibility of on-orbit propellant reloading was also discussed, with propellant being stored in an orbiting depot, for longer term missions. While it wasn’t discussed (since there was no military need) the stage could easily have handled interplanetary missions, but those proposals would come only after NASA got involved.

Multiple Propellants: a Novel Solution to Novel Challenges with Novel Complications

In order to achieve these different orbits, and to account for the orbital mechanics of launching satellites into particular orbits, a novel scheme for adjusting both thrust and specific impulse was devised: use a more flexible propellant scheme than cryogenic H2 alone. In this case, the proposal was to use NH3, H2, or a combination of the two. It was observed that the most efficient way to use the two-propellant mode was to burn the NH3 first, followed by the H2, since thrust is more important earlier in the booster flight profile. One paper observed that in a Hohmann transfer, the first part of the perigee burn would use ammonia, followed by hydrogen to finish the burn (and, I presume, to circularize the orbit at the end).

When pure ammonia was used, the specific impulse of the stage dropped to only about 500 s (compared to the 200-300 s of most second stages), but the thrust would double, from 10,000 lbf to 20,000 lbf. By the time the gas passed into the nozzle, the ammonia would have effectively completely dissociated (2NH3 → N2 + 3H2).
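
Those two numbers hang together nicely, and we can check it ourselves: at the same chamber temperature, exhaust velocity (and thus Isp) scales roughly as the inverse square root of the mean molar mass of the exhaust. This is my own back-of-envelope check, not from the source papers.

```python
# Check that ~500 s on ammonia is consistent with ~1000 s on hydrogen:
# Isp ~ sqrt(T / M), so at fixed temperature Isp scales as sqrt(1 / M).
import math

M_H2 = 2.016                                   # g/mol
M_NH3_dissociated = (28.013 + 3 * 2.016) / 4   # 2 NH3 -> N2 + 3 H2, 4 mol total

isp_h2 = 1000.0  # s, the stated hydrogen performance
isp_nh3 = isp_h2 * math.sqrt(M_H2 / M_NH3_dissociated)
print(f"mean exhaust molar mass on NH3: {M_NH3_dissociated:.2f} g/mol")
print(f"estimated Isp on NH3: {isp_nh3:.0f} s")
```

The fully dissociated ammonia exhaust averages about 8.5 g/mol, giving an estimated Isp in the high 400s – right in line with the quoted ~500 s.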

One of the main advantages of the composite system is that it significantly reduced the propellant volume needed for the NTR, a key consideration for some of the boosters that were being investigated. In both the Shuttle and Titan rockets, center of gravity and NTR+payload length were a concern, as was volume.

Sadly, there was also a significant (5,000 lb) decrease in payload advantage over the Centaur when using NH3 instead of pure H2, though the overall thrust budget could be maintained.

There are quite a few complications to consider in this design. First, hydrogen behaves very differently from ammonia in a turbopump, due not only to density but also to compressibility: while NH3 is minimally compressible, meaning it can be treated as having a constant volume at a given pressure and temperature while being accelerated by the turbopump, hydrogen is INCREDIBLY compressible, which drives much of the difficulty in designing the power pack (turbopumps, turbines, and supporting hardware of a rocket) for a hydrogen system. It is likely (although not explicitly stated) that at least two turbopumps and two turbines would be needed for this scheme, meaning increased system mass.

Next is chemical sensitivities and complications from the different propellants: while NH3 is far less reactive than H2 at the temperatures an NTR operates at, it nevertheless has its own set of behaviors which have to be accounted for in both chemical reactions and thermal behavior. Ammonia is far more opaque to radiation than hydrogen, for instance, so it’ll pick up a lot more energy from the reactor. This in turn will change the thermal reactivity behavior, which might require the reactor to run at a higher power level with NH3 than it would with H2 to maintain reactor equilibrium.

This leads us neatly into the next behavioral difference: NH3 expands less than H2 when heated to the same temperature, but at these higher temperatures the molecule itself may (or will) start to dissociate, as the thermal energy in the molecule exceeds the strength of its covalent bonds. This means you’ve now got monatomic hydrogen and various partially-deconstructed nitrogen complexes with different masses and densities to deal with – although this dissociation does decrease the propellant’s mean molecular mass, increasing specific impulse, and none of the constituent atoms are solids, so plating material onto your reactor won’t be a concern. These gasdynamic differences have many knock-on effects though, including engine orificing.
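
The isp benefit of dissociation comes from that drop in mean molecular mass: 2 NH3 (17 g/mol each) becomes N2 + 3 H2 (averaging 8.5 g/mol), and ideal exhaust velocity scales as √(T/M). A rough sketch, ignoring changes in the specific heat ratio and any recombination in the nozzle:

```python
import math

M_NH3 = 17.03  # g/mol
# 2 NH3 -> N2 + 3 H2: two moles become four, with the same total mass
M_DISSOC = 2 * M_NH3 / 4  # mean molecular mass of the N2 + 3 H2 mix

# For an ideal nozzle at fixed chamber temperature, ve ~ sqrt(T / M),
# so full dissociation improves exhaust velocity by:
gain = math.sqrt(M_NH3 / M_DISSOC)
print(round(gain, 3))  # 1.414, i.e. a ~41% isp gain at the same temperature
```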

See how the top end of the fuel element’s central void is so much narrower than the bottom? One of the reasons for this is that the propellant is hotter – and therefore less dense – at the bottom (it’s also because as you travel down the fuel element more and more propellant is being added). This is something you see in prismatic fuel elements as well, but it’s not something I’ve seen depicted well anywhere so I don’t have as handy a diagram to use.

This taper is called “orificing,” and is used to balance the propellant pressure within an NTR. It depends on the thermal capacity of the propellant, how much it expands, and how much pressure is desired at that particular portion of the reactor – and the result of these calculations is different for NH3 and H2! So some compromises would have to be reached in this case as well.
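
The effect orificing compensates for can be sketched with the continuity equation: at a given mass flow and channel velocity, the required flow area scales inversely with gas density, and for an ideal gas at constant pressure that means it scales with temperature. The numbers below are purely illustrative, not from the source:

```python
R = 8.314  # J/(mol K)

def channel_area(mdot, temp_k, pressure_pa, mol_mass_kg, velocity):
    """Flow area from continuity: A = mdot / (rho * v), with ideal-gas rho."""
    rho = pressure_pa * mol_mass_kg / (R * temp_k)
    return mdot / (rho * velocity)

# Illustrative numbers only: H2 at 7 MPa, 1 kg/s per element, 100 m/s
a_cold = channel_area(1.0, 300, 7e6, 2.016e-3, 100)   # inlet end
a_hot = channel_area(1.0, 2700, 7e6, 2.016e-3, 100)   # outlet end

# At constant pressure the area requirement scales with temperature:
print(round(a_hot / a_cold, 1))  # 9.0
```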

Finally, the tankage for the propellant is another complex question. The H2 has to be stored at a much lower temperature than the NH3, so a common bulkhead between the tanks simply wouldn’t be possible – the hydrogen would freeze the ammonia. This could lead to a failure mode similar to what happened to SpaceX’s Falcon 9 in September 2016, when a helium tank became super-chilled and ruptured on the pad, leading to the loss of the vehicle. Of course, the details would be different, but the danger is the same. This leads to the necessity for a complex tankage system, in addition to the power pack problems we discussed earlier.

All of this leads to a heavier and heavier system, with more compromises overall, and with a variety of reactor architectures being discussed it was time to consolidate the program.

Multi-Megawatt Power: Electricity Generation

While all these studies were going on, other portions of SDIO were also studying astronuclear power systems. The primary electric power system was the SP-100, a multi-hundred-kilowatt power supply using technology that had evolved out of the SNAP reactor programs of the 1960s and 70s. While this program was far along in its development, it was over budget, delayed, and simply couldn’t provide enough power for some of the more ambitious projects within SDIO. Because of this, SDIO (briefly) investigated higher-power reactors for its more ambitious – and power-hungry – on-orbit systems.

Power generation was something that was often discussed for pebble bed reactors – the same properties that make the concept phenomenal for nuclear thermal rockets also make it a very attractive high temperature gas cooled reactor (HTGR): the high heat transfer rates reduce the size of the reactor, while the pebble bed allows for very high gas flow rates (necessary due to the low thermal capacity of the coolant in an HTGR). To generate power, the gas doesn’t go through a nozzle but instead through a gas turbine – the Brayton cycle. This has huge efficiency advantages over the thermoelectric generators used in SP-100, meaning the same size reactor can generate much more electricity – but this would definitely not be the same size reactor!

The team behind Timber Wind (including the BNL, SNL and B&W teams) started discussing both electric generation and bimodal nuclear thermal and nuclear electric reactor geometry as early as 1986, before SDIO picked up the program. Let’s take a look at the two proposals by the team, starting with the bimodal proposal.

Particle Bed BNTR: A Hybrid of a Hybrid

Powell et al 1987

The bimodal NTR (BNTR) system never gained any traction, despite being a potentially valuable addition to the NOTV concept. The likely reason is the combination of the increased complexity and mass of the BNTR compared to the design that was finally selected for Timber Wind, but it was interesting to the team, and they figured someone else might be interested as well. This design used the same coolant channels for both the propellant and the coolant, which in this case was He. This allowed for similar thermal expansion characteristics and mass flow in the coolant compared to the propellant, while minimizing both corrosion and gas escape challenges.

Horn et al 1987

A total of 37 fuel elements, similar to those used on Timber Wind, were placed in a triangular configuration, surrounded by zirconium hydride moderator, with twelve control rods for reactivity control. Unusually for a power generation system, this concept used a combination of a low-power, closed-loop coolant system (using He) and a high-power, open-loop system using H2, which would be vented out into space through a nozzle (this second option was limited to about 30 minutes of high-power operation before exhausting the H2 reserves). A pair of He Brayton turbines and a radiator were integrated into the BNTR structure. The low-power system was designed to operate for “years at a time,” producing 555 kWe, while the high-power system was rated to 100 MWe in either rapid-ramp or burst mode.

Horn et al 1987

However, due to the very preliminary nature of this design very few things are completely fleshed out in the only report on the concept that I’ve been able to find. The images, such as they are, are also disappointingly poor in quality, but provide at least a vague idea of the geometry and layout of the reactor:

Horn et al 1987

Multi-Megawatt Steady State and Burst Reactor Proposal

By 1989, two years into Timber Wind, SDIO wanted a powerful nuclear reactor to provide two different power configurations: a steady-state, 10 MWe reactor with a one-year full-power lifetime, which was also able to provide bursts of up to 500 MW for long enough to power neutral particle beams and free electron lasers. A variety of proposals were made, including an adaptation of Timber Wind’s reactor core, an adaptation of a NERVA A6-type core (the same family of NERVA reactors used in XE-Prime), a Project Pluto-based core, a hybrid NERVA/Pluto core, a larger, pellet-fueled reactor, and two rarer types of fuel: a wire core reactor and a foam-fueled reactor. This was in addition to both thermionic and metal Rankine power conversion systems.

The designs for a PBR-based reactor, though, were very different from the Timber Wind reactor. While using the same TRISO-type fuel, they bore little resemblance to the initial reactor proposal. Both open and closed cycle concepts were explored.

However, this concept, while considered promising, was passed over in favor of more mature fuel forms under different reactor configurations, namely a NERVA-derived gas-cooled reactor.

Finding information about this system is… a major challenge, and one that I’m continuing to work on. This is the best summary I’ve been able to assemble after over a week of searching for source material which, as far as I can tell, is still classified or has never been digitized – so, as unsatisfying a summary as this is, I’m going to leave it here for now.

When I come back to nuclear electric concepts, we’ll return to this study. I’ve got… words… about it, but at the present moment it’s not something I’m comfortable enough to comment on (within my very limited expertise).

Phase I Experiments

The initial portion of Timber Wind, Phase I, wasn’t just a paper study. Due to the lack of experience with PBR reactors, fuel elements, and integrating them into an NTR, a pair of experiments were run to verify that this architecture was actually workable, with more experiments being devised for Phase II.

Sandia NL ACRR, image DOE

The first of these tests was PIPE (Pulse Irradiation of a Particle Bed Fuel Element), a test of the irradiation behavior of the PBR fuel element, divided into two testing regimes in 1988 and 1989 at Sandia National Laboratory’s Annular Core Research Reactor (ACRR) using fuel elements manufactured by Babcock and Wilcox. While the ACRR prevented the fuel elements from reaching the power density desired for the full PBR, the data indicated that the optimism about the potential power densities was justified. Exhaust temperatures were close to those needed for an NTR, so the program continued to move forward. Sadly, there were some manufacturing and corrosion issues with the fuel elements in PIPE-II, leading to some carbon contamination in the test loop, but this didn’t impact the ability to gather the necessary data or reduce the promise of the system (it just created more work for the team at SNL).

A later test, PIPET (Particle Bed Reactor Integral Performance Tester) began undergoing preliminary design reviews at the same time, which would end up consuming a huge amount of time and money while growing more and more important to the later program (more on that in the next post).

The other major test to occur at this time was CX1, or Critical Experiment 1.

Carried out at Sandia National Laboratory, CX1 was a novel configuration of prototypic fuel elements and a non-prototypical moderator to verify the nuclear worth of fuel elements in a reactor environment and then conduct post-irradiation testing. This sort of testing is vitally important to any new fuel element, since the computer modeling used to estimate reactor designs requires experimental data to confirm the required assumptions used in the calculations.

This novel architecture looked nothing like an NTR, since it was a research test-bed. In fact, because it was a low-power system there wasn’t much need for many of the support structures a nuclear reactor generally uses. Instead, it used 19 fuel elements placed within polyethylene moderator plugs, which were surrounded by a tank of water for both neutron reflection and moderation. This was used to analyze a host of different characteristics, from prompt neutron production (since the delayed neutron behavior would be dependent on other materials, this wasn’t a major focus of the testing) to the initial criticality and excess reactivity produced by the fuel elements in this configuration.

CX-1 was the first of two critical experiments carried out using the same facilities in Sandia, and led to further testing configurations, but we’ll discuss those more in the next post.

Phase II: Moving Forward, Moving Up

With the success of the programmatic, computational and basic experiments in Phase I, it was time for the program to focus on a particular mission type, prepare for ground (and eventual flight) testing, and move forward.

This began Phase II of the program, which would build on the foundation of Phase I until a flight test could be flown. By this point, ground testing would be completed, and the program would be in a roughly similar position to NERVA after the XE-Prime test.

Phase II began in 1990 under the SDIO, and would continue under their auspices until October 1991. The design was narrowed further, focusing on the NOTV concept, which was renamed the Orbital Maneuvering Vehicle.

Many decisions were made at this point which I’ll go into more in the next post, but some of the major decisions were:

  1. 40,000 lbf (~178 kN) thrust level
  2. 1,000 MWt power level
  3. Hot bleed cycle power pack configuration
  4. Thrust-to-weight ratio of 20:1
  5. Initial specific impulse estimate of 930 s
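
As a sanity check on these figures (unit conversions only, using nothing beyond the list above), the implied propellant flow and engine mass work out to roughly:

```python
G0 = 9.80665   # m/s^2
LBF = 4.44822  # N per lbf

thrust = 40_000 * LBF  # ~178 kN
isp = 930              # s, initial estimate from the list above
power = 1_000e6        # W thermal

mdot = thrust / (isp * G0)        # propellant mass flow, kg/s
engine_mass = thrust / (20 * G0)  # from the quoted 20:1 thrust-to-weight

print(round(mdot, 1))      # ~19.5 kg/s of hydrogen
print(round(engine_mass))  # ~907 kg engine mass
# specific energy deposited in the propellant:
print(round(power / mdot / 1e6, 1))  # ~51.3 MJ/kg
```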

While this is a less ambitious reactor, it could be improved as the program matured and certain challenges, especially in materials and reactor dynamics uncertainties, were overcome.

Another critical experiment (CX2) was conducted at Sandia, not only further refining the nuclear properties of the fuel but also demonstrating a unique control system, called a “Peek-A-Boo” scheme. Here, revolving rings made up of aluminum and gadolinium surrounded the central fuel element, and would be rotated to either absorb neutrons or allow them to interact with the other fuel elements. The test was promising (the worth of the system was $1.81 closed and $5.02 open, both close to calculated values), but this system would not end up being used in the final design.

Changing of the Guard: Timber Wind Falls to Space Nuclear Thermal Propulsion

Even as Timber Wind was being proposed, tensions with the USSR had been falling. By the time it got going in 1987, tensions were at an all-time low, reducing the priority of the SDIO mission. Finally, the Soviet Union fell, eliminating the need for the KKV concept.

At the same time, the program was meeting its goals (for the most part), and showed promise not just for SDIO but for the US Air Force (who were responsible for launching satellites for DOD and intelligence agencies) as well as NASA.

1990 was a major threshold year for the program. After a number of Senate-requested assessments by the Defense Science Board, as well as assessment by NASA, the program was looking like it was finding a new home, one with a less (but still significantly) military-oriented focus, and with a civilian component as well.

The end of Timber Wind would come in 1991. Control of the program would transfer from SDIO to the US Air Force, which would locate the programmatic center of the project at the Phillips Research Laboratory in Albuquerque, NM – a logical choice due to the close proximity of Sandia National Lab where much of the nuclear analysis was taking place, as well as being a major hub of astronuclear research (the TOPAZ International program was being conducted there as well). Additional stakes in the program were given to NASA, which saw the potential of the system for both uncrewed and crewed missions from LEO to the Moon and beyond.

With this, Timber Wind stopped being a thing, and the Space Nuclear Thermal Propulsion program picked up basically exactly where it left off.

The Promise of SNTP

With the demise of Timber Wind, the Space Nuclear Thermal Propulsion program gained steam. Being a wider collaboration between different portions of the US government, both civil and military, brought a lot of advantages, wider funding, and more mission options, but also its own problems.

In the next post, we’ll look at this program, what its plans, results, and complications were, and what the legacy of this program was.

References and Further Reading

Timber Wind/SNTP General References

Haslett, E. A., “SPACE NUCLEAR THERMAL PROPULSION PROGRAM FINAL REPORT” https://apps.dtic.mil/dtic/tr/fulltext/u2/a305996.pdf

Orbital Transfer Vehicle

Powell et al, “NUCLEAR PROPULSION SYSTEMS FOR ORBIT TRANSFER BASED ON THE PARTICLE BED REACTOR” Brookhaven NL 1987 https://www.osti.gov/servlets/purl/6383303

Araj et al, “ULTRA-HIGH TEMPERATURE DIRECT PROPULSION” Brookhaven NL 1987 https://www.osti.gov/servlets/purl/6430200

Horn et al, “The Use of Nuclear Power for Bimodal Applications in Space,” Brookhaven NL 1987 https://www.osti.gov/servlets/purl/5555461

Multi-Megawatt Power Plant

Powell et al, “HIGH POWER DENSITY REACTORS BASED ON DIRECT COOLED PARTICLE BEDS” Brookhaven NL 1987 https://inis.iaea.org/collection/NCLCollectionStore/_Public/17/078/17078909.pdf

Marshall, A.C., “A Review of Gas-Cooled Reactor Concepts for SDI Applications” Sandia NL 1987 https://www.osti.gov/servlets/purl/5619371

“Atomic Power in Space: a History, chapter 15” https://inl.gov/wp-content/uploads/2017/08/AtomicPowerInSpaceII-AHistory_2015_chapters6-10.pdf


Pebblebed NTRs: Solid Fuel, but Different

Hello, and welcome back to Beyond NERVA!

Today, we’re going to take a break from the closed cycle gas core nuclear thermal rocket (which I’ve been working on constantly since mid-January) to look at one of the most popular designs in modern NTR history: the pebblebed reactor!

Honestly, I should have covered this between the solid and liquid fueled NTRs, and there are even a couple of reactor types which MAY be usable for NTRs in between as well – the fluidized bed and slurry fuel reactors – but with the lack of information on liquid fueled reactors online I got a bit overzealous.

Beads to Explore the Solar System

Most of the solid fueled NTRs we’ve looked at have been either part of, or heavily influenced by, the Rover and NERVA programs in the US. These types of reactors, also called “prismatic fuel reactors,” use a solid block of fuel of some form, usually tileable, with holes drilled through each fuel element.

The other designs we’ve covered fall into one of two categories, either a bundled fuel element, such as the Russian RD-0410, or a folded flow disc design such as the Dumbo or Tricarbide Disc NTRs.

However, there’s another option which is far more popular for modern American high temperature gas cooled reactor designs: the pebblebed reactor. This is a clever design, which increases the surface area of the fuel by using many small, spherical fuel elements held in a (usually) unfueled structure. The coolant/propellant passes between these beads, picking up the heat as it passes between them.

This has a number of fundamental advantages over the prismatic style fuel elements:

  1. The surface area of the fuel is so much greater than with simple holes drilled in the prismatic fuel elements, increasing thermal transfer efficiency.
  2. Since all types of fuel swell when heated, the density of the packed fuel elements could be adjusted to allow for better thermal expansion behavior within the active region of the reactor.
  3. The fuel elements themselves were reasonably loosely contained within separate structures, allowing for higher temperature containment materials to be used.
  4. The individual elements could be made smaller, allowing for a lower temperature gradient from the inside to the outside of a fuel, reducing the overall thermal stress on each fuel pebble.
  5. In a folded flow design, it was possible to not even have a physical structure along the inside of the annulus if centrifugal force was applied to the fuel element structure (as we saw in the fluid fueled reactor designs), eliminating the need for as many super-high temperature materials in the highest temperature region of the reactor.
  6. Because each bead is individually clad, in the case of an accident during launch, even if the reactor core is breached and a fuel release into the environment occurs, the release of either any radiological components or any other fuel materials into the environment is minimized.
  7. Because each bead is relatively small, it is less likely that they will sustain sufficient damage either during mechanical failure of the flight vehicle or impact with the ground that would breach the cladding.

However, there is a complication with this design type as well, since there are many (usually hundreds, sometimes thousands) of individual fuel elements:

  1. Large numbers of fuel beads mean large numbers of fuel beads to manufacture and perform quality control checks on.
  2. Each bead will need to be individually clad, sometimes with multiple barriers for fission product release, hydrogen corrosion, and the like.
  3. While each fuel bead will be individually clad, and so the loss of one or all the fuel will not significantly impact the environment from a radiological perspective in the case of an accident, there is potential for significant geographic dispersal of the fuel in the event of a failure-to-orbit or other accident.

There are a number of different possible flow paths through the fuel elements, but the two most common are either an axial flow, where the propellant passes through a tubular structure packed with the fuel elements, or a folded flow design, where the fuel is in a porous annular structure, with the coolant (usually) passing from the outside of the annulus, through the fuel, and the now-heated coolant exiting through the central void of the annulus. We’ll call these direct flow and folded flow pebblebed fuel elements.

In addition, there are many different possible fuel types, which regulars of this blog will be familiar with by now: oxides, carbides, nitrides, and CERMET are all possible in a pebblebed design, and if differential fissile fuel loading is needed, or gradients in fuel composition (such as using tungsten CERMET in higher temperature portions of the reactor, with beryllium or molybdenum CERMET in lower temperature sections), this can be achieved using individual, internally homogeneous fuel types in the beads, which can be loaded into the fuel support structure at the appropriate time to create the desired gradient.

Just like in “regular” fuel elements, these pebbles need to be clad in a protective coating. There have been many proposals over the years, with the choice obviously depending on the fissile fuel matrix, to ensure thermal expansion and chemical compatibility with both the fuel and the coolant. Often, multiple layers of different materials are used to ensure the structural and chemical integrity of the fuel pellets. Perhaps the best known example of this today is the TRISO fuel element, used in the US Advanced Gas Reactor fuel development program. The TRI-Structural ISOtropic fuel element uses either oxide or carbide fuel in the center, followed by a porous carbon layer, a pyrolytic carbon layer (sort of like graphite, but with some covalent bonds between the carbon sheets), followed by a silicon carbide outer shell for mechanical and fission product retention. Some variations include a burnable poison for reactivity control (the QUADRISO at Argonne), or use different outer layer materials for chemical protection. Several types have been suggested for NTR designs, and we’ll see more of them later.

The (sort of) final significant variable is the size of the pebble. As the pebbles go down in size, the available surface area of the fuel-to-coolant interface increases, but also the amount of available space between the pebbles decreases and the path that the coolant flows through becomes more resistant to higher coolant flow rates. Depending on the operating temperature and pressure, the thermal gradient acceptable in the fuel, the amount of decay heat that you want to have to deal with on shutdown (the bigger the fuel pebble, the more time it will take to cool down), fissile fuel density, clad thickness requirements, and other variables, a final size for the fuel pebbles can be calculated, and will vary to a certain degree between different reactor designs.
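
The surface-area side of this trade is easy to quantify: N spheres of radius r holding a fixed fuel volume V present a total surface area A = 3V/r, so halving the pebble diameter doubles the heat-transfer area. A minimal sketch with illustrative numbers (the 50-liter fuel volume is my assumption):

```python
def total_surface_area(fuel_volume_m3, pebble_diameter_m):
    """Total fuel-coolant interface area for a fixed fuel volume.

    N spheres of radius r holding volume V give A = N * 4*pi*r^2 = 3*V/r.
    """
    r = pebble_diameter_m / 2
    return 3 * fuel_volume_m3 / r

# Illustrative: 50 liters of fuel as 5 mm vs 0.5 mm pebbles
a_5mm = total_surface_area(0.05, 5e-3)
a_05mm = total_surface_area(0.05, 5e-4)
print(round(a_5mm), round(a_05mm))  # 60 600 (m^2): 10x smaller pebbles, 10x the area
```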

Not Just for NTRs: The Electricity Generation Potential of Pebblebed Reactors

Obviously, the majority of pebblebed reactor designs are not meant to ever fly in space; they’re mostly meant to operate as high temperature gas cooled reactors on Earth. This type of architecture has been proposed for astronuclear designs as well, although that isn’t the focus of this post.

Furthermore, the pebblebed design lends itself to other cooling methods, such as molten salt, liquid metal, and other heat-carrying fluids, which like the gas would flow through the fuel pellets, pick up the heat produced by the fissioning fuel, and carry it into a power conversion system of whatever design the reactor has integrated into its systems.

Finally, while it’s rare, pebblebed designs were popular for a while with radioisotope power systems. There are a number of reasons for this beyond being able to run a liquid coolant through the fuel (which was done on one occasion that I can think of, and which we’ll cover in a future post): in an alpha-emitting radioisotope, such as 238Pu, the fuel will generate helium gas over time – the alpha particles (which are doubly ionized helium nuclei) will slow, stop, and strip electrons off whatever materials are around, becoming normal 4He. This gas needs SOMEWHERE to go, which is why, just like with a fissile fuel structure, there are gas management mechanisms used in radioisotope power source fuel assemblies, such as areas of vacuum, pressure relief valves, and the like. In some types of RTG, such as the SNAP-27 RTG used by Apollo, as well as the Multi-Hundred Watt RTG used by Voyager, the fuel was made into spheres, with the gaps between the spheres (normally used to pass coolant through) serving as the gas expansion volume.
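
The helium buildup is straightforward to estimate from the decay law N_He = N0·(1 − e^(−λt)), since each alpha decay yields one helium atom. A sketch for 238Pu (the one-kilogram, ten-year figures are my illustrative choices, not from the source):

```python
import math

HALF_LIFE_YR = 87.7  # 238Pu half-life, years
MOLAR_MASS = 238.0   # g/mol

def helium_liters_stp(pu_kg, years):
    """Liters of helium (at 0 C, 1 atm) generated by alpha decay of 238Pu.

    One alpha per decay -> one He atom; N_He = N0 * (1 - e^(-lambda * t)).
    """
    mol_pu = pu_kg * 1000 / MOLAR_MASS
    lam = math.log(2) / HALF_LIFE_YR
    mol_he = mol_pu * (1 - math.exp(-lam * years))
    return mol_he * 22.414  # molar volume at STP, L/mol

# ~7 liters of gas from 1 kg of fuel over a 10-year mission
print(round(helium_liters_stp(1.0, 10), 1))
```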

We’ll discuss these ideas more in the future, but I figured it was important to point out here. Let’s get back to the NTRs, and the first (and only major) NTR program to focus on the pebblebed concept: the Project Timberwind and the Space Nuclear Propulsion Program in the 1980s and early 1990s.

The Beginnings of Pebblebed NTRs

The first proposals for a gas cooled pebblebed reactor date from 1944/45, although they were never pursued beyond the concept stage, and a proposal for the “Space Vehicle Propulsion Reactor” was made by Levoy and Newgard at Thiokol in 1960, again with no further development. If you can get that paper, I’d love to read it; here’s all I’ve got: “Aero/Space Engineering 19, no. 4, pgs 54-58, April 1960,” “AAE Journal, 68, no. 6, pgs. 46-50, June 1960,” and “Engineering 189, pg 755, June 3, 1960.” Sounds like they pushed hard, and for good reason, but at the time a pebblebed reactor was a radical concept for a terrestrial reactor, and a prismatic fueled reactor – something far more familiar to nuclear engineers – was a challenge that seemed far simpler.

Sadly, while this design may have ended up informing the design of its contemporary reactors, it seems this proposal was never pursued.

Rotating Fluidized Bed Reactor (“Hatch” Reactor) and the Groundwork for Timberwind

Another proposal was made at the same time at Brookhaven National Laboratory, by L.P. Hatch, W.H. Regan, and a name that will continue to come up for the rest of this series, John R. Powell (sorry, can’t find the given names of the other two, even). This relied on very small (100-500 micrometer) fuel, held in a perforated drum to contain the fuel but also allow propellant to be injected into the fuel particles, which was spun at a high rate to provide centrifugal force to the particles and prevent them from escaping.

Now, fluidized beds need a bit of explanation, which I figured was best to put here since this is not a generalized property of pebblebed reactors. In this reactor (and some others) the pebbles are quite small, and the coolant flow can be quite high. This means that it’s possible – and sometimes desirable – for the pebbles to move through the active zone of the reactor! This type of mobile fuel is called a “fluidized bed” reactor, and comes in several variants, including pebble (solid spheres), slurry (solid particulate suspended in a liquid), and colloid (solid particulate suspended in a gas). The best way to describe the phenomenon is with what is called the point of minimum fluidization: when the drag forces on the mass of solid particles from the fluid flow balance the weight of the bed (keep in mind that lift is a specialized form of drag). There are a number of reasons to do this – in fact, many chemical reactions using a solid and a fluid component use fluidization to ensure maximum mixing of the components. In the case of an NTR, the concern is more to do with achieving as close to thermal equilibrium between the solid fuel and the gaseous propellant as possible, while minimizing the pressure drop between the cold propellant inlet and the hot propellant outlet. For an NTR, the “weight” is applied through centrifugal force on the fuel. This is a familiar concept to those who read my liquid fueled NTR series, but it actually began with the fluidized bed concept.

This is calculated using two different relations between the same variables: the Reynolds number (Re), which determines how turbulent the fluid flow is, and the friction coefficient (CD, or coefficient of drag, which determines how much force acts on the fuel particles based on fluid interactions with the particles), which can be found plotted below. The plotted lines represent either the Reynolds number or the void fraction ε, which represents the amount of gas present in the volume defined by the presence of fuel particles.

Hendrie 1970

If you don’t follow the technical details of the relationships depicted, that’s more than OK! Basically, the y axis is proportional to the gas turbulence, while the x axis is proportional to the particle diameter, so you can see that for relatively small increases in particle size you can get larger increases in propellant flow rates.
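
The minimum-fluidization balance described above is commonly estimated with the Wen-Yu correlation (my choice of correlation for illustration – the source presents the relation graphically). In a rotating bed, the acceleration is centrifugal rather than gravitational; all the particle and gas properties below are assumed values, not from the source:

```python
import math

def min_fluidization_velocity(d_p, rho_p, rho_g, mu, accel):
    """Wen-Yu correlation for the minimum fluidization velocity.

    Ar    = rho_g * (rho_p - rho_g) * a * d^3 / mu^2   (Archimedes number)
    Re_mf = sqrt(33.7^2 + 0.0408 * Ar) - 33.7
    u_mf  = Re_mf * mu / (rho_g * d)
    """
    ar = rho_g * (rho_p - rho_g) * accel * d_p**3 / mu**2
    re_mf = math.sqrt(33.7**2 + 0.0408 * ar) - 33.7
    return re_mf * mu / (rho_g * d_p)

# Assumed: 300-micron carbide particles (~6,000 kg/m^3) in cold hydrogen
# (~1 kg/m^3, mu ~ 9e-6 Pa s), with 100 g of centrifugal acceleration
u = min_fluidization_velocity(300e-6, 6000.0, 1.0, 9e-6, 100 * 9.80665)
print(round(u, 1))  # minimum fluidization velocity, m/s
```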

The next proposal for a pebble bed reactor grew directly out of the Hatch reactor, the Rotating Fluidized Bed Reactor for Space Nuclear Propulsion (RBR). From the documentation I’ve been able to find, from the original proposal work continued at a very low level at BNL from the time of the original proposal until 1973, but the only reports I’ve been able to find are from 1971-73 under the RBR name. A rotating fuel structure, with small, 100-500 micrometer spherical particles of uranium-zirconium carbide fuel (the ZrC forming the outer clad and a maximum U content of 10% to maximize thermal limits of the fuel particles), was surrounded by a reflector of either metallic beryllium or BeO (which was preferred as a moderator, but the increased density also increased both reactor mass and manufacturing requirements). Four drums in the reflector would control the reactivity of the engine, and an electric motor would be attached to a porous “squirrel cage” frit, which would rotate to contain the fuel.

Much discussion was had as to the form of uranium used, be it 235U or 233U. In the 235U reactor, the reactor had a cavity length of 25 in (63.5 cm), an inner diameter of 25 in (63.5 cm), and a fuel bed depth when fluidized of 4 in (10.2 cm), with a critical mass of U-ZrC being achieved at 343.5 lbs (155.8 kg) with 9.5% U content. The 233U reactor was smaller, at 23 in (56 cm) cavity length, 20 in (51 cm) bed inner diameter, 3 in (7.62 cm) deep fuel bed with a higher (70%) void fraction, and only 105.6 lbs (47.9 kg) of U-ZrC fuel at a lower (and therefore more temperature-tolerant) 7.5% U loading.

233U was the much preferred fuel in this reactor, with two options being available to the designers: either the decreased fuel loading could be used to form the smaller, higher thrust-to-weight ratio engine described above, or the reactor could remain at the dimensions of the 235U-fueled option, but the temperature could be increased to improve the specific impulse of the engine.

There was also a trade-off between the size of the fuel particles and the thermal efficiency of the reactor:

  • Smaller particle advantages
    • Higher surface area, and therefore better thermal transfer capability
    • Smaller radius reduces thermal stresses on the fuel
  • Smaller particle disadvantages
    • Fuel loss from the fluidized bed would be a more immediate concern
    • More sensitive to fluid dynamic behavior in the bed
    • Bubbles could more easily form in the fuel
    • Higher centrifugal force required for fuel containment
  • Larger particle advantages
    • Ease of manufacture
    • Lower centrifugal force requirement for a given propellant flow rate
  • Larger particle disadvantages
    • Higher thermal gradients and stresses in the fuel particles
    • Less surface area, so lower thermal transfer efficiency

It would require testing to determine the best fuel particle size, which could largely be done through cold flow testing.
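The scaling behind these trade-offs can be sketched numerically. This is my own illustrative calculation, not from the BNL reports: for a uniformly heated sphere, the center-to-surface temperature rise grows with the square of the radius, while the heat-transfer surface area per unit mass shrinks linearly with radius. The power density, conductivity, and fuel density below are rough assumptions for a carbide fuel, not reported values.

```python
# Sketch of the particle-size trade-off: dT across a uniformly heated
# sphere is q''' * r^2 / (6k), while surface area per kg scales as 1/r.
def particle_tradeoff(radius_m, q_vol=5e9, k=20.0, density=6500.0):
    """q_vol: volumetric heating, W/m^3; k: conductivity, W/m/K;
    density: particle density, kg/m^3 -- all assumed round numbers."""
    dT = q_vol * radius_m**2 / (6.0 * k)      # center-to-surface rise, K
    area_per_kg = 3.0 / (density * radius_m)  # m^2 of surface per kg of fuel
    return dT, area_per_kg

for r_um in (50, 100, 250):                   # particle radii, micrometers
    dT, a = particle_tradeoff(r_um * 1e-6)
    print(f"r = {r_um:4d} um: center dT = {dT:7.2f} K, area = {a:6.1f} m^2/kg")
```

Doubling the radius quadruples the internal temperature rise (and the thermal stress that comes with it) while halving the heat-transfer area per kilogram, which is exactly the tension in the list above.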

These studies looked at cold flow testing in depth. While this is something that I’ve usually skipped over in my reporting on NTR development, it’s a crucial type of testing in any gas cooled reactor, and even more so in a fluidized bed NTR, so let’s take a look at what it’s like in a pebblebed reactor: the equipment, the data collection, and how the data modified the reactor design over time.

Cold flow testing is usually the predecessor to electrically heated flow testing in an NTR. These tests determine a number of things, including areas within the reactor that may end up with stagnant propellant (not a good thing), undesired turbulence, and other negative consequences to the flow of gas through the reactor. They are preliminary tests, since as the propellant heats up while going through the reactor, two major things change: first, the density of the gas decreases, and second, the Reynolds number (the ratio of inertial to viscous forces in the flow, which governs turbulent vs laminar behavior) changes along with it.
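A rough sketch of why cold-flow results only partially transfer to hot flow: at a fixed mass flux G, the channel Reynolds number is Re = G·D/μ, so it depends only on viscosity, and gas viscosity rises with temperature. The hydrogen viscosity power law below is a textbook-style approximation I'm assuming, not a value from the reports.

```python
# Reynolds number at fixed mass flux: Re = G * D / mu(T).
# mu(T) uses an approximate power law for hydrogen viscosity.
def reynolds(mass_flux, diameter, temperature_K):
    mu_ref, T_ref = 8.9e-6, 300.0      # H2 viscosity ~8.9 uPa*s at 300 K
    mu = mu_ref * (temperature_K / T_ref) ** 0.7
    return mass_flux * diameter / mu

G, D = 50.0, 3e-4                      # kg/m^2/s, ~300-micron length scale
print(reynolds(G, D, 300.0))           # cold-flow test condition
print(reynolds(G, D, 2400.0))          # hot operating condition
```

The hot-flow Reynolds number comes out several times lower than the cold-flow value at the same mass flux, which is why cold-flow data has to be extrapolated carefully before being applied to the operating reactor.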

In this case, the cold flow tests were especially useful, since one of the biggest considerations in this reactor type is how the gas and fuel interact.

The first consideration that needed to be examined was the pressure drop across the fuel bed – the highest pressure point in the system is always at the turbopump, and the pressure decreases from that point throughout the system due to friction with the pipes carrying propellant, heating effects, and a host of other inefficiencies. One of the biggest initial questions in this design was how much pressure would be lost from the frit (the outer containment structure and propellant injection system into the fuel) to the central void in the body of the fuel, from which the gas exits through the nozzle. Happily, this pressure drop was minimal: according to initial testing in the early 1960s (more on that below), the pressure drop was simply equal to the weight of the fuel bed.
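The "pressure drop equals bed weight" result is the standard fluidized-bed force balance: once fluidized, the gas stream has to support the bed, so the pressure drop is the bed's apparent weight divided by the frit area. A minimal sketch, using the 155.8 kg fuel mass quoted above but with an assumed centrifugal field and frit area (neither is a reported figure):

```python
# Fluidized-bed force balance: dP = (bed mass * acceleration) / frit area.
def bed_pressure_drop(bed_mass_kg, accel_ms2, frit_area_m2):
    return bed_mass_kg * accel_ms2 / frit_area_m2   # Pa

# 155.8 kg bed (the 235U figure above) in a hypothetical 300-g
# centrifugal field, spread over an assumed ~1 m^2 of frit area.
dP = bed_pressure_drop(155.8, 300 * 9.81, 1.0)
print(f"{dP / 1e5:.2f} bar")
```

Note that in a centrifugal bed the "weight" is set by the rotation rate, not Earth gravity, so the same balance holds at hundreds of gees.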

The next consideration was the range between fluidizing the fuel and losing the fuel through literally blowing it out the nozzle – otherwise known as entrainment, a problem we looked at extensively on a per-molecule basis in the liquid fueled NTR posts (since that was the major problem with all those designs). Initial calculations and some basic experiments were able to map the propellant flow rate and centrifugal force required to both get the benefit of a fluidized bed and prevent fuel loss.
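The entrainment limit can be sketched with a Stokes-law balance: a particle is lost once the gas velocity through the bed exceeds the speed at which drag cancels the particle's centrifugal "weight." This is my own illustration (Stokes drag is only valid at low particle Reynolds number, so treat the absolute numbers as rough), with assumed particle and gas properties:

```python
# Stokes-law terminal velocity: the gas speed above which drag on a
# particle exceeds its apparent weight and the particle is entrained.
def stokes_terminal_velocity(d_m, accel, rho_p=6500.0, rho_g=1.2, mu=1.8e-5):
    """d_m: particle diameter, m; accel: centrifugal acceleration, m/s^2;
    rho_p, rho_g, mu: assumed particle/gas densities and gas viscosity."""
    return (rho_p - rho_g) * accel * d_m**2 / (18.0 * mu)

# Raising the centrifugal field raises the allowable gas velocity in
# direct proportion, widening the window between fluidization and loss.
print(stokes_terminal_velocity(100e-6, 9.81))        # 1 g
print(stokes_terminal_velocity(100e-6, 100 * 9.81))  # 100 g
```

This is the basic reason the rotation rate and propellant flow rate had to be mapped together: spin faster and you can push more gas through before the bed starts blowing out the nozzle.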

Rotating Fluidized Bed Reactor testbed test showing bubble formation

Another concern is the formation of bubbles in the fuel body. As we covered in the bubbler LNTR post (which you can find here), bubbles are a problem in any fuel type, but in a fluid fueled reactor with coolant passing through it there are special challenges. In this case, the main method of transferring heat from the fuel to the propellant is convection (i.e. contact between the fuel and the propellant causing vortices in the gas which distribute the heat), so an area that doesn’t have any (or has minimal) fuel particles in it will not get heated as thoroughly. That’s a headache not only because the overall propellant temperature drops (in proportion to the size of the bubbles), but also because it changes the power distribution in the reactor (the bubbles are fission blank spots).

Finally, the initial experiment set looked at the particle-to-fluid thermal transfer coefficients. These tests were far from ideal, using a 1 g system rather than the much higher planned centrifugal forces, but they did give some initial numbers.

The first round of tests was done at Brookhaven National Laboratory (BNL) from 1962 to 1966, using a relatively simple test facility. A small, 10” (25.4 cm) length by 1” (2.54 cm) diameter centrifuge was installed, with gas pressure provided by a pressurized liquefied air system. 138 to 3450 grams of glass particles were loaded into the centrifuge, and various rotational velocities and gas pressures were used to test the basic behavior of the particles under both centrifugal force and gas pressure. While some bubbles were observed, the fuel beds remained stable and no fuel particles were lost during testing, a promising beginning.

These tests provided not just initial thermal transfer estimates, pressure drop calculations, and fuel bed behavioral information, but also informed the design of a new, larger test rig, this one 10 in by 10 in (25.4 by 25.4 cm), which was begun in 1966. This system would not only have a larger centrifuge, but would also use liquid nitrogen rather than liquefied air, be able to test different fuel particle simulants rather than just relatively lightweight glass, and provide much more detailed data. Sadly, the program ran out of funding later that year, and the partially completed test rig was mothballed.

Rotating Fluidized Bed Reactor (RBR): New Life for the Hatch Reactor

It would take until 1970 for work to resume, when the Space Nuclear Systems office of the Atomic Energy Commission and NASA provided additional funding to complete the test stand and conduct a series of experiments on particle behavior, reactor dynamics and optimization, and other analytical studies of a potential advanced pebblebed NTR.

The First Year: June 1970-June 1971

After completing the test stand, the team at BNL began a series of tests with this larger, more capable equipment in Building 835. The first, most obvious difference was the diameter of the centrifuge, which was upgraded from 1 inch to 10 inches (25.4 cm), allowing for a more prototypical fuel bed depth. The centrifuge itself was made out of perforated aluminum, held in a stainless steel pressure housing that fed the pressurized gas through the fuel bed. In addition, the gas system was changed from the pressurized air system to one designed to operate on nitrogen, which was stored in liquid form in trailers outside the building for ease of refilling (and safety), then pre-vaporized and held in two other, high-pressure trailers.

Photographs were used to record fluidization behavior, taken viewing the bottom of the bed from underneath the apparatus. While initially photos could only be taken 5 seconds apart, later upgrades would improve this over the course of the program.

The other major piece of instrumentation surrounded the pressure and flow rate of the nitrogen gas throughout the system. The gas was introduced at a known pressure through two inlets into the primary steel body of the test stand, with measurements of upstream pressure, cylindrical cavity pressure outside the frit, and finally a pitot tube to measure pressure inside the central void of the centrifuge.

Three main areas of pressure drop were of interest: due to the perforated frit itself, the passage of the gas through the fuel bed, and finally from the surface of the bed and into the central void of the centrifuge, all of which needed to be measured accurately, requiring calibration of not only the sensors but also known losses unique to the test stand itself.

The tests themselves were undertaken with a range of glass particle sizes from 100 to 500 micrometers in diameter, similar to the earlier tests, as well as 500 micrometer copper particles to more closely replicate the density of the U-ZrC fuel. Rotation rates of between 1,000 and 2,000 rpm and gas flow rates from 1,340-1,800 scf/m (38-51 m^3/min) were used with the glass beads, and rotation rates of 700-1,500 rpm with the copper particles (the lower rotation rate was due to gas pressure feed limitations preventing the bed from becoming fully fluidized with the more massive particles).
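It's worth pausing on what those rotation rates mean physically. Taking the bed to sit near the frit at roughly the 10 in (25.4 cm) drum's radius of 0.127 m (my assumption about the geometry), the centripetal acceleration a = ω²r works out to hundreds of gees:

```python
# Centrifugal g-level at the frit for the quoted rotation rates,
# assuming the bed sits at r ~ 0.127 m (the 10-inch drum radius).
import math

def gee_level(rpm, radius_m=0.127):
    omega = rpm * 2.0 * math.pi / 60.0   # rad/s
    return omega**2 * radius_m / 9.81    # in Earth gravities

print(round(gee_level(1000)))   # lower end of the glass-bead tests -> 142
print(round(gee_level(2000)))   # upper end -> 568
```

Since the acceleration scales with the square of the rotation rate, doubling the rpm quadruples the force pinning the particles against the frit, which is why the heavier copper particles (run at lower rpm) were harder to fully fluidize with the available gas pressure.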

Finally, there were a series of physics and mechanical engineering design calculations that were carried out to continue to develop the nuclear engineering, mechanical design, and system optimization of the final RBR.

The results from the initial testing were promising: much of the testing was focused on getting the new test stand commissioned and calibrated, with a focus on figuring out how to both use the rig as it was constructed as well as which parts (such as the photography setup) could be improved in the next fiscal year of testing. However, particle dynamics in the fluidized bed were comfortably within stable, expected behavior, and while there were interesting findings as to the variation in pressure drop along the axis of the central void, this was something that could be worked with.

Based on the calculations performed, as well as the experiments carried out in the first year of the program, a range of engine designs was determined for both 233U and 235U variants:

Work Continues: 1971-1972

This led directly into the 1971-72 series of experiments and calculations. Now that the test stand had been mostly completed (although modifications would continue), and the behavior of the test stand was now well-understood, more focused experimentation could continue, and the calculations of the physics and engineering considerations in the reactor and engine system could be advanced on a more firm footing.

One major change in this year’s design choices was the shift toward a low-thrust, high-isp system, in part due to greater interest at NASA and the AEC in a smaller NTR than the original design envelope. While analyzing the proposed engine sizes above, though, it was discovered that the two smallest reactors were simply not practical, meaning that the smallest workable design was over the 1 GW power level.

Another thing emphasized during this period on the optimization side of the program was the mass of the reflector. Since the low thrust option was now the main focus of the design, any increase in the mass of the reactor system has a larger impact on the thrust-to-weight ratio, but reducing the reflector thickness also increases the neutron leakage rate. To limit that leakage, a narrower nozzle throat is preferred, but a narrower throat also increases the thermal loading across the throat itself, meaning that additional cooling, and probably more mass, is needed – especially in a high-specific-impulse (aka high temperature) system. A narrower throat also requires higher chamber pressures to maintain the desired thrust level (with the same mass flow throughput, the pressure in the central void has to be higher to push the propellant through a smaller opening).

These changes required a redesign of the reactor itself, with a new critical configuration:

Hendrie 1972

One major change is how fluidized the bed actually is during operation. In order to get full fluidization, there needs to be enough inward (“upward” in terms of force vectors) velocity at the inner surface of the fuel body to lift the fuel particles without losing them out the nozzle. During calculations in both the first and second years, two major subsystems contributed hugely to the weight and were very dependent on both the rotational speed and the pellet size/mass: the weight of the frit and motor system, which holds the fuel particles, and the weight of the nozzle, which not only forms the outlet-end containment structure for the fuel but also (through the challenges of rocket motor dynamics) is linked to the chamber pressure of the reactor – oh, and the narrower the nozzle, the less surface area is available to reject the heat from the propellant, so the harder it is to keep cool enough that it doesn’t melt.

Now, fluidization isn’t a binary system: a pebblebed reactor is able to be settled (no fluidization), partially fluidized (usually expressed as a percentage of the pebblebed being fluidized), and fully fluidized to varying degrees (usually expressed as a percentage of the volume occupied by the pebbles being composed of the fluid). So there’s a huge range, from fully settled to >95% fluid in a fully fluidized bed.

The designers of the RBR weren’t going for excess fluidization: at some point, the designer faces diminishing returns on the complications required to maintain that level of fluid flow (I’m sure it’s the same, with different criteria, in the chemical industry, where most fluidized beds actually are used). More powerful turbopumps are needed for the hydrogen, the propellant stops being fully thermalized because there’s simply too much of it to heat, and fuel loss from particulate fuel being blown out of the nozzle becomes a real concern. So the calculations for the bed dynamics assumed minimal full fluidization (i.e. the point where all the pebbles are moving in the reactor) as the maximum flow rate – somewhere around 70% gas in the fuel volume (that number was never specifically defined in the source documentation that I found; if it was, please let me know) – though this point depends on both the pressure drop in the reactor (which is related to the mass of the particle bed) and the gas flow.

Ludewig 1974

However, the designers at this point decided that full fluidization wasn’t actually necessary – and in fact was detrimental – to this particular NTR design. Because of the dynamics of the design, the first particles to be fluidized were on the inner surface of the fuel bed, and as the fluidization percentage increased, the pebbles further toward the outer circumference became fluidized. The temperature difference between the fuel and the propellant is greatest as the propellant is injected through the frit into the fuel body, so more heat is carried away by the propellant per unit mass there; as the propellant warms up, thermal transfer becomes less efficient (the temperature difference between two objects is one of the major variables in how much energy is transferred for a given surface area), and fluidization is what boosts that transfer efficiency between a solid and a fluid.

Because of this, the engineers re-thought what “minimal fluidization” actually meant. If the bed could be fluidized enough to maximize the benefit of that dynamic, while at a minimum level of fluidization to minimize the volume the pebblebed actually took up in the reactor, there would be a few key benefits:

  1. The fueled volume of the reactor could be smaller, meaning that the nozzle could be wider, so they could have lower chamber pressure and also more surface area for active cooling of the nozzle
  2. The amount of propellant flow could be lower, meaning that turbopump assemblies could be smaller and lighter weight
  3. The frit could be made less robustly, saving on weight and simplifying the challenges of the bearings for the frit assembly
  4. The nozzle, frit, and motor/drive assembly for the frit are all net neutron poisons in the RBR, meaning that minimizing any of these structures’ overall mass improves the neutron economy in the reactor, leading to either a lower mass reactor or a lower U mass fraction in the fuel (as we discussed in the 233U vs. 235U design trade-off)

After going through the various options, the designers decided to go with a partially fluidized bed. At this point in the design evolution, they decided on having about 50% of the bed by mass being fluidized, with the rest being settled (there’s a transition point in the fuel body where partial fluidization is occurring, and they briefly discuss the challenges of modeling that portion of the bed’s dynamics). This maximizes the benefit at the circumference, where the thermal difference (and therefore the thermal exchange between the fuel and the propellant) is greatest, while still thermalizing the propellant as much as possible as the temperature difference shrinks with the propellant becoming increasingly hotter. They still managed to reach an impressive 2400 K propellant cavity temperature with this reactor, which makes it one of the hottest (and therefore highest isp) solid core NTR designs proposed at that time.
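The diminishing-returns argument can be sketched with a simple constant-wall-temperature heat exchanger relation: the propellant temperature approaches the fuel temperature exponentially as it crosses the bed, so most of the heating happens in the outer, coldest-gas layers. This is my own illustration; the NTU (number of transfer units) values and temperatures are assumed, not reported figures:

```python
# T_out = T_fuel - (T_fuel - T_in) * exp(-NTU): the propellant closes
# the gap to the fuel temperature exponentially with heat-transfer "depth".
import math

def outlet_temp(T_in, T_fuel, ntu):
    return T_fuel - (T_fuel - T_in) * math.exp(-ntu)

T_in, T_fuel = 100.0, 2500.0   # K, illustrative inlet and fuel temperatures
for ntu in (1.0, 2.0, 4.0):
    print(round(outlet_temp(T_in, T_fuel, ntu)))   # -> 1617, 2175, 2456
```

Each extra unit of transfer depth buys less and less temperature gain, which is why fluidizing the outermost (coldest-gas) half of the bed captures most of the benefit.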

This has various implications for the reactor, including the density of the fissile component of the fuel (as well as the other solid components that make up the pebbles), the void fraction of the reactor (what part of the reactor is made up of something other than fuel, in this particular instance hydrogen within the fuel), and other components, requiring a reworking of the nuclear modeling for the reactor.

An interesting thing to me in the Annual Progress Report (linked below) is the description of how this new critical configuration was modeled; while this is reasonably common knowledge among nuclear engineers from the days before computational modeling (and even to the present day), I’d never seen it explained in the literature before.

Basically, they made a set of extremely simplified (in both dimensionality and fidelity) one-dimensional models of various points in the reactor. They then assumed that each model could be rotated around its elevation to make something like an MRI slice of the nuclear behavior at that point in the reactor. Then they moved far enough away that conditions were different enough (say, where the frit turns in toward the middle of the reactor to hold the fuel, or where the nozzle starts, or even the center of the fuel compared to its edge) that the dynamics would change, and built the same sort of one-dimensional model there; they ended up doing this 18 times. Then, sort of like an MRI in reverse, they took these models, called “few-group” models, and combined them into a larger group – called a “macro-group” – for calculations able to handle the interactions between the different few-group simulations, building up a two-dimensional model of the reactor’s nuclear structure to determine the critical configuration of the reactor. They added a few other ways to subdivide the reactor for modeling – for instance, they split the neutron spectrum calculations into fast and thermal groups – but this is the general shape of how nuclear modeling is done.
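The slice-and-assemble idea can be illustrated with a toy example. This is my own much-simplified sketch, not the actual BNL method: compute a cheap one-dimensional radial profile at each of several axial stations, then stack the slices into a 2-D r-z map rather than solving the full 2-D problem at once.

```python
# Toy "slice and assemble" model: 1-D radial profiles at several axial
# stations, stacked into a 2-D map. Shapes are stand-ins for real physics.
import math

def radial_slice(n_r, axial_weight):
    # A cosine-ish 1-D flux shape, scaled by the slice's axial weighting --
    # a stand-in for an actual few-group radial calculation.
    return [axial_weight * math.cos(0.5 * math.pi * i / n_r) for i in range(n_r)]

n_r, n_z = 8, 18                  # 18 axial stations, as in the report
flux_map = []
for k in range(n_z):
    w = math.sin(math.pi * (k + 0.5) / n_z)   # axial shape weighting
    flux_map.append(radial_slice(n_r, w))

# flux_map[k][i] now approximates the flux at axial station k, radius i.
print(len(flux_map), len(flux_map[0]))        # -> 18 8
```

The real macro-group step then couples the slices (neutrons leak between elevations), which is the part that actually requires bookkeeping; the toy above only shows the decomposition.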

Ok, let’s get back to the RBR…

Experimental testing using the rotating pebblebed simulator continued through this fiscal year, with some modifications. A new, seamless frit structure was procured to eliminate some experimental uncertainty, the pressure measuring equipment was used to characterize more of the pressure drop across the system, and a long-standing challenge for the experimental team was finally resolved: 100 micrometer copper spheres regular enough in shape to provide a useful analogue to the UC-ZrC fuel (Cu specific gravity 8.9, UC-ZrC specific gravity ~6.5) were at last procured.

Additionally, while thermal transfer experiments had been done with the 1-gee small test apparatus which preceded the larger centrifugal setup (with variable gee forces available), the changes were too great to allow for accurate predictions of thermal transfer behavior. Therefore, thermal transfer experiments began to be planned for the new test rig – another expansion of the capabilities of the new system, which was now in rigorous use following the previous year’s commissioning and calibration testing. While the experiments weren’t conducted that year, setting up an experimental program requires careful analysis of what the test rig is capable of, and of how good data accuracy can be achieved given the experimental limitations of the design.

The major achievement of the year’s experimentation was a refined understanding of the relationship between particle size, centrifugal force, and the pressure drop from the turbopump through the frit inlet to the central cavity – most especially from the frit through the fuel body to the inner cavity – across a wide range of particle sizes, flow rates, and bed fluidization levels, which would be key as the design for the RBR evolved.

The New NTR Design: Mid-Thrust, Small RBR

So, given the priorities at both the AEC and NASA, it was decided that it was best to focus primarily on a given thrust, and try to optimize the thrust-to-weight ratio of the reactor around that thrust level, in part because the outlet temperature of the reactor – and therefore the specific impulse – was fixed by the engineering decisions made in regards to the rest of the reactor design. In this case, the target thrust was 90 kN (20,230 lbf), or about 120% of a Pewee-class engine.
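Fixing the thrust pins down the propellant flow once the specific impulse is set, via F = ṁ·Isp·g₀. A quick check, using an Isp of 900 s that I'm assuming as representative of a hot solid-core hydrogen NTR (it is not a reported RBR figure):

```python
# Mass flow implied by a thrust target: mdot = F / (Isp * g0).
G0 = 9.80665            # standard gravity, m/s^2

def mass_flow(thrust_N, isp_s):
    return thrust_N / (isp_s * G0)

# 90 kN at an assumed 900 s Isp:
print(round(mass_flow(90e3, 900), 2))   # -> 10.2 (kg/s of hydrogen)
```

That ~10 kg/s of hydrogen is what the frit, bed, and nozzle all have to pass, which is why the thrust target constrains so much of the rest of the design.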

This, of course, constrained the reactor design, which at this point in any reactor’s development is a good thing. Every general concept has a huge variety of options to play with: fuel type (oxide, carbide, nitride, metal, CERMET, etc), fissile component (233U and 235U being the big ones, but 242mAm, 251Cf, and other more exotic options exist), thrust level, physical dimensions, fuel size in the case of a PBR, and more can all be varied to a huge degree, so having a fixed target to work toward in one metric gives a reference point that the rest of the reactor design can work around.

Also, having an optimization point to work from is important, in this case thrust-to-weight ratio (T/W). Other options, such as specific impulse, for a target to maximize would lead to a very different reactor design, but at the time T/W was considered the most valuable consideration since one way or another the specific impulse would still be higher than the prismatic core NTRs currently under development as part of the NERVA program (being led by Los Alamos Scientific Laboratory and NASA, undergoing regular hot fire testing at the Jackass Flats, NV facility). Those engines, while promising, were limited by poor T/W ratios, so at the time a major goal for NTR improvement was to increase the T/W ratio of whatever came after – which might have been the RBR, if everything went smoothly.

One of the characteristics that has the biggest impact on the T/W ratio in the RBR is the nozzle throat diameter. The smaller the diameter, the higher the chamber pressure, which reduces the T/W ratio while increasing the amount of volume the fuel body can occupy given the same reactor dimensions – meaning that smaller fuel particles could be used, since there’s less chance that they would be lost out of the narrower nozzle throat. However, by increasing the nozzle throat diameter, the T/W ratio improved (up to a point), and the chamber pressure could be decreased, but at the cost of a larger particle size; this increases the thermal stresses in the fuel particles, and makes it more likely that some of them would fail – not as catastrophic as on a prismatic fueled reactor by any means, but still something to be avoided at all costs. Clearly a compromise would need to be reached.
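The throat-diameter/chamber-pressure coupling described above follows from choked flow in the nozzle: ṁ = Pc·At/c*, so at fixed mass flow the required chamber pressure scales inversely with throat area (i.e. with the square of the diameter). A sketch with an assumed characteristic velocity c* for hot hydrogen (a round number, not a reported value):

```python
# Choked-flow relation: Pc = mdot * c_star / A_throat.
import math

def chamber_pressure(mdot, throat_diameter_m, c_star=6000.0):
    """mdot: kg/s; c_star: characteristic velocity, m/s (assumed)."""
    A_t = math.pi * (throat_diameter_m / 2.0) ** 2
    return mdot * c_star / A_t   # Pa

mdot = 10.0                      # kg/s, illustrative
for d in (0.10, 0.15, 0.20):
    print(f"throat {d*100:.0f} cm -> Pc = {chamber_pressure(mdot, d)/1e5:.1f} bar")
```

Halving the throat diameter quadruples the chamber pressure for the same flow, which is exactly the lever the designers were trading against fuel containment and throat cooling.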

Here are some tables looking at the design options leading up to the 90 kN engine configuration with both the 233U and 235U fueled versions of the RBR:

After analyzing the various options, a number of lessons were learned:

  1. It was preferable to work from a fixed design point (the 90 kN thrust level), because while the reactor design was flexible, operating near an optimized power level was more workable from a reactor physics and thermal engineering point of view
  2. The main stress points on the design were reflector weight (one of the biggest mass components in the system), throat diameter (from both a mass and active cooling point of view as well as fuel containment), and particle size (from a thermal stress and heat transfer point of view)
  3. On these lower-thrust engines, 233U was looking far better than 235U for the fissile component, with a T/W ratio (without radiation shielding) of 65.7 N/kg compared to 33.3 N/kg respectively
    1. As reactor size increased, this difference reduced significantly, but with a constrained thrust level – and therefore reactor power – the difference was quite significant.

The End of the Line: RBR Winds Down

1973 was a bad year in the astronuclear engineering community. The flagship program, NERVA – which was approaching flight ready status with the XE-PRIME test, the successful testing of the flexible, (relatively) inexpensive Nuclear Furnace (meant to speed not only prismatic fuel element development but also a variety of other reactor architectures, such as the nuclear lightbulb we began looking at last time), and the establishment of a robust hot fire testing structure at Jackass Flats – was fighting for its life, and its funding, in the halls of Congress. The national attention, after the success of Apollo 11, was turning away from space, and the missions that made NTR technologically relevant – and a good investment – were disappearing from mission planners’ “to do” lists, migrating to “if we only had the money” ideas. The Rotating Fluidized Bed Reactor would be one of those casualties, and wouldn’t even last through the 1971/72 fiscal year.

This doesn’t mean that more work wasn’t done at Brookhaven, far from it! Both analytical and experimental work would continue on the design, with the new focus on the 90 kN thrust level, T/W optimized design discussed above making the effort more focused on the end goal.

Multi-program computational architecture used in 1972/73 for RBR, Hoffman 1973

On the analytical side, many of the components had reasonably good analytical models independently, but they weren’t well integrated. Additionally, new and improved analytical models for things like the turbopump system, system mass, and temperature and pressure drop in the reactor were developed over the last year, and these were integrated into a unified modeling structure involving multiple stacked models. For more information, check out the 1971-72 progress report linked in the references section.

The system developed was on the verge of being able to do dynamics modeling of the proposed reactor designs, and plans were laid out for what this proposed dynamic model system would look like, but sadly by the time this idea was mature enough to implement, funding had run out.

On the experimental side, further refinement of the test apparatus was completed. Most importantly, because of the new design requirements, and the limitations of the experiments that had been conducted so far, the test-bed’s nitrogen supply system had to be modified for higher gas throughput to handle a much thicker fuel bed than had been experimentally tested. Because of the limited information about multi-gee centrifugal force behavior in a pebblebed, the existing experimental data could only be used to inform the experimental program needed for the much thicker fuel bed required by the new design.

Additionally, as was discussed the previous year, thermal transfer testing in the multi-gee environment was necessary to properly evaluate thermal transfer in this novel reactor configuration, but the traditional methods of thermal transfer testing simply weren’t an option. Normally, the procedure would be to subject the bed to alternating temperatures of gas: cold gas would be used to chill the pebbles to gas-ambient temperatures, then hot gas would be used on the chilled pebbles until they achieved thermal equilibrium at the new temperature, then cold gas again, etc. The temperature of the exit gas, the pebbles, and the amount of gas (and time) needed to reach equilibrium would be analyzed, allowing accurate heat transfer coefficients to be obtained for a variety of pebble sizes, centrifugal forces, propellant flow rates, and so on – but this is a very energy-intensive process.
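The cycling method works because a pebble equilibrating with the gas follows a simple exponential whose time constant τ = m·cp/(h·A) contains the heat transfer coefficient h directly: fit τ from the measured temperature curves and you have h. A lumped-capacitance sketch (my own illustration; the time constant and temperatures are assumed, not measured values):

```python
# Lumped-capacitance pebble temperature during a hot-gas soak:
# T(t) = T_gas + (T0 - T_gas) * exp(-t / tau), with tau = m*cp / (h*A).
import math

def pebble_temp(t, T0, T_gas, tau):
    return T_gas + (T0 - T_gas) * math.exp(-t / tau)

# Illustrative numbers: a 300 K pebble in 1200 K gas with an assumed
# tau of 0.5 s. Fitting tau from measured curves yields h directly.
print(round(pebble_temp(1.5, 300.0, 1200.0, 0.5)))   # -> 1155
```

This is also why the alternating hot/cold approach is so energy-hungry: every data point requires driving the whole bed through that exponential to equilibrium, twice per cycle.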

An alternative was proposed, which would basically split the reactor’s propellant inlet into two halves, one hot and one cold. Stationary thermocouples placed through the central void in the centrifuge would record variations in the propellant at various points, and the gradient as the pebbles moved from hot to cold gas and back could get good quality data at a much lower energy cost – at the cost of data fidelity reducing in proportion to bed thickness. However, for a cash-strapped program, this was enough to get the data necessary to proceed with the 90 kN design that the RBR program was focused on.

Looking forward, while the team knew that this was the end of the line as far as current funding was concerned, they looked to how their data could be applied most effectively. The dynamics models were ready to be developed on the analytical side, and thermal cycling capability in the centrifugal test-bed would prepare the design for fission-powered testing. The plan was to address the acknowledged limitations with the largely theoretical dynamic model with hot-fired experimental data, which could be used to refine the analytical capabilities: the more the system was constrained, and the more experimental data that was collected, the less variability the analytical methods had to account for.

NASA had proposed a cavity reactor test-bed, which would serve primarily to test the open and closed cycle gas core NTRs also under development at the time, which could theoretically have been used to test the RBR as well in a hot-fire configuration due to its unique gas injection system. Sadly, this test-bed never came to be (it was canceled along with most other astronuclear programs), so the faint hope for fission-powered RBR testing in an existing facility died as well.

The Last Gasp for the RBR

The final paper that I was able to find on the Rotating Fluidized Bed Reactor was by Ludewig, Manning, and Raseman of Brookhaven in the Journal of Spacecraft, Vol 11, No 2, in 1974. The work leading up to the Brookhaven program, as well as the Brookhaven program itself, was summarized, and new ideas were thrown out as possibilities as well. It’s evident reading the paper that they still saw the promise in the RBR, and were looking to continue to develop the project under different funding structures.

Other than a brief mention of the possibility of continuous refueling, though, the system largely sits where it was in the middle of 1973, and from what I’ve seen no funding was forthcoming.

While this was undoubtedly a disappointing outcome (one virtually every astronuclear program in history has faced), and the RBR was never revived, the concept of a pebblebed NTR would gain new and better-funded interest in the decades to come.

This program, which has its own complex history, will be the subject for our next blog post: Project Timberwind and the Space Nuclear Thermal Propulsion program.

Conclusion

While the RBR was no more, the idea of a pebblebed NTR would live on, as I mentioned above. With a new, physically demanding job, finishing up moving, and the impacts of everything going on in the world right now, I’m not sure exactly when the next blog post is going to come out, but I have already started it, and it should hopefully be coming in relatively short order! After covering Timberwind, we’ll look at MITEE (the whole reason I’m going down this pebblebed rabbit hole, not that the digging hasn’t been fascinating!), before returning to the closed cycle gas core NTR series (which is already over 50 pages long!).

As ever, I’d like to thank my Patrons on Patreon (www.patreon.com/beyondnerva), especially in these incredibly financially difficult times. I definitely would have far more motivation challenges now than I would have without their support! They get early access to blog posts, 3d modeling work that I’m still moving forward on for an eventual YouTube channel, exclusive content, and more. If you’re financially able, consider becoming a Patron!

You can also follow me at https://twitter.com/BeyondNerva for more regular updates!

References

Rotating Fluidized Bed Reactor

Hendrie et al, “ROTATING FLUIDIZED BED REACTOR FOR SPACE NUCLEAR PROPULSION Annual Report: Design Studies and Experimental Results, June, 1970- June, 1971,” Brookhaven NL, August 1971 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19720017961.pdf

Hendrie et al, “ROTATING FLUIDIZED BED REACTOR FOR SPACE NUCLEAR PROPULSION Annual Report: Design Studies and Experimental Results, June 1971 – June 1972,” Brookhaven NL, Sept. 1972 https://inis.iaea.org/collection/NCLCollectionStore/_Public/04/061/4061469.pdf

Hoffman et al, “ROTATING FLUIDIZED BED REACTOR FOR SPACE NUCLEAR PROPULSION Annual Report: Design Studies and Experimental Results, July 1972 – January 1973,” Brookhaven NL, Sept 1973 https://inis.iaea.org/collection/NCLCollectionStore/_Public/05/125/5125213.pdf

Cavity Test Reactor

Whitmarsh, Jr, C. “PRELIMINARY NEUTRONIC ANALYSIS OF A CAVITY TEST REACTOR,” NASA Lewis Research Center 1973 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19730009949.pdf

Whitmarsh, Jr, C. “NUCLEAR CHARACTERISTICS OF A FISSIONING URANIUM PLASMA TEST REACTOR WITH LIGHT -WATER COOLING,” NASA Lewis Research Center 1973 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19730019930.pdf

Categories
Electric propulsion Electrostatic Propulsion Non-nuclear Testing Nuclear Electric Propulsion Test Stands Uncategorized

Electric Propulsion Part 2: Electrostatic Propulsion

Hello, and welcome back to Beyond NERVA! Today, we finish our look at electric propulsion systems by looking at electrostatic propulsion. This is easily the most common form of in-space electric propulsion system, and as we saw in our History of Electric Propulsion post, it’s also the first that was developed.

I apologize about how long it’s taken to get this blog post published. As I’ve mentioned before, electric propulsion is one of my weak subjects, so I’ve been very careful to try to ensure that the information that I’m giving is correct. Another complication came from the fact that I had no idea how complex and varied each type of drive system is. I have glossed over many details in this blog post on many of the systems, but I’ve also included an extensive list of documentation on all of the drive systems I discuss in the post at the end, so if you’re curious about the details of these systems, please check out the published papers on them!

Electrostatic Drives

By far the most common type of electric propulsion today, and the type most likely to be called an “ion thruster,” is electrostatic propulsion. The electrostatic effect was one of the first electrical effects ever formally described, and the first ever observed (lightning is an electrostatic phenomenon, after all). Electrostatics as a general field of study refers to the study of electric charges at rest (hence, “electro-static”). The electrostatic effect is the tendency of objects with a differential charge (one positive, one negative) to attract each other, and of objects with the same charge to repel each other. This occurs when electrons are stripped from, or added to, one material. Some of the earliest scientific experiments involving this effect used bars of amber and wool – the amber would become negatively charged, and the wool positively charged, due to the interactions of the very fine hairs of the wool and the crystalline and elemental composition of the amber (for the nitpicky types, this is known as the triboelectric effect, but it’s still a manifestation of the electrostatic effect). Other experimenters during the 18th and 19th centuries used cat fur instead of wool, a much more amusing mental image for building an electrostatic charge. However, we aren’t going to be discussing using a rotating wheel of cats to produce an electric thruster (although, if someone feels like animating said concept, I’d love to see it).

There are a number of designs that use electrostatic effects to produce thrust. Some are very similar to concepts that we discussed in the last post, like the RF ionized thruster (a major area of focus in Japan), the Electron Cyclotron Resonance thruster (which uses the same acceleration mechanism as VASIMR), and the largely-abandoned Cesium Contact thruster (which has a fair amount of similarities with a pulsed plasma or arcjet thruster). Others, such as the Field Emission Electrostatic Thruster (FEEP) and the Ionic Liquid Ion Source thruster (also sometimes called an electrospray thruster), have far fewer similarities. None of these, though, are nearly as common as the electron bombardment noble gas thruster types: the gridded ion (either electron bombardment, cyclotron resonance, or RF ionization) thruster and the Hall effect thruster (which also has two main types: the thruster with anode layer and the stationary plasma thruster). The gridded ion thruster, commonly just called an ion thruster, is the propulsion system of choice for interplanetary missions, because it has the highest specific impulse of any currently available propulsion system. Hall effect thrusters have lower specific impulse, but higher thrust, making them a popular choice for reaction control systems on commercial and military satellites.

Most electrostatic drives use an ionization chamber or zone to strip electrons from an easily-ionized material. These now-positively charged ions are then accelerated toward a negatively charged structure (or accelerated by an electromagnetic field, in some cases) and expelled out the back of the thruster. Because of the low density of these ion streams, and the lack of an expanding gas, a physical nozzle isn’t used: the characteristic bell-shaped de Laval nozzle of chemical or thermal engines is useless in this case. However, there are many ways that the propellant can be ionized, and many ways that it can be accelerated, leading to a huge variety of design options within the area of electrostatic propulsion.

Goddard drive drawing
Drawing for first patented electrostatic thruster, Goddard 1917

The first design for a practical electric propulsion system, patented by Robert Goddard in 1917, was an electrostatic device, and most designs since, both in the US and the USSR, have used this concept. In the earliest days of electric propulsion design, each country went a different way in developing this drive concept: the US focused on the technically simpler, but materially more problematic, gridded ion thruster, while the Soviet Union worked to develop the more technically promising, but more difficult to engineer, Hall thruster. Variations of each have been produced over the years, and additional options have been explored as well. These systems have traveled across much of the Solar System, including to several of the asteroids in the Main Belt, and provide much of the station-keeping thrust for satellites in orbit around Earth. Let’s go ahead and look at what the different types are, what their advantages and disadvantages are, and how they’ve been used in the past.

Gridded Ion Drives

nstar
NSTAR Gridded Ion Thruster, image courtesy NASA

This is the best-known of the electric propulsion thrusters of any type, and is often shortened to “ion drive.” Here, the thruster has four main parts: the propellant supply, an ionization chamber, an array of conductive grids, and a neutralizing beam emitter. The propellant can be anything that is easily ionized; cesium and mercury were the first options, but these have largely been replaced by xenon and argon.

Ion drive schematic, NASA
Gridded Ion Drive Schematic, image courtesy NASA

The type of ionization chamber varies widely, and is the main difference between the different types of ion drive. Particle beams, radio frequency or microwave excitation, and magnetic field agitation have all been used in different gridded ion drives over the years and across the different manufacturers. The first designs used gaseous agitation to strip electrons, but many higher-powered systems use particle (mostly electron) beams, radio frequency or microwave agitation, or cyclotron resonance to strip the electrons off the atoms. The efficiency of the ionization chamber, and its capacity, define how much propellant mass flow is possible, which is one of the main limiting factors for the overall thrust of the thruster.

3 grid ion schematic
Schematic of 3 Grid Ion engine, image courtesy ESA

After being ionized, the gas and plasma are separated, using a negatively charged grid to extract the positively charged ions, leaving the neutral gas in the ionization chamber to be ionized. In most modern designs, this is also the beginning of the acceleration process. Often, two or three grids are used, and the term “ion optics” is often used instead of “grids,” because these structures not only extract and accelerate the plasma, they shape its beam as well. The amount of charge, and the geometry, of these grids define the exhaust velocity of the ions, and the desired specific impulse of the thruster largely determines the charge applied to these screens. Many US designs use a more highly charged inner screen to ensure better separation of the ions, with the charge potential difference between this grid and the second accelerating the ions. Because of this, the first grid is often called the extractor, and the second the accelerator grid. The charge potential possible on each grid is another major limit on the possible power level – and therefore the maximum exhaust velocity – of these thrusters.
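To put rough numbers on the relationship between grid potential and exhaust velocity, here’s a minimal sketch in Python. The ~1100 V net beam potential and singly charged xenon are illustrative assumptions on my part (roughly NSTAR-class), not figures from any particular thruster’s documentation:

```python
import math

ELEMENTARY_CHARGE = 1.602176634e-19   # coulombs
ATOMIC_MASS_UNIT = 1.66053906660e-27  # kg

def exhaust_velocity(net_voltage, ion_mass_amu, charge_state=1):
    """Ideal exhaust velocity (m/s) of an ion falling through the net
    accelerating potential between the grids: qV = (1/2) m v^2."""
    q = charge_state * ELEMENTARY_CHARGE
    m = ion_mass_amu * ATOMIC_MASS_UNIT
    return math.sqrt(2.0 * q * net_voltage / m)

# Singly charged xenon (131.29 amu) through an assumed ~1100 V net potential:
v = exhaust_velocity(1100.0, 131.29)
print(f"{v / 1000:.1f} km/s")  # ~40 km/s, before divergence and utilization losses
```

Real beam velocities come in somewhat below this ideal figure, since beam divergence and incomplete propellant ionization both eat into performance.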

Wear Pattern on grids CSSA
Idealized wear pattern of a grid. Image Sangregorio et al CSAA 2018

These screens are also one of the main limits on the thruster’s lifetime, since the ions will impact the grid to some degree as they flow past (although the difference in charge potential between the apertures and the structure of the grid tends to minimize this). With many of the early gridded ion thrusters that used highly reactive propellants, chemical interactions could change the conductivity of the grid surfaces, cause more rapid erosion, and produce other problems; the transition to noble gas propellants has made this less of an issue. Finally, the geometry of the grids has a huge impact on the direction and velocity of the ions themselves, so there’s a wide variety of options available through the manipulation of this portion of the thruster as well.

At the end of the drive cycle, as the ions leave the thruster, a spray of electrons is added to the propellant stream to prevent the spacecraft from becoming negatively charged over time, and thereby attracting some of the propellant back toward itself through the same electrostatic effect that was used to accelerate it in the first place. Problems with incomplete ion stream neutralization were common in early electrostatic thrusters, and with the cesium and mercury propellants used in those thrusters, chemical contamination of the spacecraft became an issue for some missions. Incomplete neutralization is still a concern for some thruster designs, although experiments in the 1970s showed that a spacecraft can ground itself without the ion stream if the differential charge becomes too great. In three-grid systems (or four; more on that concept later), the final grid takes the place of this electron beam, and ensures better neutralization of the plasma beam, as well as greater possible exhaust velocity.

Gridded ion thrusters offer very attractive specific impulse, in the range of 1500-4000 seconds, corresponding to exhaust velocities of up to about 40 km/s for typical designs. The other side of the coin is their low thrust, generally in the range of 20-250 milliNewtons, which is a mission planning constraint, but isn’t a major show-stopper for many applications. An advanced concept from the Australian National University and the European Space Agency, the Dual Stage 4 Grid (DS4G) thruster, achieved far higher exhaust velocities by using a staged gridded ion thruster, up to 210 km/s.
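The specific impulse and thrust figures above are tied together by two textbook relations: exhaust velocity is specific impulse times standard gravity, and the minimum (“jet”) power a thruster needs is half the thrust times the exhaust velocity. A quick sketch; the 92 mN / 3100 s operating point is an illustrative, roughly NSTAR-class assumption of mine:

```python
G0 = 9.80665  # standard gravity, m/s^2

def exhaust_velocity_from_isp(isp_seconds):
    """Effective exhaust velocity (m/s) from specific impulse."""
    return isp_seconds * G0

def jet_power(thrust_newtons, isp_seconds):
    """Minimum beam ('jet') power in watts: P = T * v_e / 2."""
    return 0.5 * thrust_newtons * exhaust_velocity_from_isp(isp_seconds)

# The 1500-4000 s isp range above corresponds to exhaust velocities of:
lo = exhaust_velocity_from_isp(1500) / 1000
hi = exhaust_velocity_from_isp(4000) / 1000
print(f"{lo:.0f}-{hi:.0f} km/s")  # ~15-39 km/s

# An illustrative 92 mN, 3100 s operating point needs at least:
print(f"{jet_power(0.092, 3100) / 1000:.2f} kW")  # ~1.4 kW of beam power, before losses
```

This is also why high isp and high thrust pull against each other at fixed power: for the same input power, pushing the exhaust velocity up forces the thrust down.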

Past and Current Gridded Ion Thrusters

sert1
SERT 1 Gridded Ion Thruster, image courtesy NASA

These drive systems have been used on a number of different missions over the years, starting with the SERT missions mentioned in the history of electric propulsion post, and continuing on an experimental basis through the Deep Space 1 technology demonstration mission – the first spacecraft to use ion propulsion as its main form of propulsion. That same thruster design, NSTAR, is still in use today on the Dawn mission, studying the minor planet Ceres. Hughes Aircraft also developed a number of thrusters for station-keeping on their geosynchronous satellite buses (the XIPS thruster).

Hayabusa
Hayabusa probe, image courtesy JAXA

JAXA used this type of drive system for their Hayabusa asteroid sample return mission, but that thruster used microwaves to ionize the propellant. The thrusters operated successfully throughout the mission’s life, and propelled the first spacecraft to return a sample from an asteroid back to Earth.

ESA has used different variations of this thruster on multiple satellites as well, all of which have been radio frequency ionization types. The ArianeGroup RIT-10 has been used on multiple missions, and the QinetiQ T5 thruster was used successfully on the GOCE mission mapping the Earth’s gravity field.

NASA certainly hasn’t given up on further developing this technology. The NEXT thruster produces three times the thrust of the NSTAR thruster, although it operates on similar principles. The testing regime for this thruster has been completed, demonstrating a specific impulse of 4150 s and 236 mN of thrust over a testing life of over 48,000 hours, and it is currently awaiting a mission to fly on. It has also been a testbed for new designs and materials for many of the drive system components, including a new hollow cathode made of LaB6 (lanthanum hexaboride) and several new screen materials.

HiPEP: NASA’s Nuclear Ion Propulsion System

HiPEP Prefire, Foster 2004
HiPEP Being Readied for Test, image courtesy NASA

Another NASA project in gridded ion propulsion, although one that has since been canceled, is far more germane to the specific use of nuclear electric propulsion: the High Power Electric Propulsion drive (HiPEP) for the Jupiter Icy Moons Orbiter (JIMO) mission. JIMO was an NEP-propelled mission to Jupiter, canceled in 2005, meant to study Europa, Ganymede, and Callisto (this mission will get an in-depth look later in this blog series on NEP). HiPEP used two types of ionization chamber. The first was Electron Cyclotron Resonance ionization, which uses the magnetic containment of the ionization chamber to move the small number of free electrons present in any gas in circles, then applies microwaves tuned to resonate with these circling electrons to more efficiently ionize the xenon gas. The second was direct current ionization, using a hollow cathode to strip off electrons; this has additional problems with cathode failure, and so is the less preferred option. Cathode failure of this sort is another major failure point for ion drives, so being able to eliminate it is a significant advantage, but the microwave system consumes more power, so in less energy-intensive applications it’s often not used.

HiPEP Schematic Foster 2004
HiPEP Schematic with Neutralizer, Foster et al 2004

One very unusual thing about this system is its shape: rather than the typical circular discharge chamber and grids, this system uses a rectangular configuration. The designers note that not only does this make the system more compact when stacking multiple units together (reducing the structural, propellant, and electrical feed system mass requirements for the full system), it also means that the current density across the grids can be lower for the same electrostatic potential, reducing current erosion in the grids. This means that the grid can support a 100 kg/kW throughput margin for both of the isp configurations that were studied (6000 and 8000 s). The longest unsupported span of grid can be reduced as well, preventing issues like thermal deformation, launch vibration damage, and electrostatic attraction between the grids and either the propellant or the back of the ionization chamber itself. The fact that it makes the system more scalable from a structural engineering standpoint is one final benefit of this geometry.

As the power of the thruster increases, so do the beam neutralization requirements. In this case, up to 9 Amperes of continuous throughput are required, which is very high compared to most systems. This means that the neutralizing beam has to be both powerful and reliable. While the HiPEP team discuss using a common neutralization system for tightly packed thrusters, the baseline design is a fairly typical hollow cathode, similar to what was used on the NSTAR thruster, but with a rectangular cross section rather than a circular one to accommodate the different thruster geometry. Other concepts, like using microwave beam neutralization, were also discussed; however, due to the success and long life of this type of system on NSTAR, the designers felt that this would be the most reliable way to deal with the high throughput requirements that this system requires.

HiPEP DC 34 kW
HiPEP operating at 34 kW, Foster et al 2004

HiPEP consistently met its program guidelines, for both engine thrust efficiency and erosion. Testing was conducted at both 2.45 and 5.85 GHz for the microwave ionization system, and was successfully concluded. The 2.45 GHz test, at 16 kW of power, achieved a specific impulse of 4500-5500 seconds, clearing the way for the higher-powered microwave emitter to be used. The 5.85 GHz ionization chamber was tested at multiple power loads, from 9.7 to 39.3 kW, achieved a maximum specific impulse of 9620 s, and showed a clear increase in thrust with power, up to nearly 800 mN.
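Those test numbers also let us sanity-check the ~9 A neutralizer requirement mentioned earlier. Assuming singly charged xenon and complete propellant utilization (both simplifying assumptions on my part), the beam current implied by ~800 mN at 9620 s works out like this:

```python
G0 = 9.80665                              # standard gravity, m/s^2
E = 1.602176634e-19                       # elementary charge, C
XE_ION_MASS = 131.29 * 1.66053906660e-27  # kg, singly charged xenon

thrust = 0.8    # N, "nearly 800 mN" from the 5.85 GHz test
isp = 9620.0    # s, maximum specific impulse achieved

v_exhaust = isp * G0              # effective exhaust velocity, ~94 km/s
mass_flow = thrust / v_exhaust    # kg/s of propellant, from T = mdot * v_e
ions_per_second = mass_flow / XE_ION_MASS
beam_current = ions_per_second * E  # amperes of ion beam to neutralize

print(f"{mass_flow * 1e6:.1f} mg/s, {beam_current:.1f} A")  # ~8.5 mg/s, ~6.2 A
```

That ~6 A of beam current sits under the 9 A continuous capacity quoted for the neutralizer, which is what you’d want, with margin left over for doubly charged ions and off-nominal operation.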

Sadly, with the cancellation of JIMO (a program we will continue to come back to frequently as we continue looking at NEP), the need for a high powered gridded ion thruster (and the means of powering it) went away. Much like the fate of NERVA, and almost every nuclear spacecraft ever designed, the canceling of the mission it was meant to fly on sounded the death knell of the drive system. However, HiPEP remains on the books as an attractive, powerful gridded ion drive, for when an NEP spacecraft becomes a reality.

DS4G: Fusion Research-Inspired, High-isp Drives to Travel to the Edge of the Solar System

DS4G Photo
DS4G Thruster, all images Bramanti et al 2006

The Dual Stage 4 Grid (DS4G) ion drive is perhaps the most efficient electric drive system ever proposed, offering specific impulse well over 10,000 seconds. While there are some drive systems that offer higher isp, they’re either rare concepts (like the fission fragment rocket, a concept that we’ll cover in a future post), or have difficulties in the development process (such as Orion derivatives, which run afoul of nuclear weapons test bans and treaty limitations concerning the use of nuclear explosives in space).

 

DS4G Diagram
Cutaway DS4G Diagram, with ionization chamber at the top

So how does this design work? Traditional ion drives either use two grids (like the HiPEP drive), combining the extraction and acceleration stages in those grids and then using a hollow cathode or electron emitter to neutralize the beam, or use three grids, where the third grid takes the place of the hollow cathode. In either case, these are very closely spaced grids, which has its advantages, but also a couple of disadvantages: combining the extraction and acceleration systems forces a compromise between extraction efficiency and acceleration capability, and the close spacing limits the acceleration possible. The DS4G, as the name implies, does things slightly differently: there are two pairs of grids. Each pair is close to its partner but farther from the other pair, allowing for a greater acceleration length, and therefore higher exhaust velocity; the separation between the extraction pair and the acceleration pair also allows each to be better optimized for its individual purpose. As an added benefit, the plasma beam is better collimated than that of a traditional ion drive, which means the drive uses its propellant mass more efficiently, increasing the specific impulse even further.

DS4G Diagram of Principle
DS4G Concept diagram (above) as compared to a 3-grid ion thruster (bottom)

3 grid ion schematic

This design didn’t come out of nowhere, though. In fact, most tokamak-type fusion reactors use a device very similar to an ion drive to accelerate beams of hydrogen to high velocities, but in order to get through the intense magnetic fields surrounding the reactor, the beam atoms can’t be ionized. This means that a very effective neutralizer needs to be attached to the back of what’s effectively an ion drive… and these designs all use four screens, rather than three. Dr. David Fearn knew of these devices, and decided to try to adapt them to space propulsion, with the help of ESA, leading to a 2005 test-bed prototype in collaboration with the Australian National University. An RF ionization system was designed for the plasma production unit, and a 35 kV electrical system was designed for the thruster prototype’s ion optics. This was not optimized for in-space use; rather, it was used as a low-cost test-bed for optics geometry testing and general troubleshooting of the concept. Another benefit of this design is a higher-than-usual thrust density of 0.86 mN/cm^2, which was seen in the second phase of testing.

Two rounds of highly successful testing were done at ESA’s CORONA test chamber in 2005 and 2006, the results of which can be seen in the tables below. The first test series used a single-aperture design, which, while highly inefficient, was good enough to demonstrate the concept; this was later upgraded to a 37-aperture design. The final test results in 2006 showed impressive specific impulse (14,000-14,500 s), thrust (2.7 mN), and electrical, mass, and total efficiency (0.66, 0.96, and 0.63, respectively). The team is confident that total efficiencies of about 70% are possible with this design, once optimization is complete.
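As a quick cross-check of those reported figures, an ion thruster’s total efficiency is, to first order, the product of its electrical efficiency and its mass utilization efficiency. This little sketch just verifies that the three numbers are self-consistent:

```python
electrical_efficiency = 0.66  # fraction of input power ending up in the beam
mass_utilization = 0.96       # fraction of propellant actually ionized and accelerated

# To first order, total efficiency is the product of the two
# (beam divergence losses, small here, would multiply in as well):
total_efficiency = electrical_efficiency * mass_utilization
print(f"{total_efficiency:.2f}")  # 0.63, matching the reported total
```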

DS4G ESTEC Test Results / DS4G Round 2 Testing

There remain significant engineering challenges, but nothing that’s incredibly different from any other high powered ion drive. Indeed, many of the complications concerning ion optics, and electrostatic field impingement in the plasma chamber, are largely eliminated by the 4-grid design. Unfortunately, there are no missions that currently have funding that require this type of thruster, so it remains on the books as “viable, but in need of some final development for application” when there’s a high-powered mission to the outer solar system.

Cesium Contact Thrusters: Liquid Metal Fueled Gridded Ion Drives

As we saw in our history of electric propulsion blog post, many of the first gridded ion engines were fueled with cesium (Cs). These systems worked well, and the advantages of having an easily storable, easily ionized, non-volatile propellant (in vapor terms, at least) were significant. However, cesium is also a reactive metal, and is toxic to boot, so by the end of the 1970s development on this type of thruster was stopped. As an additional problem, due to the inefficient and incomplete beam neutralization with the cathodes available at the time, contamination of the spacecraft by the Cs ions (as well as loss of thrust) were a significant challenge for the thrusters of the time.

Perhaps the most useful part of this type of thruster to consider is the propellant feed system, since it can be applied to many different low-melting-point metals. The propellant itself was stored as a liquid in a porous metal sponge made out of nickel, which was attached to two tungsten resistive heaters. By adjusting the size of the pores of the sponge (called Feltmetal in the documentation), the flow rate of the Cs is easily, reliably, and simply controlled. Wicks of graded-pore metal sponges were used to draw the Cs to a vaporizer, made of porous tungsten and heated with two resistive heaters. This then fed to the contact ionizer, and once ionized the propellant was accelerated using two screens.

As we’ll see in the propellant section, after looking at Hall effect thrusters, Cs (as well as other metals, such as barium) could have a role to play in the future of electric propulsion, and looking at the solutions of the past can help develop ideas for the future.

Hall Effect Thrusters

When the US was beginning to investigate the gridded ion drive, the Soviet Union was investigating the Hall Effect thruster (HET). This is a very similar concept in many ways to the ion drive, in that it uses the electrostatic effect to accelerate propellant, but the mechanism is very different. Rather than using a system of electrically charged grids to produce the electrostatic potential needed to accelerate the ionized propellant, in a HET the plasma itself sustains the accelerating field through the Hall effect, discovered in the 1870s by Edwin Hall. In these thrusters, the backplate functions as both a gas injector and an anode. A radial magnetic field, produced by a set of radial solenoids and a central solenoid, traps the electrons that have been stripped off the propellant as it becomes ionized (mostly through electron impact), forming a circulating Hall current in the plasma; this trapped electron cloud provides the electrostatic potential used to accelerate the propellant and produce thrust. After the ions are ejected out of the thruster, a hollow cathode very similar to the one used in the ion drives we’ve been looking at neutralizes the plasma beam, for the same reasons this is done on an ion drive system (this cathode is also the source of approximately 10% of the propellant mass flow). Cathodes are commonly mounted external to the thruster on a small arm; however, some designs – especially modern NASA designs – use a central cathode instead.

The list of propellants used tends to be similar to those of other ion drives: xenon is the most common, with krypton, argon, iodine, bismuth, magnesium, and zinc (along with some others, such as NaK) also having been used. While this type of system has a lower average specific impulse (1500-3000 s) than the gridded ion drives, it has more thrust (a typical modern drive uses 1.35 kW of power to generate 83 mN of thrust), meaning that it’s very good for orbital inclination maintenance or reaction control on commercial satellites.
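We can back out the total efficiency implied by that typical operating point. The specific impulse isn’t given above, so I’m assuming 1600 s (a mid-range, SPT-100-class value) purely for illustration:

```python
G0 = 9.80665  # standard gravity, m/s^2

def total_efficiency(thrust_n, isp_s, input_power_w):
    """Jet power delivered to the beam divided by electrical input power."""
    v_exhaust = isp_s * G0
    jet_power = 0.5 * thrust_n * v_exhaust
    return jet_power / input_power_w

# 83 mN at 1.35 kW, with an assumed (illustrative) 1600 s specific impulse:
eff = total_efficiency(0.083, 1600.0, 1350.0)
print(f"{eff:.0%}")  # ~48%, in line with the ~50-60% quoted below for commercial units
```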

There are a number of types of Hall effect thruster, the most common being the Thruster with Anode Layer (TAL), the Stationary Plasma Thruster (SPT), and the cylindrical Hall thruster (CHT). The cylindrical thruster is optimized for low-power applications, such as cubesats, and I haven’t seen a high-power design, so we aren’t really going to go into those. There are two obvious differences between the TAL and SPT designs:

  1. What the walls of the acceleration chamber are made out of: the TAL uses conductive metallic walls, while the SPT uses an insulator (usually boron nitride), which has the effect of the TAL having higher electron velocities in the plasma than the SPT.
  2. The length of the acceleration zone, and its impact on ionization behavior: the TAL has a far shorter acceleration zone than the SPT (sort of; see Choueiri’s analytical comparison of the two systems for dimensional vs non-dimensional characteristics http://alfven.princeton.edu/publications/choueiri-jpc-2001-3504). Since the walls of the acceleration zone are a major lifetime limiter for any Hall effect thruster, there’s an engineering trade-off here for the designer (or customer) of an HET to consider.

There’s a fourth type of thruster as well, the External Discharge Plasma Thruster, which doesn’t have a physically constrained acceleration zone, and which we’ll also look at; but as far as I’ve been able to find there are very few designs, most of them operating at low voltage, so they, too, aren’t as attractive for nuclear electric propulsion.

Commercially available HETs generally have a total efficiency in the range of 50-60%; however, every thruster I’ve seen increases in efficiency as power increases, up to its design limit, so higher-powered systems, such as ones that would be used on a nuclear electric spacecraft, would likely have higher efficiencies. Some designs, such as the dual stage TAL thruster that we’ll look at, approach 80% efficiency or better.

SPT Hall Effect Thrusters

SPT Hall cutaway rough
SPT type thruster

Stationary Plasma Thrusters use an insulating material for the propellant channel immediately downstream of the anode. This means that the electrostatic potential in the drive can be further separated than in other thruster designs, leading to greater separation of ionized vs. non-ionized propellant, and therefore potentially more complete ionization – and therefore more thrust efficiency. While they have been proposed since the beginning of research into Hall effect thrusters in the Soviet Union, the lack of an effective, cost-efficient insulator able to survive long enough to allow a useful thruster lifetime was a major limitation in early designs, leading to an early focus on the TAL.

SPT Liu et al
SPT Diagram, Liu et al 2010

The SPT has the greatest depth between the gas diffuser (or propellant injector) and the nozzle of the thruster. This is nice, because it gives volume and distance to work with in terms of propellant ionization. The ionized propellant is accelerated toward the nozzle, and the not-yet-ionized portion can still be ionized, even as the plasma component scoots it toward the nozzle by bouncing off the un-ionized atoms like billiard balls. Because of this, SPT thrusters can have much higher propellant ionization percentages than the other types of Hall effect thruster, which directly translates into greater thrust efficiency. This extended ionization chamber is made out of an electromagnetic insulator, usually boron nitride, although Borosil, a solution of BN and SiO2, is also used. Other materials, such as nanocrystalline diamond, graphene, and a new material called ultra-BN (boron nitride built by plasma-assisted chemical vapor deposition), have also been proposed and tested.

Worn SPT-30, NASA 1998
SPT-30 Hall thruster post testing, image courtesy NASA

The downside to this type of thruster is that the insulator is eroded during operation. Because erosion of the propellant channel is the main lifetime limiter for this type of thruster, the longer propellant channel is a detriment to thruster lifetime. Improved materials for the insulator cavity are a major research focus, but replacing boron nitride is going to be a challenge, because it’s advantageous for a Hall effect thruster (and also in the other application we’ve looked at for it, reactor shielding) in a number of ways: in addition to being a good electromagnetic insulator, it’s incredibly strong and very thermally conductive. The only major downside is its expense, especially when forming it into single, large, complex shapes; so, often, SPT thrusters have two boron nitride inserts: one at the base, near the anode, and another at the “waist,” or start of the nozzle, of the SPT thruster. Inconsistencies in the composition and conductivity of the insulator can lead to plasma instabilities in the propellant due to local magnetic field gradients, which can cause losses in ionization efficiency. Additionally, as the magnetic field strength increases, plasma instabilities develop in proportion to the total field strength along the propellant channel.

Another problem that surfaces with these sorts of thrusters is that under high power, electrical arcing can occur, especially in the cathode or at a weak point in the insulator channel. This is especially true for a design that uses a segmented insulator lining for the propellant channel.

HERMeS: NASA’s High Power Single Channel SPT Thruster

Dr. Peterson with TAL-2 HERMeS Test Bed, image courtesy NASA

The majority of NASA’s research into Hall thrusters is currently focused on the Advanced Electric Propulsion System, or AEPS. This is a solar electric propulsion system which encompasses the power generation and conditioning equipment, as well as a 14 kW SPT thruster known as HERMeS, or the Hall Effect Rocket with Magnetic Shielding. Originally meant to be the primary propulsion unit for the Asteroid Redirect Mission, the AEPS is currently planned for the Power and Propulsion Element (PPE) for the Gateway platform (formerly Lunar Gateway and LOP-G) around the Moon. Since the power and conditioning equipment would be different for a nuclear electric mission, though, our focus will be on the HERMeS thruster itself.

This thruster is designed to operate as part of a 40 kW system, meaning that three thrusters will be clustered together (complications in clustering Hall thrusters will be covered later as part of the Japanese RAIJIN TAL system). Each thruster has a central hollow cathode, and is optimized for xenon propellant.

Many materials technologies are being experimented with in the HERMeS thruster. For instance, there are two different hollow cathodes being experimented with: LaB6 (which was experimented with extensively for the NEXT gridded ion thruster) and barium oxide (BaO). Since the LaB6 was already extensively tested, the program has focused on the BaO cathode. Testing is still underway for the 2000 hour wear test; however, the testing conducted to date has confirmed the behavior of the BaO cathode. Another example is the propellant discharge channel: normally boron nitride is used for the discharge channel, however the latest iteration of the HERMeS thruster is using a boron nitride-silicon (BN-Si) composite discharge channel. This could potentially improve the erosion effects in the discharge channel, and increase the life of the thruster. As of today, the differences in plasma plume characterization are minimal to the point of being insignificant, and erosion tests are similarly inconclusive; however, theoretically, BN-Si composite could improve the lifetime of the thruster. It is also worth noting that, as with any new material, it takes time to fully develop the manufacture of the material to optimize it for a particular use.

As of the latest launch estimates, the PPE is scheduled to launch in 2022, and all development work of the AEPS is on schedule to meet the needs of the Gateway.

Nested Channel SPT Thrusters: Increasing Power and Thrust Density

Nested HET
Nested SPT Thruster, Liang 2013

One concept that has grown more popular recently (although it's far from new) is to increase the number of propellant channels in a single thruster, in what's called a nested channel Hall thruster. Several designs have used two nested channels. While there are a number of programs investigating nested Hall effect thrusters, including in Japan and China, we'll use the X2, studied at the University of Michigan, as an example. While this design has been supplanted by the X3 (more on that below), many of the questions about the operation of these types of thrusters were addressed by experimenting with the X2 thruster. Generally speaking, the amount of propellant flow in the different channels is proportional to the surface area of the emitter anode, and the power and flow rate of the centrally mounted cathode are adjusted to match whether one or multiple channels are firing. Since these designs often use a single central cathode despite having multiple anodes, a lot of development work has gone into improving the hollow cathodes for increased life and power capability. None of the designs I saw used external cathodes, like those sometimes seen with single-channel HETs, but that may simply reflect the design philosophies of the institutions (primarily JPL and the University of Michigan) whose papers on this type of design I was able to access.

Nested SPT Tradeoffs Liang
Image Liang 2013

There are a number of advantages to the nested-channel design. Not only is it possible to get more propellant flow from less mass and volume, but the thruster can be throttled as well. For higher thrust operation (such as rapid orbital changes), both channels are fired at once, and the mass flow through the cathode is increased to match. By turning off the central channel and leaving the outer channel firing, a medium “gear” is possible, with mass flow similar to a typical SPT thruster. The smallest channel can be used for the highest-isp operation for interplanetary cruise operations, where the lower mass flow allows for greater exhaust velocities.
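To make the "gears" analogy concrete, here's a quick Python sketch of the thrust/isp trade at fixed power. The power levels, isp values, and 55% efficiency below are illustrative assumptions of mine, not X2 test data:

```python
# Illustrative only: at a fixed jet power, thrust and specific impulse trade
# off inversely via T = 2 * eta * P / (g0 * Isp). All numbers are assumed.
G0 = 9.80665  # standard gravity, m/s^2

def thrust_n(power_w: float, isp_s: float, efficiency: float) -> float:
    """Thrust for a given input power, specific impulse, and total efficiency."""
    return 2.0 * efficiency * power_w / (G0 * isp_s)

# Three notional "gears" for a two-channel nested thruster:
modes = {
    "both channels (high thrust)": (10_000, 1500),   # power (W), Isp (s)
    "outer channel only":          (6_000, 2000),
    "inner channel only (cruise)": (2_000, 2800),
}
for name, (power, isp) in modes.items():
    print(f"{name}: {thrust_n(power, isp, 0.55) * 1000:.0f} mN at {isp} s")
```

The point the sketch makes is the same one as the text: the highest-thrust mode comes at the cost of specific impulse, and vice versa.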

X2 Different Channel Operation
Ignition sequence for dual channel operation in X2, Liang 2013

A number of important considerations were studied during the X2 program, including anode efficiency during the different modes of operation (a slight decrease in efficiency during two-channel operation, with the highest efficiency during inner-channel-only operation), interactions between the plasma plumes (harmonic oscillations were detected at 125 and 150 V, more frequently in the outer channel, but not at 200 V operation, indicating some cross-channel interactions that would need to be characterized in any design), and power-to-thrust efficiency (slightly higher during two-channel operation compared to the sum of each channel operating independently, for reasons that couldn't be fully characterized). The success of this program led to its direct successor, which is currently under development by the University of Michigan, Aerojet Rocketdyne, and NASA: the X3 SPT thruster.

X3 on Test Stand
X3 Three Channel Nested SPT on test stand, Hall 2018

The X3 is a 100 kWe design that uses three nested discharge chambers. The cathode for this thruster is a centrally mounted hollow cathode, which accounts for 7% of the total gas flow of the thruster under all modes of operation. Testing during 2016 and 2017 ranging from 5 to 102 kW, 300 to 500 V, and 16 to 247 A, demonstrated a specific impulse range of 1800 to 2650 s, with a maximum thrust of 5.4 N. As part of the NextSTEP program, the X3 thruster is part of the XR-100 electric propulsion system that is currently being developed as a flexible, high-powered propulsion system for a number of missions, both crewed and uncrewed.
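Those figures can be sanity-checked with the standard relation between thrust, specific impulse, and input power:

```python
# Back-of-envelope check of the X3 numbers quoted above (5.4 N max thrust,
# up to 2650 s Isp, ~102 kW): implied efficiency from eta = T * g0 * Isp / (2 * P).
G0 = 9.80665  # standard gravity, m/s^2

def thrust_efficiency(thrust_n: float, isp_s: float, power_w: float) -> float:
    return thrust_n * G0 * isp_s / (2.0 * power_w)

eta = thrust_efficiency(5.4, 2650, 102_000)
print(f"Implied X3 efficiency at full power: {eta:.0%}")
```

This lands around two-thirds, a plausible value for a large Hall thruster, so the reported numbers hang together.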

LaB6 Cathode
LaB6 Cathode, Hall 2018

While this thruster is showing a great deal of promise, there remain a number of challenges to overcome. One of the biggest is cathode efficiency, which was shown to be only 23% during operation of just the outermost channel. This is a heavy-duty cathode, rated to 120 A. Due to the concerns of erosion, especially under high-power, high-flow conditions, there are three different gas injection points: through the central bore of the cathode (limited to 20 sccm), external flow injectors around the cathode keeper, and supplementary internal injectors.

The cross-channel thrust increases seen in the X2 thruster weren’t observed, meaning that this effect could have been something particular to that design. In addition, due to the interactions between the different magnetic lenses used in each of the discharge channels, the strength and configuration of each magnetic field has to be adjusted depending on the other channels that are operating, a challenge that increases with magnetic field strength.

Finally, the BN insulator was shown to expand in earlier tests to the point that a gap was formed, allowing arcing to occur from the discharge plasma to the body of the thruster. Not only does this mean that the plasma is losing energy – and therefore decreasing thrust – but it also heats the body of the thruster as well.

These challenges are all being addressed, and in the next year the 100-hour, full power test of the system will be conducted at NASA’s Glenn Research Center.

X3 Firing Modes
X3 firing in all anode configurations: 1. Inner only, 2. Middle only, 3. Outer only, 4. Inner and middle, 5. Middle and outer, 6. Inner and outer, 7. Inner, middle and outer; Florenz 2013

TAL Hall Effect Thrusters

Early USSR TAL, Kim et al
Early TAL concept, Kim et al 2007

The TAL concept has been around since the beginning of the development of the Hall thruster. In the USSR, the development of the TAL was tasked to the Central Research Institute for Machine Building (TsNIIMash). Early challenges with the design led to it not being explored as thoroughly in the US. In Europe and Asia, however, this sort of thruster has been a major focus of research for a number of decades, and recently the US has increased its study of this design as well. Since these designs have (as a general rule) higher power requirements for operation, they have not been used nearly as much as the SPT-type Hall thruster, but for high-powered systems they offer a lot of promise.

As we mentioned before, the TAL uses a conductor for the walls of the plasma chamber, meaning that the radial current across the plasma is continuous through the acceleration chamber of the thruster. Because of the high magnetic fields in this type of thruster (0.1-0.2 T), the electron cyclotron radius is very small, allowing for more efficient ionization of the propellant and therefore limiting the size necessary for the acceleration zone. However, because a fraction of the ion stream is directed toward these conducting walls, leading to degradation, the lifetime of these thrusters is often shorter than that of their SPT counterparts. This is one area of investigation for designers of TAL thrusters, especially higher-powered variants.
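To see just how small that electron cyclotron (Larmor) radius is, here's a quick calculation for the 0.1-0.2 T field range quoted above. The ~10 eV electron energy is my assumption of a typical discharge value, not a figure from the post:

```python
import math

# Electron gyroradius r = m*v / (q*B), with v from an assumed ~10 eV electron energy.
M_E = 9.109e-31   # electron mass, kg
Q_E = 1.602e-19   # elementary charge, C

def larmor_radius_m(energy_ev: float, b_tesla: float) -> float:
    v = math.sqrt(2.0 * energy_ev * Q_E / M_E)  # electron speed from kinetic energy
    return M_E * v / (Q_E * b_tesla)

r = larmor_radius_m(10.0, 0.15)
print(f"Electron gyroradius at 0.15 T: {r * 1e3:.3f} mm")
```

The result is well under a tenth of a millimetre, which is why the electrons stay trapped in the Hall current and the acceleration zone can be kept so short.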

As a general rule, TAL thrusters have lower thrust, but higher isp, than SPT thrusters. Since the majority of Hall thrusters are used for station-keeping, where thrust levels are a significant design consideration, this has also militated in favor of wider deployment of the SPT thruster.

High-Powered TAL Development in Japan: Clustered TAL with a Common Cathode

RAIJIN94, Hamada 2017
RAIJIN94, Hamada et al 2017

One country that has been doing a lot of development work on the TAL thruster is Japan. Most of their designs seem to be in the 5 kW range, and are being designed to operate clustered around a single cathode for charge neutralization. The RAIJIN program (Robust Anode-layer Intelligent Thruster for Japanese IN-space propulsion system) has been focusing on addressing many of the issues with high-powered TAL operation, mainly for raising satellites from low Earth orbit to geosynchronous orbit (an area that has a large impact on the amount of propellant needed for many satellite launches today, and directly applicable to de-orbiting satellites as well). The RAIJIN94 thruster is a 5 kW TAL thruster under development by Kyushu University, the University of Tokyo, and the University of Miyazaki. Overall targets for the program are for a thruster that operates at 6 kW, providing 360 mN of thrust at 1700 s isp, with an anode mass flow rate of 20 mg/s and a cathode flow rate of 1 mg/s. The ultimate goal of the program is a 25 kW TAL system containing five of these thrusters with a common cathode. Based on mass penalty analysis, this is a more mass-efficient approach for a TAL than a single large thruster with its increased thermal management requirements. Management of anode and conductor erosion is a major focus of this program, but not one that has been written about extensively. The limiting of the thruster power to about 5 kW, though, seems to indicate that scaling a traditional TAL beyond this size, at least with current materials, is impractical.
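Those targets are roughly self-consistent, as a quick check with the thrust equation shows:

```python
# Consistency check of the RAIJIN94 targets quoted above: thrust, Isp, and
# total mass flow are linked by T = mdot * g0 * Isp.
G0 = 9.80665                # standard gravity, m/s^2
mdot = 21e-6                # kg/s: 20 mg/s anode + 1 mg/s cathode flow
isp = 1700                  # s, target specific impulse
thrust = mdot * G0 * isp
print(f"Implied thrust: {thrust * 1000:.0f} mN (program target: 360 mN)")
```

The implied ~350 mN is within a few percent of the 360 mN target, so the quoted flow rate, isp, and thrust goals line up.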

Clustered HETs, Miyasaka 2013
Side by side HETs for testing with common cathode, Miyasaka et al 2013

There are challenges with this design paradigm, however, which will also impact other clustered Hall designs. Cathode performance, as we saw in the SPT section, is a concern, especially when operating at the very high power and mass flow rates that a large cluster would need. Perhaps a larger consideration was the plasma oscillations that occurred in the 20 kHz range when two thrusters were fired side by side, as was done (and continues to be done) at Gifu University. It was found that by varying the mass flow rate, operating at a slightly lower power, and maintaining a wider spacing of the thruster heads, the plasma flow instabilities could be accounted for. Experiments continue there to study this phenomenon, and the researchers, headed by Dr. Miyasaka, are confident that this issue can be managed.

Dual Stage TAL and VHITAL

Two Stage TAL

One of the most interesting concepts investigated at TsNIIMash was the dual-stage TAL, which used two anodes. The first anode is very similar to the one used in a typical TAL or SPT; it also serves as the injector for the majority of the propellant and provides the charge to ionize it. As the plasma exits this first anode, it encounters a second anode at the opening of the propellant channel, which accelerates the propellant. An external cathode is used to neutralize the beam. This design demonstrated specific impulses of up to 8000 s, among the highest (if not the highest) of any Hall thruster to date. The final iteration during this phase of research was the water-cooled TAL-160, which operated at a power consumption of 10-140 kW.

VHITAL Propellant Reservoir
VHITAL bismuth propellant feed system, Sengupta et al 2007

Another point of interest with this design is the use of bismuth as the propellant. As we'll see below, the propellant options for an electrostatic thruster are very broad, and the choice depends on a number of characteristics. Bismuth is reasonably inexpensive, relatively common, and storable as a solid. That last point can also be a headache for an electrostatic thruster, since ionized powders are notorious for sticking to surfaces and gumming up the works, as it were. In this case, since bismuth has a reasonably low melting temperature, a pre-heater was used to resistively heat the bismuth, and an electromagnetic pump was then used to propel it toward the anode. Just before injection into the thruster, a carbon vaporization plug was used to ensure proper mass flow. As long as the operating temperature of the thruster was high enough, and the mass flow was carefully regulated, this novel fueling concept was not a problem.

VHITAL 160 at TsNIIMash
VHITAL mounted to test bracket at TsNIIMash, Sengupta et al 2007

This design was later picked up in 2004 by NASA, who worked with TsNIIMash researchers to develop the VHITAL, or Very High Isp Thruster with Anode Layer, over two and a half years. While this thruster uses significantly less power (36 kW as opposed to up to 140 kW), many of the design details are the same, but with a few major differences: the NASA design is radiatively cooled rather than water cooled, it added a resistive heater to the base of the first anode as well, and tweaks were made to the propellant feed system. The original TAL-160 was used for characterization tests, and the new VHITAL-160 thruster and propellant feed system were built to characterize the system using modern design and materials. Testing was carried out at TsNIIMash in 2006, and demonstrated stable operation without using a neutralizing cathode, and expected metrics were met.

While I have been able to find a summary presentation from the time of the end of the program in the US, I have been unable to find verified final results of this program. However, 8000 s isp was demonstrated experimentally at 36 kW, with a thrust of ~700 mN and a thrust efficiency of close to 80%.
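Those reported figures are at least roughly self-consistent, which is reassuring given how little of the program's documentation survives in the open:

```python
# Cross-check of the VHITAL numbers quoted above (8000 s Isp, ~700 mN, 36 kW)
# using the standard efficiency relation eta = T * g0 * Isp / (2 * P).
G0 = 9.80665  # standard gravity, m/s^2

def thrust_efficiency(thrust_n: float, isp_s: float, power_w: float) -> float:
    return thrust_n * G0 * isp_s / (2.0 * power_w)

eta = thrust_efficiency(0.7, 8000, 36_000)
print(f"Implied thrust efficiency: {eta:.0%}")
```

The implied ~76% sits right next to the "close to 80%" figure in the summary material.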

If anyone has additional information about this program, please comment below or contact me via email!

Hybrid and Non-Traditional Hall Effect Thrusters: Because Tweaking Happens

PLAS40 Sketch
PLaS-40 thruster sketch

As we saw with the VHITAL, the traditional SPT and TAL thrusters – while the most common – are far from the only way to use these technologies. One interesting concept, studied by EDB Fakel in Russia, is a hybrid SPT-TAL thruster. SPT thrusters, due to their extended ionization chamber lined with an insulator, generally provide fuller ionization of the propellant. TAL thrusters, on the other hand, are better able to efficiently accelerate the propellant once it's ionized. So the designers at EDB Fakel, led by M. Potapenko, developed, built, and tested the PlaS-40 hybrid thruster, rated at up to 0.4 kW, and proposed and breadboard tested a larger (up to 4.5 kW) PlaS-120 thruster as well, during the 1990s (initial conception was in the early 90s, but the test model was built in 1999).

PLAS40
PLaS40 Thruster

While fairly similar in outward appearance to an SPT, the acceleration chamber was shorter. The PlaS-40 achieved 1000-1750 s isp and a thrust of 23.5 mN, while the PlaS-120 showed the capability of reaching 4000 s isp and up to 400 mN of thrust (these tests were not continued, due to a lack of funding). This design concept could offer advances in specific impulse and thrust efficiency beyond traditional thruster designs, but currently there isn't enough research to show a clear advantage.

 

Gridded Hall concept
Early EDB Fakel concept for SPT thruster with “magnetic screens.” Kim et al 2007

Another interesting hybrid design was a gridded Hall thruster, researched by V. Kim at Fakel in 1973-1975. Here, again, an SPT-type ionization chamber was used, and the screens were used to more finely control the magnetic lensing effect of the thruster. This was an early design, used because the knowledge and technology of the time didn't yet allow doing away with the grids. However, it's possible that a hybrid Hall-gridded ion thruster may offer higher specific impulse while taking advantage of the more efficient ionization of an SPT thruster. As we saw with both the DS4G and VHITAL, increasing the separation of the ionization, ion extraction, and acceleration portions of the thruster allows for greater thrust efficiency, and this may be another mechanism to do that.

One design, out of the University of Michigan, modifies the anode itself, by segmenting it into many different parts. This was done to manage plasma instabilities within the propellant plume, which cause parasitic power losses. While it’s unclear exactly how much efficiency can be gained by this, it solves a problem that had been observed since the 1960s close to the anode of the thruster. Small tweaks like this may end up changing the geometry of the thruster significantly over time as optimization occurs.

Other modifications have been made as well, including combining discharge chambers, using conductive materials for discharge chambers but retaining a dielectric ceramic in the acceleration zone of the thruster, and many other designs. Many of these were early ideas that were demonstrated but not carried through for one reason or another. For instance, the metal discharge chambers were considered an economic benefit, because the ceramic liners are the major cost-limiting factor in SPT thrusters. With improved manufacturing and availability, costs went down, and the justification went away.

There remains an incredible amount of flexibility in the Hall effect thruster design space. While two stage, nested, and clustered designs are the current most advanced high power designs, it’s difficult to guess if someone will come up with a new idea, or revisit an old one, to rewrite the field once again.

Propellants: Are the Current Propellant Choices Still Effective For High Powered Missions?

One of the interesting things to consider about these types of thrusters, both gridded ion and Hall effect, is propellant choice. Xenon is, as of today, the primary propellant used by all operational electrostatic thrusters (although some early thrusters used cesium and mercury); however, Xe is rare and reasonably expensive. In smaller Hall thruster designs, such as the 5-10 kWe thrusters used on telecommunications satellites, the propellant load (as of 1999) for many spacecraft is less than 100 kg – a significant but not exorbitant amount, and launch costs (and design considerations) make this a cost-effective decision. For larger spacecraft, such as a Hall-powered spacecraft to Mars, the propellant mass could easily be in the 20-30 ton range (assuming 2500 s isp and a 100 mg/s flow rate of Xe), which is a very different matter in terms of Xe availability and cost. Alternatives, then, become far more attractive if possible.
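To see where a figure like 20-30 tons comes from, here's a rough rocket-equation sketch. The 100 t departure mass and 6 km/s of delta-v are my illustrative assumptions, not numbers from any specific mission study:

```python
import math

# Tsiolkovsky rocket equation: propellant fraction for a given delta-v and Isp.
# The departure mass and delta-v below are assumed for illustration only.
G0 = 9.80665  # standard gravity, m/s^2

def propellant_mass_t(initial_mass_t: float, delta_v_ms: float, isp_s: float) -> float:
    ve = G0 * isp_s  # effective exhaust velocity
    return initial_mass_t * (1.0 - math.exp(-delta_v_ms / ve))

mp = propellant_mass_t(100.0, 6000.0, 2500.0)
print(f"Propellant needed: {mp:.1f} t")
```

Even at 2500 s isp, a spacecraft of that size needs on the order of 20 t of xenon, which is why propellant cost and availability start to dominate the trade.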

Argon is also an attractive option, and is often proposed as a propellant as well, being less rare. However, it's also considerably lower mass, leading to higher specific impulses but lower levels of thrust. Depending on the mission, this could be a problem if large changes in delta-vee are needed in a shorter period of time. The higher ionization energy requirements also mean that either the propellant won't be as completely ionized, leading to a loss of efficiency, or more energy is required to ionize the propellant.

The next most popular choice for propellant is krypton (Kr), the next lightest noble gas. The chemical advantages of Kr are basically identical, but there are a couple things that make this trade-off far from straightforward: first, tests with Kr in Hall effect thrusters often demonstrate an efficiency loss of 15-25% (although this may be able to be mitigated slightly by optimizing the thruster design for the use of Kr rather than Xe), and second the higher ionization energy of Kr compared to Xe means that more power is required to ionize the same amount of propellant (or with an SPT, a deeper ionization channel, with the associated increased erosion concerns). Sadly, several studies have shown that the higher specific impulse gained from the lower atomic mass of Kr aren’t sufficient to make up for the other challenges, including losses from Joule heating (which we briefly discussed during our discussion of MPD thrusters in the last post), radiation, increased ionization energy requirements, and even geometric beam divergence.
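The size of krypton's specific impulse advantage is easy to estimate: at the same acceleration voltage, exhaust velocity scales inversely with the square root of ion mass. A quick check, using the atomic masses from the table below:

```python
import math

# At equal beam voltage, v_exhaust ~ sqrt(2*q*V/m), so the ideal Isp gain of
# krypton over xenon is sqrt(m_Xe / m_Kr).
m_xe = 131.293   # xenon atomic mass
m_kr = 83.798    # krypton atomic mass
isp_gain = math.sqrt(m_xe / m_kr)
print(f"Ideal Kr Isp gain over Xe at equal voltage: {isp_gain:.2f}x")
```

That ideal ~25% gain is what gets eaten up by the 15-25% efficiency losses mentioned above, which is why the trade rarely favors krypton on performance alone.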

This has led some designers to propose a mixture of Xe and Kr propellants, to gain the advantages of lower ionization energy for part of the propellant, as a compromise solution. The downside is that this doesn’t necessarily improve many of the problems of Kr as a propellant, including Joule heating, thermal diffusion into the thruster itself, and other design headaches for an electrostatic thruster. Additionally, some papers report that there is no resonant ionization phenomenon that facilitates the increase of partial krypton utilization efficiency, so the primary advantage remains solely cost and availability of Kr over Xe.

| Propellant | Atomic Mass (Ar, std.) | Ionization Energy (1st, kJ/mol) | Density (g/cm³) | Melting Point (K) | Boiling Point (K) | Estimated Cost ($/kg) |
|---|---|---|---|---|---|---|
| Xenon | 131.293 | 1170.4 | 2.942 (BP) | 161.4 | 165.051 | 1200 |
| Krypton | 83.798 | 1350.8 | 2.413 (BP) | 115.78 | 119.93 | 75 |
| Bismuth | 208.98 | 703 | 10.05 (MP) | 544.7 | 1837 | 29 |
| Mercury | 200.592 | 1007.1 | 13.534 (STP) | 234.32 | 629.88 | 500 |
| Cesium | 132.905 | 375.7 | 1.843 (MP) | 301.7 | 944 | >5000 |
| Sodium | 22.989 | 495.8 | 0.927 (MP), 0.968 (solid) | 370.94 | 1156.09 | 250 |
| Potassium | 39.098 | 418.8 | 0.828 (MP), 0.862 (solid) | 336.7 | 1032 | 1000 |
| Argon | 39.792 | 1520.6 | 1.395 (BP) | 83.81 | 87.302 | 5 |
| NaK | Varies | Differential | 0.866 (20 °C) | 260.55 | 1445 | Varies |
| Iodine | 126.904 | 1008.4 | 4.933 (STP) | 386.85 | 457.4 | 80 |
| Magnesium | 24.304 | 737.7 | 1.584 (MP) | 923 | 1363 | 6 |
| Cadmium | 112.414 | 867.8 | 7.996 (MP) | 594.22 | 1040 | 5 |
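One useful way to read this table is ionization energy per kilogram of propellant – the minimum energy spent just making ions, before any of it goes into acceleration. A quick comparison using a few of the rows above:

```python
# First-ionization energy per kilogram of propellant, from the table above:
# (ionization energy in kJ/mol, atomic mass in g/mol).
props = {
    "Xenon":   (1170.4, 131.293),
    "Krypton": (1350.8, 83.798),
    "Bismuth": (703.0,  208.98),
    "Cesium":  (375.7,  132.905),
    "Argon":   (1520.6, 39.792),
}
# kJ/mol divided by g/mol, times 1000 g/kg, gives kJ/kg; sort cheapest-to-ionize first.
for name, (ie_kj_mol, mass_g_mol) in sorted(
        props.items(), key=lambda kv: kv[1][0] / kv[1][1]):
    print(f"{name}: {ie_kj_mol * 1000 / mass_g_mol:,.0f} kJ/kg")
```

Heavy, easily ionized species like cesium and bismuth come out several times cheaper per kilogram to ionize than xenon, and argon comes out worst – the quantitative version of the ionization-energy argument made above.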

 

Early thrusters used cesium and mercury for propellant, and for higher-powered systems these may end up being options again. As we've seen earlier in this post, neither Cs nor Hg are unknown in electrostatic propulsion (another design that we'll look at a little later is the cesium contact ion thruster); however, they've fallen out of favor. The primary reason always given is environmental and occupational health concerns: during development of the thrusters, in handling of the propellant during construction and launch, and in the immediate environment of the spacecraft. The thrusters have to be built and extensively tested before they're used on a mission, and all these experiments are a perfect way to strongly contaminate delicate (and expensive) equipment such as thrust stands, vacuum chambers, and sensing apparatus – not to mention the lab and surrounding environment in the case of an accident. Additionally, any accident that leads to the exposure of workers to Hg or Cs will be expensive and difficult to address, notwithstanding any long-term health effects of chemical exposure for the personnel involved (handling procedures have been well established, but one worker not wearing the correct personal protective equipment could be costly in terms of both personal and programmatic health).

Perfect propellant stream neutralization is something that doesn't actually occur in electrostatic drives (although this has consistently improved over time), leading to a buildup of negative charge on the spacecraft; subsequently, a portion of the positive ions used for propellant circle back around the magnetic fields and impact the spacecraft. Not only does this reduce the thrust of the spacecraft, but if the propellant is chemically active (as both Cs and Hg are), it can lead to chemical reactions with spacecraft structural components, sensors, and other systems, accelerating degradation of the spacecraft.

A while back on the Facebook group I asked the members about the use of these propellants, and an interesting discussion of their pros and cons developed (primarily between Mikkel Haaheim, my head editor and frequent contributor to this blog, and Ed Pheil, who has extensive experience in nuclear power, including the JIMO mission, and is currently the head of Elysium Industries, developing a molten chloride fast reactor). Two other options, each with its own engineering complications, were also proposed, which we'll touch on briefly: sodium and potassium both have low ionization energies and form a low-melting-temperature eutectic, so they may offer additional options for future electrostatic propellants as well. Three major factors came up in the discussion: environmental and occupational health concerns during testing, propellant cost (which is a large part of what brings us to this discussion in the first place), and tankage considerations.

As far as cost goes, this is listed in the table above. These costs are all ballpark estimates, and costs for space-qualified supplies are generally higher, but it illustrates the general costs associated with each propellant. So, from an economic point of view, Cs is the least attractive, while Hg, Kr, and Na are all attractive options for bulk propellants.

Tankage in and of itself is a simpler question than the full propellant feed system, but it can offer some insights into the overall challenges in storing and using the various propellants. Xe, our baseline propellant, has a density as a liquid of 2.942 g/cm³, Kr of 2.413, and Hg of 13.534. All other things aside, this means a given mass of Hg occupies less than a quarter of the volume of the same mass of Xe or Kr, with correspondingly lighter tankage. However, additional complications arise when considering tank material differences. For instance, both Xe and Kr require cryogenic cooling (something we discussed briefly in the LEU NTP series, which you can read here [insert LEU NTP 3 link]). While the challenges of Xe and Kr cryogenics are less difficult than H2 cryogenics due to the higher atomic mass and lower chemical reactivity, many of the same considerations still apply. Hg, on the other hand, has to be kept in a stainless steel tank (by law); other common containers, such as glass, don't lend themselves to spacecraft tank construction. However, a stainless steel liner in a carbon composite tank is a lower-mass option.
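Using the densities above, the relative tank volumes are easy to compare (volume only – real tank mass also depends on pressure, cryogenic insulation, and materials):

```python
# Tank volume per tonne of propellant, from the liquid densities quoted above
# (Xe and Kr at their boiling points, Hg at room temperature).
densities_g_cm3 = {"Xe": 2.942, "Kr": 2.413, "Hg": 13.534}

for name, rho in densities_g_cm3.items():
    # 1 tonne = 1,000,000 g; g / (g/cm^3) = cm^3; 1000 cm^3 = 1 L.
    litres_per_tonne = 1_000_000 / (rho * 1000)
    print(f"{name}: {litres_per_tonne:.0f} L per tonne")
```

Mercury needs well under a quarter of the tank volume of the noble gases per tonne, and it gets that without any cryogenic plumbing – which is the core of the tankage argument in its favor.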

The last fluid propellant to mention is NaK, a common fast reactor coolant which has been extensively studied. Many of the tankage problems with NaK are similar to those seen with Cs or Hg – chemical reactivity chief among them, though the particulars differ – but the research into NaK as a fast reactor coolant has largely addressed the immediate corrosion issues.

The main problem with NaK would be differential ionization causing plating of the higher-ionization-energy metal (Na in this case) onto the anode or propellant channels of the thruster. It may be possible to deal with this, either by shortening the propellant channel (as in a TAL or EDPT) or by ensuring full ionization through excess charge in the anode and cathode. The possibility of using NaK was studied in an SPT thruster in the Soviet Union, but unfortunately I cannot find the papers associated with these studies. However, NaK remains an interesting option for future thrusters.

Thrusters using solid propellants are generally categorized as condensable propellant thrusters. These designs have been studied for a number of decades. Most designs use a resistive heater to melt the propellant, which is then vaporized just before entering the anode. This was first demonstrated with the cesium contact gridded ion thrusters used as part of the SERT program. There (as mentioned earlier) a metal foam was used as the storage medium, kept warm enough that the cesium remained liquid. By varying the pore size, a metal wick was made which controlled the flow of the propellant from the reservoir to the ionization head. This results in a greater overall mass for the propellant tankage, but on the other hand the lack of moving parts, and the ability to ensure even heating across the propellant volume, makes this an attractive option in some cases.

A more recent design that we also discussed (the VHITAL) uses bismuth propellant for a TAL thruster, a NASA update of a Soviet TsNIIMash design from the 1970s (which was shelved due to the lack of high-powered space power systems at the time). This design uses a reservoir of liquid bismuth, which is resistively heated to above the melting temperature. An argon pressurization system is used to force the liquid bismuth through an outlet, where it’s then electromagnetically pumped into a carbon vaporization plug. This then discharges into the anode (which in the latest iteration is also resistively heated), where the Hall current then ionizes the propellant. It may be possible with this design to use multiple reservoirs to reduce the power demand for the propellant feed system; however, this would also lead to greater tankage mass requirements, so it will largely depend on the particulars of the system whether the increase in mass is worth the power savings of using a more modular system. This propellant system was successfully tested in 2007, and could be adapted to other designs as well.

Other propellants have been proposed as well, including magnesium, iodine, and cadmium. Each has its advantages and disadvantages in tankage, chemical reactivity (limiting thruster materials choices), and other factors, but all remain possible for future thruster designs.

For the foreseeable future, most designs will continue to use xenon, with argon being the next most popular choice, but as the amount of propellant needed increases with the development of nuclear electric propulsion, it’s possible that these other propellant options will become more prominent as tankage mass, propellant cost, and other considerations become more significant.

Electrospray Thrusters

Electrospray thrusters use electrically charged liquids as a propellant. They fall into three main categories: colloid thrusters, which accelerate charged droplets dissolved in a solvent such as glycerol or formamide; field emission electric propulsion (FEEP) thrusters, which use liquid metals to produce positively charged metal ions; and, finally, ionic liquid ion source (ILIS) thrusters, which use room temperature molten salts to produce a beam of salt ions.

Colloid thruster operational diagram, Prajana et al 2001

All electrospray thrusters rely on a phenomenon known as the Taylor cone, which forms when an electrically conductive fluid is exposed to a strong electric field. If the field is strong enough, the tip of the cone is drawn out until it breaks down, and a spray of charged droplets is emitted from the liquid. Taylor cones are now common in many industrial applications, and the advances in those fields have made the electrospray thruster more attractive, as has the growing emphasis on compact, low-volume propulsion systems. The amount of thrust produced, and the thrust density, is directly proportional to the density of emitters in a given area, and recent developments in nanomaterials fabrication have made it possible to increase the thrust density of these designs significantly. However, the main lifetime limitation of this type of thruster is emitter wear, which depends on both the mass flow rate and any chemical interactions between the emitters and the propellant.
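The emitter-density scaling can be sketched numerically. Everything below is an illustrative assumption rather than data for any particular thruster: pure ion-mode emission of the EMI+ cation of a common ionic liquid, a few hundred nanoamps per emitter tip, and a guessed packing density.

```python
import math

E = 1.602176634e-19   # elementary charge, C
AMU = 1.66053907e-27  # atomic mass unit, kg

def thrust_per_emitter(current_a, accel_volts, ion_mass_kg, charge_c=E):
    """Thrust from one emitter in pure ion emission: F = I * sqrt(2*m*V/q)."""
    return current_a * math.sqrt(2 * ion_mass_kg * accel_volts / charge_c)

m_ion = 111.2 * AMU  # EMI+ cation mass, ~111 u (assumed propellant)
f_one = thrust_per_emitter(300e-9, 1500, m_ion)  # a few tens of nanonewtons

emitters_per_cm2 = 500  # assumed packing density
print(f"{f_one * emitters_per_cm2 * 1e6:.1f} uN per cm^2 of emitter array")
```

Doubling the emitter density doubles the thrust density, which is why nanofabricated emitter arrays matter so much for this thruster class.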

TILE 5000, Accion Systems

The vast majority of these systems focus on cubesat propulsion, but one company, Accion Systems, has developed a tileable system which could offer higher-powered operation through the use of dozens of thrusters arrayed in a grid. Their largest thruster (which measures 35 mm by 35 mm by 16 mm, including propellant) produces a total impulse of 200,000 N·s, a thrust of 10 mN, at an isp of 1500 s. While their primary focus is on cubesats, the CEO, Natalya Bailey, has mentioned that it would be possible to use many of their TILE drive systems in parallel for high-powered missions.

 

One of the biggest power demands of almost any electrostatic engine is the ionization cost of the propellant. Depending on the mass flow and power, different systems are used to ionize the propellant, including electron beams, RF ionization, cyclotron resonance, and the Hall effect. What if we could get rid of that power cost, and instead use all of the energy to accelerate the propellant? Especially in small spacecraft this is very attractive, and it may be possible to scale it up significantly as well (to the limits of the electrical charge that can be placed on the screens themselves). Ionic liquids, room-temperature molten salts composed entirely of ions, are reasonably chemically stable and easily storable; by replacing an uncharged propellant with one that already carries electric charge, the on-board ionization equipment can be eliminated, conserving mass, volume, and power. Not all electrospray thrusters use an ionic liquid, but those that do offer considerable advantages in terms of energy efficiency, and possibly greater overall thruster efficiency as well. I have yet to see a design for a gridded ion or Hall effect thruster that utilizes these types of propellants, but it may be possible to do so.
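A quick sanity check of the energy argument, using xenon's first ionization energy (12.13 eV) as the theoretical floor; real thrusters spend considerably more per ion on wall and excitation losses, so treat this as a best case.

```python
G0 = 9.80665
EV = 1.602176634e-19            # joules per electron-volt
M_XE = 131.29 * 1.66053907e-27  # mass of one xenon atom, kg

isp_s = 3000
v_exhaust = isp_s * G0                          # ~29.4 km/s
ke_per_ion_ev = 0.5 * M_XE * v_exhaust**2 / EV  # useful beam energy per ion
ionization_fraction = 12.13 / (12.13 + ke_per_ion_ev)

print(f"ionization overhead: {ionization_fraction:.1%}")  # ~2.0%
```

Even the theoretical floor is a measurable tax, and practical ionization costs run an order of magnitude or more above it, which is exactly what ionic-liquid electrospray sidesteps.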

Conclusions

With that, we come to the end of our overview of electric thrusters. There are some types of thruster we did not discuss, but they are unlikely to be usable in high-powered systems such as those found on an NEP spacecraft. When I began this series of blog posts, I knew that electric propulsion is a very broad topic, but the learning process while writing these three posts has been far more intense, and broad, than I expected. Electric propulsion has never been my strong suit, so I've been even more careful than usual to stick to the resources available while writing these posts, and I've had a lot of help from some very talented people to get to this point.

I was initially planning on writing a post about the power conditioning units that are used to prepare the power provided by the power supply to these thrusters, but the more I researched, the less these systems made sense to me – something that I’ve been assured isn’t uncommon – so I’m going to skip that for now.

Instead, the next post is going to look at the power conversion systems that nuclear electric spacecraft can use. Due to the combination of the temperatures available from a nuclear reactor, the high power levels involved, and the unique constraints of in-space operation, many options are available that aren't generally considered for terrestrial power plants, while many designs used by terrestrial plants are ruled out by mass or volume requirements. I've already started writing the post, but if there's anything writing on NEP has taught me, it's that these posts take longer than I expect, so I'm not going to give a timeline on when it will be available; hopefully in the next 2-3 weeks, though.

After that, we’ll look more in depth at thermal management and heat rejection systems for a wide range of temperatures, how they work, and the fundamental limitations that each type has. After another look at the core of an NEP spacecraft’s reactor, we will then look at combining electric and thermal propulsion in a post on bimodal NTRs, before moving on to our next blog post series (probably on pulse propulsion, but we may return to NTRs briefly to look at liquid core NTRs and the LARS proposal).

I hope you enjoyed the post. Leave a comment below with any comments, questions, or corrections, and don’t forget to check out our Facebook group, where I post work-in-progress visuals, papers I come across during research, and updates on the blog (and if you do, don’t feel shy about posting yourself on astronuclear propulsion designs and news!).

References

Electrostatic Thrusters

Gridded Ion Thrusters

MIT Open Courseware Astronautics Course Notes, Lecture 10-11: Kaufmann Ion Drives https://ocw.mit.edu/courses/aeronautics-and-astronautics/16-522-space-propulsion-spring-2015/lecture-notes/MIT16_522S15_Lecture10-11.pdf

NSTAR Technology Validation, Brophy et al 2000 https://trs.jpl.nasa.gov/handle/2014/13884

The High Power Electric Propulsion (HiPEP) Ion Thruster, Foster et al 2004 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20040139476.pdf

Dual Stage 4-Grid thruster page, ANU: https://physics.anu.edu.au/cpf/sp3/ds4g/

Dual Stage Gridded Ion Thruster ESA page: http://www.esa.int/gsp/ACT/projects/ds4g_overview.html

RIT-10 http://www.space-propulsion.com/brochures/electric-propulsion/electric-propulsion-thrusters.pdf

The NASA Evolutionary Xenon Thruster: The Next Step for U.S. Deep Space Propulsion, Schmidt et al NASA GRC 2008 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20080047732.pdf

Deflectable Beam Linear Strip Cesium Contact Ion Thruster System, Dulgeroff et al 1971 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19710027900.pdf

Hall Effect Thrusters

Fundamental Difference Between the Two Variants of Hall Thruster: SPT and TAL http://alfven.princeton.edu/publications/choueiri-jpc-2001-3504

History of the Hall Thrusters Development in the USSR, Kim et al 2007 http://erps.spacegrant.org/uploads/images/images/iepc_articledownload_1988-2007/2007index/IEPC-2007-142.pdf

Recent Progress and Perspectives on Space Electric Propulsion Systems Based on Smart Nanomaterials, Levchenko et al 2018 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5830404/pdf/41467_2017_Article_2269.pdf

SPT

Testing of the PPU Mk 3 with the XR-5 Hall Effect Thruster, Xu et al, Aerojet Rocketdyne 2017 https://iepc2017.org/sites/default/files/speaker-papers/iepc-2017-199.pdf

AEPS and HERMeS

Overview of the Development and Mission Application of the Advanced Electric Propulsion System (AEPS), Herman et al 2017 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20180001297.pdf

13kW Advanced Electric Propulsion Flight System Development and Qualification, Jackson et al 2017 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20180000353.pdf

Wear Testing of the HERMeS Thruster, Williams et al 2017 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20170000963.pdf

Performance, Stability, and Plume Characterization of the HERMeS Thruster with Boron Nitride Silica Composite Discharge Channel, Kamhawi et al 2017 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20180000687.pdf

Hollow Cathode Assembly Development for the HERMeS Hall Thruster, Sarver-Verhey et al 2017 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20170001280.pdf

Nested HET

Constant-Power Performance and Plume Effects of a Nested-Channel Hall-Effect Thruster, Liang et al University of Michigan 2011 http://pepl.engin.umich.edu/pdf/IEPC-2011-049.pdf

The Combination of Two Concentric Discharge Channels into a Nested Hall Effect Thruster, Liang PhD Thesis, University of Michigan 2013 http://pepl.engin.umich.edu/pdf/2013_Liang_Thesis.pdf

Plasma Oscillation Effects on Nested Hall Thruster Operation and Stability, McDonald et al University of Michigan 2013 http://pepl.engin.umich.edu/pdf/IEEE-2013-2502.pdf

Investigation of Channel Interactions in a Nested Hall Thruster Part 1, Georgin et al University of Michigan 2016 http://pepl.engin.umich.edu/pdf/JPC_2016_Georgin.pdf

Investigation of Channel Interactions in a Nested Hall Thruster Part 2, Cusson et al University of Michigan 2016 http://pepl.engin.umich.edu/pdf/JPC_2016_Cusson.pdf

X3 NHT

The X3 100 kW-Class Nested Hall Thruster: Motivation, Implementation, and Initial Performance; Florenz, University of Michigan doctoral thesis http://pepl.engin.umich.edu/pdf/2014_Florenz_Thesis.pdf

First Firing of a 100-kW Nested Channel Hall Thruster, Florenz et al University of Michigan, 2013 http://www.dtic.mil/dtic/tr/fulltext/u2/a595910.pdf

High-Power Performance of a 100-kW Class Nested Hall Thruster, Hall et al University of Michigan 2017 http://pepl.engin.umich.edu/pdf/IEPC-2017-228.pdf

Update on the Nested Hall Thruster Subsystem for the NextSTEP XR-100 System, Jorns et al University of Michigan 2018 http://pepl.engin.umich.edu/pdf/AIAA-2018-4418.pdf

Multichannel Hall Effect Thruster patent, McVey et al, Aerojet Rocketdyne https://patents.google.com/patent/US7030576B2/en

MW-Class Electric Propulsion Systems Designs for Mars Cargo Transport; Gilland et al 2011 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20120001636.pdf

TAL

An Overview of the TsNIIMASH/TsE Efforts under the VHITAL Program, Tverdokhlebov et al, TsNIIMASH, 2005 http://erps.spacegrant.org/uploads/images/images/iepc_articledownload_1988-2007/2005index/141.pdf

Thrust Performance in a 5 kW Class Anode Layer Type Hall Thruster, Yamamoto et al Kyushu University 2015 https://www.jstage.jst.go.jp/article/tastj/14/ists30/14_Pb_183/_pdf

Investigation of a Side by Side Hall Thruster System, Miyasaka et al 2013 http://erps.spacegrant.org/uploads/images/images/iepc_articledownload_1988-2007/2013index/1vrun4h9.pdf

Modeling of a High Power Thruster with Anode Layer, Keidar et al University of Michigan 2004 https://deepblue.lib.umich.edu/bitstream/handle/2027.42/69797/PHPAEN-11-4-1715-1.pdf;sequence=2

Design and Performance Evaluation of Thruster with Anode Layer UT-58 For High-Power Application, Schonherr et al, University of Tokyo 2013 http://erps.spacegrant.org/uploads/images/images/iepc_articledownload_1988-2007/2013index/k97rps25.pdf

Very High Isp Thruster with Anode Layer (VHITAL): An Overview, Marrese-Reading et al, JPL 2004 https://trs.jpl.nasa.gov/handle/2014/40499

The Development of a Bismuth Feed for the Very High Isp Thruster with Anode Layer VHITAL program, Marrese-Reading et al, JPL 2004 https://trs.jpl.nasa.gov/handle/2014/37752

Hybrid

Characteristic Relationship Between Dimensions and Parameters of a Hybrid Plasma Thruster, Potapenko et al 2011 http://erps.spacegrant.org/uploads/images/images/iepc_articledownload_1988-2007/2011index/IEPC-2011-042.pdf

Measurement of Cross-Field Electron Current in a Hall Thruster due to Rotating Spoke Instabilities, McDonald et al 2011 https://core.ac.uk/download/pdf/3148750.pdf

Metallic Wall Hall Thruster patent, Goebel et al 2016 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20160012003.pdf

History of the Hall Thrusters Development in the USSR, Kim et al 2007 http://erps.spacegrant.org/uploads/images/images/iepc_articledownload_1988-2007/2007index/IEPC-2007-142.pdf

Propellant Considerations:

High Power Hall Thrusters; Jankovsky et al, 1999

Energetics of Propellant Options for High-Power Hall Thrusters, Kieckhafer and King, Michigan Technological University, 2005 http://aerospace.mtu.edu/__reports/Conference_Proceedings/2005_Kieckhafer_1.pdf

A Performance Comparison Of Xenon and Krypton Propellant on an SPT-100 Hall Thruster, Nakles et al 2011 http://erps.spacegrant.org/uploads/images/images/iepc_articledownload_1988-2007/2011index/IEPC-2011-003.pdf

Evaluation of Magnesium as Hall Thruster Propellant; Hopkins, Michigan Tech, 2015 https://pdfs.semanticscholar.org/0520/494153e1d19a46eaa0a63c9bc5bd466c06eb.pdf

Modeling of an Iodine Hall Thruster Plume in the Iodine Satellite (ISAT), Choi 2017 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20170001565.pdf

Electrospray Thrusters

Electrospray Thruster lecture notes, MIT Open Courseware 2015 https://ocw.mit.edu/courses/aeronautics-and-astronautics/16-522-space-propulsion-spring-2015/lecture-notes/MIT16_522S15_Lecture20.pdf

Application of Ion Electrospray Propulsion to Lunar and Interplanetary Missions, Whitlock and Lozano 2014 http://ssl.mit.edu/files/website/theses/SM-2014-WhitlockCaleb.pdf

Preliminary Sizing of an Electrospray Thruster, Rostello 2017 http://tesi.cab.unipd.it/56748/1/Rostello_Marco_1109399.pdf

Accion Systems TILE Data sheet https://beyondnerva.files.wordpress.com/2018/10/bd6da-tileproductfamilycombineddatasheet.pdf

 

Categories
Development and Testing Electric propulsion History Non-nuclear Testing Nuclear Electric Propulsion Spacecraft Concepts

Electric Propulsion: The Oldest “Futuristic” Propulsion Possibility

Hello, and welcome back to Beyond NERVA. Today, we are looking at a very popular topic, but one that doesn't necessarily require nuclear power: electric propulsion. However, it IS an area that nuclear power plants are often tied to, because the amount of thrust available is highly dependent on the amount of power available for the drive system. We will touch a little bit on the history of electric propulsion, as well as the different types of electric thrusters, their advantages and disadvantages, and how fission power plants can change the paradigm for how electric thrusters can be used. It's important to realize that most electric propulsion is power-source-agnostic: all it requires is electricity, and how that electricity is produced usually matters little to the drive system itself. As such, nuclear power plants won't be mentioned much in this post, until we look at the optimization of electric propulsion systems.

We also aren’t going to be looking at specific types of thrusters in this post. Instead, we’re going to do a brief overview of the general types of electric propulsion, their history, and how electrically propelled spacecraft differ from thermally or chemically propelled spacecraft. The next few posts will focus more on the specific technology itself, its’ application, and some of the current options for each type of thruster.

Electric Propulsion: What is It?

In its simplest definition, electric propulsion is any means of producing thrust in a spacecraft using electrical energy. There’s a wide range of different concepts that get rolled into this concept, so it’s hard to make generalizations about the capabilities of these systems. As a general rule of thumb, though, most electric propulsion systems are low-thrust, long-burn-time systems. Since they’re not used for launch, and instead for on-orbit maneuvering or interplanetary missions, the fact that these systems generally have very little thrust is a characteristic that can be worked with, although there’s a great deal of variety as far as how much thrust, and how efficient in terms of specific impulse, these systems are.

There are three very important basic concepts to understand when discussing electric propulsion: thrust-to-weight ratio (T/W), specific impulse (isp), and burn time. The first is self-explanatory: how hard the engine can push compared to how much it weighs, commonly expressed relative to Earth's gravity. A T/W ratio of 1/1 means the engine can just barely hover on its own thrust, but no more; a T/W ratio of 3/1 means it can lift nearly three times its own weight off the ground. Specific impulse is a measure of how much thrust you get out of a given unit of propellant, ignoring everything else, including the weight of the propulsion system; it's directly related to fuel efficiency, and is measured in seconds: if a drive system had a T/W ratio of 1/1 and were made entirely of propellant, the isp would be the number of seconds it could hover at 1 gee. Finally, you have burn time: the T/W ratio and isp give you the thrust imparted per unit time for a given drive system and propellant mass, and factoring in the spacecraft's mass gives the total acceleration over a given period. The longer the engine burns, the more total velocity change is produced.
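The link between specific impulse and total velocity change is the Tsiolkovsky rocket equation; a minimal sketch, with made-up spacecraft masses, shows why a few thousand seconds of isp matters so much:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_s, wet_mass_kg, dry_mass_kg):
    """Tsiolkovsky rocket equation: velocity change from specific impulse
    and the ratio of fueled (wet) to empty (dry) mass."""
    return isp_s * G0 * math.log(wet_mass_kg / dry_mass_kg)

# Hypothetical 1,000 kg spacecraft carrying 100 kg of propellant:
chemical = delta_v(450, 1000, 900)   # ~450 s: a very good chemical engine
electric = delta_v(3000, 1000, 900)  # ~3,000 s: a typical ion thruster

print(f"chemical: {chemical:.0f} m/s")  # ~465 m/s
print(f"electric: {electric:.0f} m/s")  # ~3100 m/s
```

The same propellant load buys nearly seven times the velocity change, which is the case for electric propulsion in a single number.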

Electric propulsion has a very poor thrust-to-weight ratio (as a general rule), but incredible specific impulse and burn times. The T/W ratio of many of these thrusters is very low because they produce very little thrust, often measured in micronewtons; the thrust is often illustrated by comparison to the weight of a piece of paper, or a penny, in Earth's gravity. However, this matters little once you're in space: with no drag, and with orbital maneuvers not demanding huge amounts of thrust over short periods, the total impulse delivered matters more for most maneuvers than how long it takes to deliver it. This is where burn time comes in: most electric thrusters fire continuously, providing minute amounts of thrust over months, sometimes years, typically accelerating the spacecraft for the first half of the trip and decelerating it for the second (halfway in energy budget terms, not necessarily in total mission time). The trump card for electric propulsion is specific impulse: rather than the few hundred seconds of isp for chemical propulsion, or the thousand or so for a solid core nuclear thermal rocket, electric propulsion offers thousands of seconds. This means less propellant, which makes the spacecraft lighter and allows truly astounding total velocities; the downside is that building up these velocities takes months or years, so escaping a gravity well (for instance, starting from low Earth orbit) can take months. Electric propulsion is therefore best suited to long trips, or to very minor orbital changes, as with communications satellites, where it has made spacecraft smaller, more efficient, and far longer-lived.
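To put the burn times in perspective, here's a rough sketch (all numbers are assumptions for illustration, not any specific mission) of how long a millinewton-class thruster must fire to deliver a modest velocity change, ignoring gravity losses and the slowly decreasing spacecraft mass:

```python
# All figures below are assumed example numbers.
thrust_n = 0.09      # 90 mN, roughly the scale of a deep-space ion drive
mass_kg = 500.0      # spacecraft mass
delta_v_ms = 1000.0  # desired velocity change, m/s

accel = thrust_n / mass_kg        # ~1.8e-4 m/s^2
burn_time_s = delta_v_ms / accel  # continuous thrusting required
print(f"{burn_time_s / 86400:.0f} days of burn")  # ~64 days
```

A chemical engine delivers the same 1 km/s in minutes; the electric system simply spreads the impulse over two months of continuous operation.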

Electric propulsion is an old idea, but one that has yet to reach its full potential due to a number of challenges. Tsiolkovsky and Goddard both wrote about electric propulsion, but since neither lived in a time when reaching orbit was possible, their ideas went unrealized in their lifetimes. Electric propulsion isn't suitable for lifting rockets off the surface of a planet, but for in-space propulsion it's incredibly promising. Both men showed that, to put it simply, the only thing a rocket engine requires is that some mass be thrown out the back to provide thrust; it doesn't matter what that mass is. Electric acceleration isn't directly limited by thermodynamics (only by entropic losses and electric potential differences), and can offer very efficient conversion of electric potential into kinetic energy (the "throwing something out of the back" part of the system).
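The principle reduces to thrust being mass flow rate times exhaust velocity; a toy comparison (with assumed, round-number exhaust velocities) shows why high exhaust velocity means a far smaller propellant bill for the same push:

```python
# Thrust F = mdot * v_e, so mdot = F / v_e for a given thrust.
def mass_flow_for_thrust(thrust_n, v_exhaust_ms):
    """Propellant mass flow rate needed to sustain a given thrust."""
    return thrust_n / v_exhaust_ms

chem_mdot = mass_flow_for_thrust(1.0, 3500)   # chemical: v_e ~3.5 km/s
ion_mdot = mass_flow_for_thrust(1.0, 30000)   # ion drive: v_e ~30 km/s

# Same 1 N of thrust, very different propellant consumption:
print(f"{chem_mdot / ion_mdot:.1f}x more propellant per second")  # ~8.6x
```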

In chemical propulsion, combustion produces heat, which causes the byproducts of the chemical reaction to expand and accelerate. These are then directed out of a nozzle to increase the exhaust velocity and produce thrust. This is the first type of rocket ever developed; and while advances continue to be made, in many ways the field is chasing ever more esoteric or exotic approaches for increasingly marginal gains, because there's only so much chemical potential energy available in a given propellant combination. The most efficient chemical engines top out around 500 seconds of specific impulse, and most hover around the 350-second mark. Where chemical engines excel, though, is thrust-to-weight ratio. They remain, arguably, our best, and currently our only, way of actually getting off Earth.

Thermal propulsion doesn’t rely on the chemical potential energy, instead the reaction mass is directly heated from some other source, causing expansion. The lighter the propellant, the more it expands, and therefore the more thrust is produced for a given mass; however, heavier propellants can be used to give more thrust per unit volume, at lower efficiencies. It should be noted that thermal propulsion is not only possible, but also common, with electrothermal thrusters, but we’ll dig more into that later.

Electric propulsion, on the other hand, is kind of a catch-all term when you start to look at it. There are many mechanisms for changing electrical energy into kinetic energy, and looking at most – but not all – of the options is what this blog post is about.

In order to get a better idea of how these systems work, and the fundamental principles behind electric propulsion, it may be best to look into the past. While the potential of electric propulsion is far from realized, it has a far longer history than many realize.

Futuristic Propulsion? … Sort Of, but With A Long Pedigree

The Origins of Electric Propulsion

First Patented Ion Drive, Robert Goddard 1917

When looking into the history of spaceflight, two great visionaries stand out: Konstantin Tsiolkovsky and Robert Goddard. Both worked independently on the basics of rocketry at the turn of the 20th century, both provided much of the field's early theory, and both saw far beyond their time to the potential of rocketry and spaceflight in general. Both also independently came up with the concept of electric propulsion, although deciding who was first requires some splitting of hairs: Goddard mentioned it first, but in a private journal, while Tsiolkovsky published the concept first in a scientific paper, even if the reference is fairly vague (understandably so, considering the era). Moreover, electricity was a relatively poorly understood phenomenon at the time (the nature of cathode and anode "rays" was much debated, and positively charged ions had yet to be formally described), and neither visionary had a deep understanding of the concepts involved. Their ideas were little more than just that: concepts that could serve as a starting point, not actual designs for systems that could propel a spacecraft.

 

Konstantin Tsiolkovsky, image via Wikimedia

The first mention of electric propulsion in the formal scientific literature was in 1911, in Russia. Konstantin Tsiolkovsky wrote that “it is possible that in time we may use electricity to produce a large velocity of particles ejected from a rocket device.” He began to focus on the electron, rather than the ion, as the ejected particle. While he never designed a practical device, the promise of electric propulsion was clearly seen: “It is quite possible that electrons and ions can be used, i.e. cathode and especially anode rays. The force of electricity is unlimited and can, therefore, produce a powerful flux of ionized helium to serve a spaceship.” The lack of understanding of electric phenomena hindered him, though, and prevented him from ever designing a practical system, much less building one.

 

Robert Goddard, image via Wikimedia

The first mention of electric propulsion in history comes from Goddard in 1906, in a private notebook, though as Edgar Choueiri notes in his excellent 2004 historical paper (a major source for this section), these early notes don't actually describe (or even reference) an electric propulsion drive system. They were not a practical design (that didn't come until 1917), but they laid out the basic principles for accelerating electrons (rather than positively charged ions) to the "speed of light." Over the next few years the concept fermented in his mind, culminating in patents in 1912 (for an ionization chamber using magnetic fields, similar to modern ionization chambers) and in 1917 (for a "Method and Means for Producing Electrified Jets of Gas"). The third of that patent's three variants was the first recognizable electric thruster, what would come to be known as an electrostatic thruster. Shortly after, though, America entered WWI, and Goddard spent the rest of his life focused on the then far more practical field of chemical propulsion.

 

Yuri Kondratyuk, image via Wikimedia

Other visionaries of rocketry also came up with concepts for electric propulsion. Yuri Kondratyuk (another, lesser-known, Russian rocket pioneer) wrote "Concerning Other Possible Reactive Drives," which examined electric propulsion and pointed out the high power requirements of this type of system. He didn't just examine electron acceleration, but ion acceleration as well, noting that heavier particles provide greater thrust (in the same paper he also sketched what may have been a nascent colloid thruster, another type of electric propulsion).

Hermann Oberth, image via Wikimedia

Another of the first generation of rocket pioneers to look at the possibilities of electric propulsion was Hermann Oberth. His 1929 opus, "Ways to Spaceflight," devoted an entire chapter to electric propulsion. Not only did he examine electrostatic thrusters, he also looked at the practicalities of a fully electrically powered spacecraft.

Valentin Glushko, image via Wikimedia

Finally, we come to Valentin Glushko, another early Russian rocketry pioneer and a giant of the Soviet rocketry program. In 1929, he actually built an electric thruster (an electrothermal system that vaporized fine wires to produce superheated particles), although this particular concept never flew. By this time, it was clear that much more work remained in many fields before electric propulsion could be used, and so, one by one, these early visionaries turned their attention to chemical rockets, while electric propulsion sat on the dusty shelves of spaceflight concepts yet to be realized. It collected dust next to centrifugal artificial gravity, solar sails, and other practical ideas that couldn't be realized for decades.

The First Wave of Electric Propulsion

Electric propulsion began to be investigated after WWII, both in the US and in the USSR, but it would take another 19 years of development before a flight system was introduced. The two countries each focused on one general type of electric propulsion, the electrostatic thruster, but on different variants of it, reflecting the technical capabilities and priorities of each country. The US focused on what is now known as the gridded ion thruster, most commonly called an ion drive, while the USSR focused on the Hall effect thruster, which uses a magnetic field perpendicular to the current direction to accelerate particles. Both concepts are examined further in the section on electrostatic thrusters; for now, it's worth noting that the design differences led to two very different systems, and two very different conceptions of how electric propulsion would be used in the early days of spaceflight.

In the US, the most vigorous early proponent of electric propulsion was Ernst Stuhlinger, who was the project manager for many of the earliest electric propulsion experiments. He was inspired by the work of Oberth, and encouraged by von Braun to pursue this area, especially now that being able to get into space to test and utilize this type of propulsion was soon to be at hand. His leadership and designs had a lasting impact on the US electric propulsion program, and can still be seen today.

SERT-I thruster, image courtesy NASA

The first spacecraft to be propelled by electric propulsion was SERT-I, a follow-on to a suborbital test (Program 661A, Test A, the first of three suborbital tests for the USAF) of the ion drives that would be used. These drive systems used cesium and mercury as propellants, rather than the inert gases common today, because these metals have very low ionization energies and reasonably favorable masses for providing more significant thrust. Tungsten buttons were used in place of the grids found in modern ion drives, and a tantalum wire was used to neutralize the ion stream. Unfortunately, the cesium engine short-circuited, but the mercury system was tested for 31 minutes and 53 engine cycles. This demonstrated not only ion propulsion in principle but, just as importantly, ion beam neutralization. Neutralization matters for most electric propulsion systems because it prevents the spacecraft from becoming negatively charged, and possibly even attracting the ion stream back onto itself, robbing it of thrust and contaminating on-board sensors (a common problem in early electric propulsion systems).

The SNAPSHOT program, which launched the SNAP 10A nuclear reactor on April 3, 1965, also had a cesium ion engine as a secondary experimental payload. The failure of the electrical bus prevented this from being operated, but SNAPSHOT could be considered the first nuclear electric spacecraft in history (if unsuccessful).

ATS (either 4 or 5), image courtesy NASA

The ATS program continued to develop the cesium thrusters from 1968 through 1970. The ATS-4 flight was the first demonstration of an orbital spacecraft with electric propulsion, but sadly there were problems with beam neutralization in the drive systems, indicating more work needed to be done. ATS-5 was a geostationary satellite meant to use electrically powered stationkeeping, but it could not be despun after launch, meaning the thruster couldn't be used for propulsion (the emission chamber was flooded with un-ionized propellant), although it was used as a neutral plasma source for experimentation. ATS-6 carried a similar design and successfully operated for a total of over 90 hours (one thruster failed early due to a similar emission chamber flooding issue). The SERT-II and SCATHA satellites demonstrated further improvements as well, using both cesium and mercury ion devices (SCATHA's wasn't optimized as a drive system, but used similar components to test spacecraft charge neutralization techniques).

Despite these tests in the 1960s, it would be another thirty years before an operational satellite used ion propulsion. The saturation problems mentioned above, spacecraft contamination from the highly reactive cesium and mercury propellants, and relatively short engine lifetimes (due to erosion of the screens used in this type of ion thruster) offered little promise to mission planners. The high (2000+ s) specific impulse was very promising for interplanetary spacecraft, but the low reliability and short lifetimes of these early ion drives made them unreliable, or of marginal use, in planners' eyes. Ground testing of various concepts continued in the US, but additional flight missions were rare until the end of the 1990s. This likely helped feed the idea that electric propulsion is new and futuristic, rather than having conceptual roots reaching all the way back to the dawn of the age of flight.

Early Electric Propulsion in the USSR

Unlike in the US, the USSR started development of electric propulsion early, and continued that development almost continuously to the modern day. Sergei Korolev’s OKB-1 was tasked, from the beginning of the space race, with developing a wide range of technologies, including nuclear-powered spacecraft and electric propulsion.

Early USSR TAL, Kim et al
Early sketch of a Hall effect (TAL) thruster in USSR, image from Kim et al

Part of this success may be due to the different architecture the Soviet engineers pursued: rather than accelerating ions toward a pair of charged grids, the Soviet designs used a stream of ionized gas with a perpendicular magnetic field to accelerate the ions. This is the Hall effect thruster, which has several advantages over the gridded ion thruster, including simplicity, fewer problems with erosion, and higher thrust (admittedly, at the cost of specific impulse). Other designs, including the PPT, or pulsed plasma thruster, were also experimented with (the Zond-2 spacecraft carried a PPT system). However, thanks to the rapidly growing Soviet mastery of plasma physics, the Hall effect thruster became a very attractive system.

There are two main types of Hall thruster that were experimented with: the stationary plasma thruster (SPT) and the thruster with anode layer (TAL), which differ in how the electric charge is produced, the behavior of the plasma, and the path the current follows through the thruster. The TAL was developed in 1957 by Askold Zharinov and proven in experiments from 1958 to 1961, but a prototype wasn’t built until 1967 (using cesium, bismuth, cadmium, and xenon propellants, with specific impulse of up to 8,000 s), and it wasn’t described in the open literature until 1973. This thruster is characterized by a narrow acceleration zone, meaning it can be more compact.

E1 SPT Thruster, Kim et al
E1 SPT-type Hall thruster, image via Kim et al

The SPT, on the other hand, can be larger, and is the most common form of Hall thruster used today. Complications in the plasma dynamics of this system meant that it took longer to develop, but its greater electrical efficiency and thrust make it a more attractive choice for stationkeeping thrusters. Research began in 1962, under Aleksey Morozov at the Institute of Atomic Energy; it was later moved to the Moscow Aviation Institute, and then again to the Fakel Design Bureau (now Fakel Industries, still a major producer of Hall thrusters). The first breadboard thruster was built in 1968, and flew in 1970. It was then used on the Meteor series of weather satellites for attitude control. Development of the design has continued to the present day, but for decades these thrusters weren’t widely used outside the Soviet Union, despite their higher thrust and lack of spacecraft contamination (unlike similar-vintage American designs).

It would be a mistake to think that only the US and the USSR were working on these concepts, though. Germany also had a diverse set of programs: arcjet thrusters, as well as magnetoplasmadynamic thrusters, were researched by the predecessors of the DLR, and this work was inherited by the University of Stuttgart Institute of Space Systems, which remains a major research institution for many forms of electric propulsion. France, on the other hand, focused on the Hall effect thruster, which provides lower specific impulse but more thrust. The Japanese program tended to focus on microwave-frequency ion thrusters, which later provided the main means of propulsion for the Hayabusa sample return mission (more on that below).

The Birth of Modern Electric Propulsion

ds1logo
DS1 Mission Patch, Image courtesy JPL

For many people, electric propulsion was an unknown until 1998, when NASA launched the Deep Space 1 mission. DS1 was a technology demonstration mission, part of the New Millennium program of advanced technology testing and experimentation. A wide array of technologies were to be tested in space after extensive ground testing; but, for the purposes of Beyond NERVA, the most important of these new concepts was the first operational ion drive, the NASA Solar Technology Application Readiness thruster (NSTAR). As is typical of many modern NASA programs, DS1 far exceeded its minimum requirements. Originally meant to do a flyby of the asteroid 9969 Braille, the mission was extended twice: first for a transit to the comet 19P/Borrelly, and later to extend engineering testing of the spacecraft.

nstar
NSTAR thruster, image courtesy NASA

In many ways, NSTAR was a departure from most of the flight-tested American electric propulsion designs. The biggest difference was the propellant: cesium and mercury were easy to ionize, but problems with neutralizing the propellant stream, the resultant contamination of the spacecraft and its sensors, chemical reaction complications, and growing conservatism about toxic components in spacecraft led to the decision to use noble gases, in this case xenon. This doesn’t mean that it was a great departure from the gridded ion drives of earlier US development, though; it was an evolution, not a revolution, in propulsion technology. Despite an early failure (at 4.5 hours), the NSTAR thruster was able to be restarted, and the overall thruster life was 8,200 hours; the backup achieved more than 500 hours beyond that.

Nor was DS1 the only use of this thruster design. The Dawn mission to the asteroid belt uses NSTAR-derived thrusters, and is still in operation around the minor planet Ceres, sending back incredibly detailed and fascinating information about water and hydrocarbon content, and many other exciting discoveries relevant to when humanity begins to mine the asteroid belt.

Many satellites, especially geostationary satellites, use electric propulsion today, for stationkeeping and even for final orbital insertion. The low thrust of these systems is not a major detriment, since they can be used over long periods of time to ensure a stable orbital path; and the small amount of propellant required allows for larger payloads or longer mission lifetimes with the same mass of propellant.

After decades of being considered impractical, immature, or unreliable, electric propulsion has come in from the cold. Many designs for interplanetary spacecraft now use electric propulsion, built around the high-specific-impulse, low-thrust regime these thruster systems excel in.

GT arcjet small.PNG
Electrothermal arcjet, image courtesy Georgia Tech

Another type of electric thruster is also becoming popular with small-sat users: the electrothermal thruster, which offers higher thrust from chemically inert propellants in a compact form, at the cost of specific impulse. That combination – compact, reasonably high thrust, and chemically inert – is a major requirement for most smallsats, which fly as secondary payloads and have to demonstrate that they won’t threaten the primary payload.

So, now that we’ve looked into how we’ve gotten to this point, let’s see what the different possibilities are, and what is used today.

What are the Options?

Ion drive schematic, NASA
Ion drive schematic, image courtesy NASA

The most well-known and popularized version of electric propulsion is electrostatic propulsion, which uses an ionization chamber (or ionic fluid) to develop a positively charged stream of ions, which are then accelerated out the “nozzle” of the thruster. A stream of electrons is added to the propellant as it leaves the spacecraft, to prevent the buildup of a negative charge. There are many different variations of this concept, including the best known types of thrusters (the Hall effect and gridded ion thrusters), as well as field effect thrusters and electrospray thrusters.
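To put rough numbers to the electrostatic concept: for a gridded ion thruster, the ideal exhaust velocity falls straight out of the energy balance between the grid potential and the ion’s kinetic energy. Below is a minimal Python sketch; the ~1,100 V figure is just an illustrative, roughly NSTAR-class value, and the result ignores real-world losses such as beam divergence and un-ionized propellant.

```python
import math

def grid_exhaust_velocity(voltage, ion_mass_kg):
    """Ideal exhaust velocity of a singly charged ion falling through
    a grid potential: q*V = (1/2)*m*v^2  ->  v = sqrt(2*q*V / m)."""
    q = 1.602176634e-19  # elementary charge, coulombs
    return math.sqrt(2 * q * voltage / ion_mass_kg)

# Xenon ion (~131.3 amu) through an illustrative ~1,100 V grid potential:
amu = 1.66053907e-27  # kg per atomic mass unit
ve = grid_exhaust_velocity(1100, 131.293 * amu)
isp = ve / 9.80665  # ideal Isp in seconds
```

Note that the ideal specific impulse comes out well above what a real thruster achieves – the loss terms matter.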

NASA MPD concept
MPD Thruster concept, image courtesy NASA

The next most common version – and one with a large number of popular mentions these days – is the electromagnetic thruster. Here, the propellant is converted to a relatively dense plasma, and usually (but not always) magnetic fields are used to accelerate this plasma to high speed out of a magnetic nozzle, exploiting the electromagnetic and thermal properties of plasmas. Where the plasma isn’t accelerated by magnetic fields directly, magnetic nozzles and other plasma-shaping structures are used to constrict or expand the plasma flow. There are many different versions, from magnetohydrodynamic thrusters (MHD, where a charge isn’t transferred into the plasma from the magnetic field), to the less well-known magnetoplasmadynamic thruster (MPD, where the Lorentz force is used to at least partially accelerate the plasma), electrodeless plasma thrusters, and the pulsed inductive thruster (PIT).

GT arcjet small
Electrothermal arcjet, image courtesy Georgia Tech

Thirdly, we have electrothermal drive systems: basically, highly advanced electric heaters used to heat a propellant. These tend to be the less energy-efficient, but higher-thrust, systems (although, theoretically, some versions of electromagnetic thrusters can achieve high thrust as well). The most common types of electrothermal system proposed have been arcjet, resistojet, and inductive heating drives; the first two have actually been popular choices for reaction control systems on large, nuclear-powered space station concepts. Inductive heating has already made a number of appearances on this page, both in testing apparatus (CFEET and NTREES are both inductively heated) and as part of a bimodal NTR (the nuclear thermal electric rocket, or NTER, covered on our NTR page).
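Since electrothermal thrusters are ultimately heaters, their performance ceiling can be sketched with the ideal thermal-rocket relation: exhaust velocity scales with the square root of temperature over molar mass. This is a back-of-the-envelope sketch, not a model of any real arcjet; the temperature, molar mass, and gamma below are illustrative assumptions.

```python
import math

def exhaust_velocity(chamber_temp_k, molar_mass_kg_mol, gamma):
    """Ideal, fully expanded exhaust velocity for a thermal thruster:
    v = sqrt(2 * gamma/(gamma-1) * (R/M) * T). Pressure-ratio losses ignored."""
    R_universal = 8.314  # J/(mol*K)
    return math.sqrt(2 * gamma / (gamma - 1)
                     * (R_universal / molar_mass_kg_mol) * chamber_temp_k)

# Illustrative arcjet-like case: ~2,000 K exhaust, 12 g/mol mean molar mass:
ve = exhaust_velocity(2000, 0.012, gamma=1.3)
isp = ve / 9.80665  # seconds
```

This is why light propellants are attractive, and why specific impulse stays modest: the temperature under the square root is limited by what the electrodes and chamber materials can survive.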

VASIMR sketch, Ad Astra
VASIMR operating principles diagram, image courtesy Ad Astra

The last two categories, electromagnetic and electrothermal, often use similar mechanisms of operation when you look at the details, and the line between the two isn’t necessarily clear. For instance, consider the pulsed plasma thruster (PPT), which most commonly uses a solid propellant such as PTFE (Teflon) that is vaporized, and occasionally ionized, electrically before being accelerated out of the spacecraft. Some authors describe it as an electromagnetic thruster, while others describe it as an arcjet, and which term best applies depends on the particulars of the system in question. A more famous example of this gray area would be VASIMR, the VAriable Specific Impulse Magnetoplasma Rocket. This system uses a dense plasma contained in a magnetic field, but the plasma is inductively heated using RF energy, and then accelerated by its own thermal behavior while being contained and directed magnetically. Because of this, the system can be seen as either an electromagnetic or an electrothermal thruster (that debate, and the way these terms are used, was one of the more enjoyable parts of the editing process for this blog post, and I’m sure one that will continue as we keep examining EP).

Finally, we come to the photon drives. These use photons themselves as the reaction mass – and as such are sometimes jokingly called flashlight drives. They have the lowest thrust of any of these systems, but the exhaust velocity is literally the speed of light, so they offer insanely high specific impulse. Just… don’t expect any significant acceleration; getting up to speed with these systems could take decades, if not centuries, making them better suited to interstellar missions than interplanetary ones. Photonic drives have another option, too: the power source for the photons doesn’t need to be on board the spacecraft at all! This is the principle behind the lightsail (the best-known version being the solar sail): a fixed installation produces a laser or other stream of photons (such as the microwave beam of the Starwisp concept), which then impacts a reflective surface to provide thrust. This type of system follows a different set of rules and limitations from systems where the power supply (and associated equipment), drive system, and any propellant are all on board the spacecraft, so we won’t go into much depth on that concept for now, instead focusing on designs that carry everything on board.
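The “flashlight drive” thrust figure is easy to check: an ideal photon rocket’s thrust is simply beam power divided by the speed of light. A quick sketch:

```python
def photon_thrust(beam_power_watts):
    """Ideal photon rocket thrust: F = P / c (emitted beam, no reflection)."""
    c = 299_792_458.0  # speed of light, m/s
    return beam_power_watts / c

# Even a full megawatt of beam power yields only millinewtons of thrust:
thrust = photon_thrust(1e6)
```

A megawatt buys roughly a third of a gram-force of thrust, which is why on-board photon drives are discussed for interstellar, not interplanetary, missions.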

Each of these systems has its advantages and disadvantages. Electrostatic thrusters are very simple to build: ionization chambers are easy, and creating a charged field is easy as well; but something has to generate that charge, and whatever that something is will be hit by the ionized particles used as propellant, causing erosion. Plasmadynamic thrusters can provide incredible flexibility, but generally require large power plants, and reducing the power requirements demands superconducting magnets and other materials advances. In addition, plasma physics, while increasingly well understood, provides its own unique set of challenges. Electrothermal thrusters are simple, but generally provide poor specific impulse, and thermal cycling of the components causes wear. Finally, photon drives are incredibly efficient but very, very low-thrust systems, requiring exceedingly long burn times to produce any noticeable velocity change. Let’s look at each of these options in a bit more detail, and at the practical limitations each system has.

Optimizing the System: The Fiddly Bits

As we’ve seen, there’s a huge array of technologies that fall under the umbrella of “electric propulsion,” each with its advantages and disadvantages. The mission to be performed determines which types of thruster are feasible, depending on a number of factors. If the mission is stationkeeping for a geosynchronous communications satellite, the Hall thruster offers a wonderful balance between thrust and specific impulse. If the mission is a sample return from an asteroid, the lower-thrust, higher-specific-impulse gridded ion thruster is better, because the longer mission time (and greater overall delta-v needed) makes this low-thrust, high-efficiency thruster a far more ideal option. If the mission is stationkeeping on a small satellite flying as a piggyback load, the arcjet may be the best option, due to its compactness, the chemically inert nature of its propellant, and its relatively high thrust. If higher thrust is needed over a longer period for a larger spacecraft, MPD may be the best bet. Very few systems are designed to deal with a wide range of capabilities in spaceflight, and electric propulsion is no different.

There are other key concepts to consider in the selection of an electric propulsion system as well. The first is the efficiency of the system: how much of the electricity fed to the thruster is actually converted into kinetic energy imparted to the spacecraft through the propellant. This efficiency varies between specific designs, and improving it is a major goal of every thruster’s development process. The quality of electrical power needed is also an important consideration: some thrusters require direct current, some alternating current, and some RF or microwave power inputs. Matching the electricity produced to the thruster is a necessary step, and on occasion it can make one thruster more attractive than another by reducing the overall mass of the system. Another key question is the total change in velocity needed for the mission, and the timeframe over which this delta-v can be applied: the longer the timeframe, the more efficient your thruster can be at lower thrust (trading thrust for specific impulse).
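The thrust-versus-specific-impulse trade can be made concrete: for a fixed input power and thruster efficiency, the jet power is F·ve/2, so thrust falls as specific impulse rises. A minimal sketch, where the power, efficiency, and Isp values are illustrative rather than data for any particular thruster:

```python
def thrust_at_power(power_in_watts, isp_seconds, efficiency):
    """Thrust from the jet-power balance: eta*P = (1/2)*F*ve  ->  F = 2*eta*P/ve."""
    g0 = 9.80665  # m/s^2
    ve = isp_seconds * g0
    return 2 * efficiency * power_in_watts / ve

# Same 5 kW bus, same 60% efficiency, two different Isp choices:
hall_like = thrust_at_power(5e3, 1800, 0.6)  # higher thrust, lower Isp
ion_like = thrust_at_power(5e3, 3500, 0.6)   # lower thrust, higher Isp
```

Roughly doubling the Isp at fixed power roughly halves the thrust – the heart of the mission-planning trade.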

However, looking past the drive itself, there are quite a few things about the spacecraft and its power supply that also have to be considered. The first is the power supply available to the drive system. If you’ve got an incredibly efficient drive system that requires a megawatt to run, then you’re going to be severely limited in your power supply options (there are very few, if any, drive systems that require this much power). For more realistic systems, the mass of the power supply, and therefore of the spacecraft, has a direct impact on the amount of delta-v that can be applied over a given time: if you want your spacecraft to be able to, say, maneuver out of the way of a piece of space debris, or a mission to another planet needs to arrive within a given timeframe, then the less mass per unit of power, the better. The power available per unit mass is known in engineering as specific power, and it is an area where nuclear power can offer real benefits. While it’s debatable whether solar or nuclear is better on specific power for low-powered applications, once higher power levels are needed, nuclear shines: it can be difficult (but far from impossible) to scale nuclear down in size and power output, but it scales up very easily and efficiently, and this scaling is non-linear. A small reactor and one with three times the output can be very similar in core size, and the power conversion systems used often have similar scaling advantages. There are additional advantages as well: radiators are, generally speaking, smaller in sail area, and harder to damage, than photovoltaic arrays, and can often be repaired more easily (once a PV cell gets hit with space debris it needs to be replaced, but a radiator tube designed for repair can in many cases just be patched or welded and continue functioning).

A related concept is power density, or power per unit volume, which also has a significant impact on the capabilities of many (especially large) spacecraft. The volume of the power supply is a limiting factor when it comes to launching the vehicle itself, since it has to fit into the payload fairing of the launch vehicle (or the satellite bus of the satellite that will use it).
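Specific power’s effect on performance can be sketched crudely by combining the power-to-thrust relation with the mass of the power plant. Everything here is an illustrative assumption (20 kg/kW, 60% efficiency, and so on), and propellant mass is ignored for simplicity:

```python
def initial_acceleration(power_kw, specific_mass_kg_per_kw,
                         payload_kg, isp_s, efficiency):
    """Crude initial acceleration: thrust from the jet-power balance,
    divided by power-plant-plus-payload mass (propellant ignored)."""
    g0 = 9.80665  # m/s^2
    thrust = 2 * efficiency * (power_kw * 1e3) / (isp_s * g0)
    mass = power_kw * specific_mass_kg_per_kw + payload_kg
    return thrust / mass

# 100 kW plant at 20 kg/kW, 1,000 kg payload, Isp 3,000 s, 60% efficient:
a = initial_acceleration(100, 20, 1000, 3000, 0.6)  # on the order of mm/s^2
```

Even at 100 kW, the acceleration is around a millimeter per second squared – which is fine, so long as you can sustain it for weeks or months.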

The specific power, on the other hand, has quite a few different implications, most importantly for the available payload mass fraction of the spacecraft. Without a payload – whether scientific instruments or crew life support and habitation modules – there’s no point to the mission, and the specific power of the entire power and propulsion unit has a large impact on the amount of mass that can be brought along.

Another factor to consider when designing an electrically propelled spacecraft is how the capabilities and limitations of the entire power and propulsion unit interact with the spacecraft itself. Just as with chemical and thermal rockets, the ratio of wet (fueled) to dry (unfueled) mass has a direct impact on the vehicle’s capabilities: Tsiolkovsky’s rocket equation still applies, and on long missions there can be a significant mass of propellant on board, despite the high specific impulse of most of these thrusters. The specific mass of the power and propulsion system has a huge impact on this, so the more power-dense and mass-efficient you are when converting your electricity into useful thrust, the more capable the spacecraft will be.
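The wet-to-dry mass point is where the high specific impulse of electric propulsion really shows. Applying the Tsiolkovsky rocket equation to a hypothetical 1,000 kg dry spacecraft (all numbers illustrative):

```python
import math

def propellant_mass(dry_mass_kg, delta_v_m_s, isp_s):
    """Tsiolkovsky rocket equation, solved for propellant mass:
    m_p = m_dry * (exp(dv/ve) - 1), with ve = Isp * g0."""
    g0 = 9.80665  # m/s^2
    ve = isp_s * g0
    return dry_mass_kg * (math.exp(delta_v_m_s / ve) - 1)

# 1,000 kg dry spacecraft, 5 km/s of delta-v, two Isp regimes:
chemical = propellant_mass(1000, 5000, 320)   # storable-chemical-class Isp
electric = propellant_mass(1000, 5000, 3000)  # gridded-ion-class Isp
```

The same delta-v that costs nearly four tonnes of propellant at chemical-class Isp costs under 200 kg at ion-thruster Isp, with obvious implications for payload mass fraction.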

Finally, the overall energy budget for the mission needs to be accounted for: how much change in velocity, or delta-v, is needed for the mission, and over what time period this change in velocity can be applied, are perhaps the biggest factors in selecting one type of thruster over another. We’ve already discussed the relative advantages and disadvantages of many of the different types of thrusters earlier, so we won’t examine it in detail again, but this consideration needs to be taken into account for any designed spacecraft.

With each of these factors applied appropriately, it’s possible to create a mathematical description of the spacecraft’s capabilities and match it to a given mission profile, or (as is more common) to go the other way and derive a spacecraft’s basic design parameters from a specific mission. After all, a spacecraft designed to deliver 100 kg of science payload to Jupiter in two years is going to look very different from one designed to carry 100 kg to the Moon in two weeks, due to the huge differences in mission profile. The math itself isn’t that difficult, but for now we’ll stick with the general concepts rather than going into the numbers (there are a number of dimensionless variables in the equations, which many people find confusing).

Let’s look instead at some of the more important parts of the power and propulsion unit that are tied more directly to the drives themselves.

Just as in any electrical system, you can’t just hook wires up to a battery, solar panel, or power conversion system and feed the output into the thruster; the electricity needs to be conditioned first. This ensures the correct type of current (alternating or direct), the correct voltage, the correct current… all the things that are done multiple times across Earth’s power grid have to be done on board the spacecraft as well, and this is one of the biggest factors in which specific drive is placed on a particular satellite.

After the electricity is generated, it goes through a number of control systems to first ensure protection for the spacecraft from things like power surges and inappropriate routing, and then goes to a system to actually distribute the power, not just to the thruster, but to the rest of the on-board electrical systems. Each of these requires different levels of power, and as such there’s a complex series of systems to distribute and manage this power. If electric storage is used, for instance for a solar powered satellite, this is also where that energy is tapped off and used to charge the batteries (with the appropriate voltage and battery charge management capability).

After the electricity needed for other systems has been routed away, the remainder is directed into a system that ensures the correct amount and type (AC, DC, necessary voltage, etc.) of electricity is delivered to the thruster. These power conditioning units, or PCUs, are some of the most complex systems in an electric propulsion system, and have to be highly reliable. Power fluctuations will affect the functioning of a thruster (possibly even forcing it to shut down in the case of too low a current), and in extreme cases can even damage it, so this is a key function these systems must provide. Because of this, some thruster manufacturers don’t design the PCU in-house, instead selling the thruster alone; the customer then contracts out or designs the PCU independently of the supplier (although obviously with the supplier’s support).

Finally, the thermal load on the thruster itself needs to be managed. In many cases, small enough thermal loads on the thruster mean that radiation, or thermal convection through the propellant stream, is sufficient for managing this, but for high-powered systems, an additional waste heat removal system may be necessary. If this is the case, then it’s an additional system that needs to be designed and integrated into the system, and the amount of heat generated will play a major factor in the types of heat rejection used.

There’s a lot more than just these factors to consider when integrating an electric propulsion system into a spacecraft, but it tends to get fairly esoteric fairly quickly, and the best way to understand it is to look at the relevant mathematical functions for a better understanding. Up until this point, I’ve managed to avoid using the equations behind these concepts, because for many people it’s easier to grasp the concepts without the numbers. This will change in the future (as part of the web pages associated with these blog posts), but for now I’m going to continue to try and leave the math out of the posts themselves.

Conclusions, and Upcoming Posts

As we’ve seen, electric propulsion is a huge area of research and design, and one that extends all the way back to the dawn of rocketry. Despite a slow start, research has continued more or less continuously across the world in a wide range of different types of electric propulsion.

We also saw that the term “electric propulsion” is very vague, with a huge range of capabilities and limitations for each system. I was hoping to do a brief look at each type of electric propulsion in this post (but longer than a paragraph or two each), but sadly I discovered that just covering the general concepts, history, and integration of electric propulsion was already a longer-than-average blog post. So, instead, we got a brief glimpse into the most general basics of electrothermal, electrostatic, magnetoplasmadynamic, and photonic thrusters, with a lot more to come in the coming posts.

Finally, we looked at the challenges of integrating an electric propulsion system into a spacecraft, and some of the implications for the very wide range of capabilities and limitations that this drive concept offers. This is an area that will be expanded a lot as well, since we barely scratched the surface. We also briefly looked at the other electrical systems that a spacecraft has in between the power conversion system and the thruster itself, and some of the challenges associated with using electricity as your main propulsion system.

Our next post will look at two similar in concept, but different in mechanics, designs for electric propulsion: electrothermal and magnetoplasmadynamic thrusters. I’ve already written most of the electrothermal side, and have a good friend who’s far better than I at MPD, so hopefully that one will be coming soon.

The post after that will focus on electrostatic thrusters. Because these are some of the most widely used, and also some of the most diverse in their mechanisms, this may end up being its own post, but at this point I’m planning on also covering photon drive systems (mostly on-board, but also lightsail-based concepts) in that post as well, to wrap up our discussion on the particulars of electric propulsion.

Once we’ve finished our look at the different drive systems, we’ll look at how these systems don’t have to be standalone concepts. Many designs for crewed spacecraft integrate both thermal and electric nuclear propulsion into a single propulsion stage, bimodal nuclear thermal rockets. We’ll examine two different design concepts, one American (the Copernicus-B), and one Russian (the TEM stage), in that post, and look at the relative advantages and disadvantages of each concept.

I would like to acknowledge the huge amount of help that Roland Antonius Gabrielli of the University of Stuttgart Institute of Space Systems has been in this post, and the ones to follow. His knowledge of these topics has made this a far better post than it would have been without his invaluable input.

As ever, I hope you’ve enjoyed the post. Feel free to leave a comment below, and join our Facebook group to join in the discussion!

References:

History

A Critical History of Electric Propulsion: The First Fifty Years, Choueiri Princeton 2004 http://mae.princeton.edu/sites/default/files/ChoueiriHistJPC04.pdf

A Method and Means of Producing Jets of Electrified Gas, US Patent 1363037A, Goddard 1917 https://patents.google.com/patent/US1363037A/en

A Synopsis of Ion Propulsion Development Projects in the United States: SERT I to Deep Space 1, Sovey et al, NASA Glenn Research Center 1999 https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19990116725.pdf

History of the Hall Thruster’s Development in the USSR, Kim et al 2007 http://erps.spacegrant.org/uploads/images/images/iepc_articledownload_1988-2007/2007index/IEPC-2007-142.pdf

NSTAR Technology Validation, Brophy et al 2000 https://trs.jpl.nasa.gov/handle/2014/13884

Review Papers for Electric Propulsion

Electric Propulsion: Which One for my Spacecraft? Jordan 2000 http://www.stsci.edu/~jordan/other/electric_propulsion_3.pdf

Electric Propulsion, Jahn and Choueiri, Princeton University 2002 https://alfven.princeton.edu/publications/ep-encyclopedia-2001

Spacecraft Optimization

Joint Optimization of the Trajectory and the Main Parameters of an Electric Propulsion Spacecraft, Petukhov et al 2017 https://reader.elsevier.com/reader/sd/D49CFC08B1988AA61C8107737D614C89A86DB8DAE56D09D3E8E60C552C9566ABCBB8497CF9D0CDCFB9773815820C7678

Power Sources and Systems of Satellites and Deep Space Probes (slideshow), Farkas ESA http://www.ujfi.fei.stuba.sk/esa/slidy_prezentacie/power_sources_and_systems_of_satellites_and_deep_space_probes_mk_2.pdf