The Woodward Effect, Mach’s Principle, and Carver Mead’s G4v

Curtis Horn’s case that G4v can derive the Woodward Effect—and why that would matter

The Woodward Effect has always lived in a strange limbo: intriguing lab claims, fierce skepticism, and a theory story that never quite felt like it had a single, clean home. In a new paper and a recent presentation, physicist Curtis Horn argues he’s found that home in Carver Mead’s “Gravitation with a 4-Vector Potential” (G4v) theory, an explicitly Machian framework in which inertia isn’t a built-in given but something the universe “supplies.” His claim is that the Woodward Effect can be derived from Mach’s principle inside G4v, turning a disputed propulsion claim into a tighter, more testable story. If he’s right, the Woodward Effect stops being an ad-hoc patchwork and becomes a specific, testable consequence of a coherent field theory, with a key numerical difference that could, in principle, show up in experiments.

Carver Mead’s G4v: An engineering theory with big ambition

Curtis Horn frames his new paper, “The Woodward Transient Mass Effect as a Consequence of G4v Machian Gravitation”, as a bridge between two worlds: Dr. James F. Woodward’s decades-long effort to connect inertia to Mach’s principle, and Horn’s own more comprehensive (and, by his description, more complex) covariant theory. In his talk, he’s candid that he hasn’t yet fully shown the Woodward Effect emerging from that larger framework, and that the new paper is the practical move that gets the discussion back onto firmer ground sooner.

In Horn’s view, grounding the work in Carver Mead’s G4v is an “engineering-first” approach to reactionless propulsion: not a retreat from deep theory, but a way to pin the conversation to a single coherent framework that can make quantitative predictions. The tone is less “trust me” and more “let’s put the derivation where it can be checked,” including by people who don’t share his assumptions.

There’s also a clear emotional undertone: continuation. Horn describes the manuscript as ready for peer review—something he wants to circulate for feedback, publish, and use to help keep Woodward’s legacy moving forward, alongside efforts to see whether Woodward’s own benchmark work can be published via his estate.

And then there’s the motive force behind the choice of G4v: Horn calls Mead’s framework “physically sensible” and “engineering sensible”—the sort of theory that (in his view) treats inertia properly and unifies it with electromagnetism in one consistent structure, rather than forcing researchers to “grab” equations from different places to make the story work. Underneath that is a simple insistence: whatever this turns out to be, you don’t get to “cheat” the physics—you have to meet it on its own terms.

The Woodward Effect, in plain language

At the center of Woodward’s claim is a radical idea that’s easy to say and hard to prove: inertia might not be fixed. Under the right conditions, a system’s effective mass could fluctuate—briefly and periodically—when its internal energy is driven hard enough and fast enough.

In the common “MEGA” configuration, the hardware is almost disarmingly simple: a piezoelectric stack is driven electrically so it expands and contracts, while being sandwiched between asymmetric masses (one heavier, one lighter). Horn describes this asymmetry as essential, because it’s what lets you “rectify” an oscillation—turning something that sloshes back and forth into something that produces a net push.

In his paper, Horn emphasizes the transient nature of the effect: the mass fluctuation is not a static “mass change,” but something that tracks how quickly the device’s power is changing in time. In other words: it’s not “how much energy is in the system,” but “how fast that energy is being pumped in and out.”
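
For readers who want the shape of that claim in symbols, the leading-order expression usually quoted in the Woodward literature ties the fluctuation in rest-mass density to how fast the power being pumped into the device is changing (this is a sketch of the commonly published form, not Horn’s G4v derivation, which differs in normalization as discussed below):

\[
\delta\rho_0(t) \;\approx\; \frac{1}{4\pi G\,\rho_0 c^2}\,\frac{\partial^2 E_0}{\partial t^2}
\;=\; \frac{1}{4\pi G\,\rho_0 c^2}\,\frac{\partial P}{\partial t}
\]

Here ρ₀ is the rest-mass density of the driven element, E₀ its internal energy density, and P the power density delivered to it. A large stored energy with a slow drive does nothing; what matters is how rapidly the power itself is changing, exactly the quantity the prose describes.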

This is also where most of the controversy lives. Even sympathetic readers often get stuck on a brutal scaling intuition: gravity is weak, and any effect that depends on coupling a tabletop device to the cosmos sounds like it should be hopelessly tiny. Horn’s story is, in part, an attempt to show why that intuition may be misleading—but only if you start inside a theory that is Machian from the ground up.

Why Carver Mead’s G4v is the “home” Horn wants

Horn describes G4v as the kind of framework that makes his derivation possible without patchwork. In his talk, he highlights two features: first, it treats inertia and electromagnetism as part of one coherent field picture; and second, it is explicitly Machian—built to make “the rest of the universe” part of the physics of inertia.

“People don’t understand necessarily the physical reasoning and the experimental evidence behind Mach’s Principle—it’s not something you can just ignore or set aside.” — Curtis Horn

On the page, Horn’s argument is similar but sharper. He says G4v is one of the few advanced field theories that incorporates the relational/inertial Mach principle directly—and that if you want to analyze an effect that postulates fluctuations in inertia itself, you need a theory that derives inertia from coupling to the cosmic matter distribution, not one that treats inertia as intrinsic by default.

G4v’s core ingredients, as Horn summarizes them, are unusual but concrete: a gravitation theory expressed with a four-vector potential; a scalar gravitational potential that can be interpreted as a locally varying effective light-speed; and an implementation of Mach’s principle where inertia is not fundamental, but emerges from the scalar potential and its relationship to the rest of the universe.
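
A schematic analogy may help with the first ingredient (this is an illustration of the structure only, not Mead’s actual field equations, and the symbols are labels chosen for this article): in electromagnetism, a single 4-vector potential

\[
A^\mu = \left(\frac{\phi_e}{c},\ \vec{A}_e\right)
\]

couples to the charge current and carries both the scalar and the vector parts of the field. G4v, as Horn summarizes it, gives gravitation the same shape, with a gravitational 4-potential sourced by mass-energy currents instead of charge currents; its time component plays the role of the scalar potential that sets the locally varying effective light-speed and, in the Machian reading, supplies inertia.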

Horn also acknowledges the controversy head-on: variable-speed-of-light approaches are not fashionable, and he notes that Einstein himself explored and then dropped such ideas, and that general relativity captures only some Mach-like behavior. Horn’s claim is that general relativity doesn’t implement a strong Mach principle “fully,” which is why you won’t naturally find a clean Woodward Effect derivation there.

Mach’s principle in practice: two channels, one effect

Horn’s paper is organized around a simple but powerful move: derive the Woodward Effect through two distinct channels, each with different “suppression” characteristics—one scalar (mass fluctuation) and one vector (momentum/thrust coupling).

“There’s one theory, one set of equations, that makes it possible to actually derive the Woodward effect without just grabbing from random, random places.” — Curtis Horn

The scalar channel: where the mass fluctuation comes from

Horn argues that one part of the derivation lives in the scalar side of G4v. In plain terms, the claim is that rapidly changing internal energy density can produce a transient “inertial response” that looks like a small mass fluctuation. The faster the energy density changes, the more relevant this transient term becomes—because it’s a dynamic effect, not a static one.

A crucial part of Horn’s argument is that you can’t treat this as “my device sources a tiny gravitational field and that’s the whole story.” In a Machian theory, the scalar potential is set primarily by a cosmic contribution—by the integrated mass-energy of the universe—so the local device isn’t the dominant source of the background it’s interacting with.

That reframing is what he believes avoids the “hopelessly tiny” conclusion. The math story is: if you ask a purely local sourcing question, you get Planck-scale suppression; if you ask the Machian question—how a time-varying energy density couples into an already-existing cosmic potential—the scaling can change dramatically.
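
A back-of-the-envelope number shows why the Machian question scales so differently. In Sciama-style estimates (the lineage Woodward drew on; this is a standard order-of-magnitude statement from that literature, not a result from Horn’s paper), the cosmic scalar potential is anything but small:

\[
\frac{|\Phi_{\text{cosmic}}|}{c^2} \;\sim\; \frac{G\,M_{\text{univ}}}{R_{\text{univ}}\,c^2} \;\sim\; \mathcal{O}(1)
\]

A laboratory device is therefore not competing against its own negligible self-field; in this framing it is modulating its relationship to a background potential of order c², which is where the dramatic change in scaling is supposed to come from.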

Most importantly for propulsion, Horn keeps the effect transient all the way through: the putative mass fluctuation tracks how quickly power is changing in time, and net thrust arises only when that oscillating fluctuation is rectified (in MEGA devices, by asymmetric masses).

The vector channel: how a fluctuation becomes thrust

Horn’s second channel is the conversion step—the part that turns “a transient fluctuation exists” into “a device might push.” Here the key idea is that in G4v, momentum bookkeeping includes an additional Machian coupling term that depends on the cosmic matter distribution.

In other words: inertia and momentum aren’t only local properties; they’re partly relational. Horn’s claim is that this is where a device can “couple” to the universe in a way that can be order-unity in strength—not “Planck suppressed”—because the coupling is tied to gravitational constants and the cosmic potential rather than to a tiny quantum-gravity correction.

And this is where his story becomes propulsion-relevant: even if the scalar channel determines how large the mass fluctuation is, the vector channel provides a robust way to convert whatever fluctuation exists into thrust—provided you have rectification.
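
To see why rectification is the hinge, here is a deliberately minimal toy model of the momentum bookkeeping in Python (an illustration only, with made-up numbers; it is not a simulation of a real MEGA device and not Horn’s G4v equations). A test element is given a small oscillating mass fluctuation and an oscillating velocity at the same frequency; the “thrust-like” term v·d(δm)/dt averages to zero unless the two are suitably phased.

import numpy as np

# Toy rectification model (illustrative only; not Horn's G4v equations).
# The element has mass m(t) = m0 + dm*cos(w*t + phi) and an imposed
# velocity v(t) = v0*cos(w*t). The momentum-bookkeeping term v * d(dm)/dt
# only survives cycle-averaging when the fluctuation and the motion are
# phased correctly; that phase relationship is what "rectification" buys.

m0 = 1.0                 # kg, rest mass of the driven element (arbitrary)
dm = 1e-10               # kg, amplitude of the mass fluctuation (arbitrary)
v0 = 0.01                # m/s, velocity amplitude of the element (arbitrary)
w = 2 * np.pi * 50e3     # rad/s, a tens-of-kHz drive (illustrative)

t = np.linspace(0.0, 10 * 2 * np.pi / w, 20001)   # ten drive cycles
v = v0 * np.cos(w * t)

for phi in (0.0, np.pi / 2, np.pi):
    dm_t = dm * np.cos(w * t + phi)     # oscillating mass fluctuation
    dm_dot = np.gradient(dm_t, t)       # d(delta m)/dt
    thrust_like = v * dm_dot            # momentum exchanged per unit time
    print(f"phase {phi:4.2f} rad -> cycle average = {np.mean(thrust_like):+.3e} N")

With the fluctuation and the motion in quadrature, the average lands at a fraction of a micronewton for these made-up numbers; in phase or in antiphase it vanishes. The toy says nothing about whether the fluctuation exists, only that any net push lives or dies on phasing, which is what the asymmetric masses in a MEGA device are there to enforce.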

Planck suppression, and the G4v escape hatch

If you want to understand why Horn is excited, this is the crux: he thinks he’s found the point where the usual “it must be Planck-tiny” objection stops being decisive.

Horn walks through the local linearized view: if you treat your device as creating a small perturbation on top of a cosmic gravitational background, the coupling looks like it comes in through a quantum-gravity scale term, and the whole thing appears to collapse into an effect so small it can’t matter.

And then he argues—explicitly—that this isn’t “wrong mathematics,” but it misses the physical point of G4v, because G4v is Machian: the scalar potential is not primarily set by local perturbations, but by the universe. Once the universe is the dominant “background partner,” the relevant question is no longer “how big is my device’s field,” but “how does a time-varying energy density interact with a cosmic potential that already exists?”

On the vector side, he makes the case even more directly: the thrust-relevant coupling does not depend on a quantum-gravity correction. In his telling, the “quantum-looking” constants that would normally scare you off cancel out, leaving a coupling that can remain order-unity and tied to the Machian potential itself.

That doesn’t magically solve the whole problem—Horn is clear that the scalar channel still sets the magnitude—but it draws a bright line between “how do we get a fluctuation?” and “how does a fluctuation become thrust?” In his framing, the first is the hard part; the second now has a clean lever.

A testable prediction: the “Mach factor” in G4v is smaller than unity

There’s a reason Horn keeps coming back to one specific number: it turns philosophy into a target.

In his G4v treatment, Horn says the Mach normalization factor—the thing that quantifies how strongly the cosmic potential contributes—comes out to a value around three-quarters rather than exactly one. He contrasts that with older Woodward/Sciama-style usage where the comparable factor is effectively treated as unity, and he describes it as roughly a “quarter difference,” predicting slightly less impulse under the G4v derivation.
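
To make the target explicit: if predicted impulse scales linearly with this normalization factor (a simplifying assumption for this illustration, not a statement lifted from Horn’s paper), then

\[
\frac{J_{\text{G4v}}}{J_{\text{unity}}} \;\approx\; \frac{3/4}{1} \;=\; 0.75
\]

i.e., roughly a quarter less impulse for the same drive conditions.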

This is one of the most story-friendly pieces of the whole argument. “Does it thrust?” is a messy question in a world of micro-Newton stands, thermal drifts, cable forces, EM coupling, and vibration. But “does it scale with the coefficient your theory predicts?” is a cleaner kind of fight.

Horn also widens the lens: he argues the same underlying physics should apply to both MEGA devices (mechanical rectification) and MLT devices (electromagnetic rectification), and suggests that different architectures showing consistent behavior would count against device-specific artifacts.

Why it matters: propulsion implications of the Woodward Effect

Horn’s excitement isn’t only philosophical. He’s chasing something engineers can iterate on.

He emphasizes the practical value of having a framework that gives real predictions and lets you model real materials in tools like COMSOL Multiphysics—rather than living in abstractions that don’t cash out in the lab. He even mentions straightforward design ambitions like changing mass materials (for example, brass to tungsten) to explore performance.

He also points to a key constraint: coherence. Not quantum coherence, he clarifies—material coherence. The piezo stack matters because it moves coherently as a structured material, and that coherence can break down when heating rises. In their lab experience, he says the effect fades as the stack heats up—something he claims is predicted by the theory.

“In the lab, we had seen that over time when the experimental stack heats up, the effect goes away, which is something that’s predicted by the G4v theory.” — Curtis Horn

That kind of statement, whether it turns out to be right or wrong, is exactly the sort of “knob” experimentalists want: if heat kills it, then thermal management isn’t a nuisance variable—it’s part of the causal chain. If coherence is required, then materials science becomes central to the test, not peripheral.

Finally, the MEGA/MLT equivalence claim adds an engineering roadmap: if mechanical rectification and electromagnetic rectification are just two ways of extracting net impulse from the same underlying transient mass-fluctuation plus Machian momentum exchange, then the design space is broader than a single piezo stack geometry.

So what would it mean if Horn is right?

Start with the careful version: it wouldn’t mean “reactionless thrust” in the naive sense. Horn’s entire framing depends on a momentum bookkeeping story where the “reaction partner” is the Machian inertial frame—the cosmic matter distribution—so momentum conservation isn’t thrown out; it’s moved into a different accounting system.

For propulsion, the implication is still huge: if you can reliably generate net impulse by rectifying transient mass fluctuations driven by electrical power, you’ve opened a door to thrust that doesn’t require expelling propellant in the conventional way. That doesn’t automatically give you starships—but even small, repeatable thrust-to-power in a stable architecture would change how people think about stationkeeping, deep-space logistics, and long-duration missions.

For physics, the implication is even more profound: a validated Machian inertia mechanism that can be manipulated in the lab would turn a century-old philosophical debate into an experimental program. Horn points to G4v’s treatment of inertia and its coupling of scalar and vector potentials to matter waves as the conceptual backbone, arguing this is less about inventing new physics than applying an existing framework to a transient regime.

And for the Woodward Effect itself—the most immediate implication is a kind of upgrade: from “a contested effect with an awkward derivation” to “a contested effect with a clear theoretical target.” Horn’s key numerical difference is a stake in the ground, the sort of detail that makes a claim falsifiable rather than purely rhetorical.

None of this resolves the empirical debate by itself. The Woodward Effect remains controversial, and the path forward is the boring, expensive one: replication, careful controls, scaling tests, and independent teams willing to publish null results as readily as positive ones. But Horn’s bet is that a theory with one coherent set of assumptions can finally make the experimental conversation less mystical and more mechanical: turn these knobs, predict these scalings, test this coefficient.

References