Re: Unlimited Mileage Electric Vehicles (Part 2)

Don't the node voltages in chart #6 indicate current out of the battery, -49 Amps across the .15R?
My mistake. I tuned it wrong. Thanks for pointing it out.

If we compare a test case, we see that the same current polarity and node voltages occur. So, I had the tuning coil set too high, blocking the power coming from the left-hand side.

Yet, if I reduce the inductance of this coil to be more similar to all of the other versions of this circuit, then I get a transference across it, resulting in an overwhelming influence upon the battery that requires some throttling by raising the resistance of the resistor at the top of this loop.

This is what I have to show, so far. I'm in the midst of making adjustments to fine tune it. I want to believe that all I have to do is reduce the self-inductance of the tuning coil and increase the resistance of the resistor at the top of this loop to get exactly, or nearly, the results I want to achieve. But it may take a whole day to see if I can achieve it, since my computer is very slow. So, I thought I'd post these preliminary results so as to answer your question right away.

Thanks!
 


Sorry for the delay. It took a while to iron out all the "kinks".

Increasing the self-inductance of VC1 & VC2 decreases the gaps in the duty cycle of the 1.81 Ohm resistor. That resistor represents the dead battery pack of 182.5 milliohms of resistance (derived from the 24-count NiMH pack, as read from a chart given to me by Toyota of Carlsbad, who inspected my defunct 2002 RAV4EV), plus a small resistor placed in series with these dead batteries to reduce the amperage and raise the voltage to just the right amount. But there is a cost to increasing the self-inductance of VC1 & VC2: the nodal voltages rise throughout the circuit. So, I settled on this configuration instead.
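As a rough check of those figures, assuming the 1.81 Ohm value is simply the pack resistance plus the added series resistor (my reading of the description above, not a quoted spec):

```python
# Hypothetical back-of-envelope check only: it assumes the 1.81 ohm value is
# simply the dead pack's internal resistance plus the added series resistor.
r_total = 1.81      # ohms, the resistor used in the simulation
r_pack = 0.1825     # ohms, quoted resistance of the dead NiMH pack
r_series = r_total - r_pack
print(f"implied added series resistor: {r_series:.4f} ohm")   # ~1.63 ohm
```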

I forgot to mention that it is common knowledge that resistance corrects power factor, ergo: it puts the current and voltage wave components of electricity back together again. This is why reactive power heats up a circuit. So, other than bringing back steam locomotives running on reactive power, my approach has been to see if I can simulate a process whereby electricity is stretched almost to the point of complete breakdown, so as to take advantage of the ease with which reactive power may be manipulated to increase its amplitude, and then put its pieces back together again through a resistive load, such as a spark gap, an arc lamp, or a chemical resistive load such as a dead pack of batteries. Ossie Callanan spoke of this last possibility in his treatise entitled...
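For reference, here are the standard power-triangle relations usually meant by "power factor"; a minimal sketch with illustrative values, not a model of the circuit in the attachments:

```python
import math

# Conventional AC power triangle for a sinusoidal source driving an R-L load.
# All values below are illustrative only.
v_rms = 120.0              # volts
i_rms = 10.0               # amps
phase = math.radians(30)   # angle between voltage and current waveforms

apparent = v_rms * i_rms               # S, volt-amps
real     = apparent * math.cos(phase)  # P, watts (the part dissipated in resistance)
reactive = apparent * math.sin(phase)  # Q, volt-amps reactive (stored and returned)
power_factor = real / apparent         # = cos(phase)

print(f"S={apparent:.0f} VA  P={real:.0f} W  Q={reactive:.0f} VAR  pf={power_factor:.2f}")
```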

A Working Radiant Free Energy System...
http://fluxite.com/WorkingRadiantEnergy.pdf

and

https://vdocuments.site/working-radiant-energy.html

and

https://archive.org/details/workingradiantenergyossiecallanan

What he calls "radiant" I call reactive 'cuz I think they're equivalent terms for the same phenomenon. But that's my opinion.

The funny-looking, step-wise surges of the input wattage at the sine wave generator are due to momentary spikes of amperage occurring there, which warp their RMS average. But I hold these spikes to be of little consequence, since they're very quick and immediately fall back to their nominal level, which is very small -- far less than their peaks.
 


Reactance is our Friend

Someone's gonna wanna complain that "You can't get more from less", or worse: "You can't get something from nothing".

Well, to forestall their complaints, I'm gonna answer them right now and save them the bother....

That last one is true. But I don't go there.

The first one is true for energy, but not true for reactance of energy. Here's why...

We use a couple of relations: the continuity of electricity (which is closely similar to conservation of energy) and current division.

The continuity of electricity is something we're familiar with: if frequency goes up, amplitude goes down. Or, if voltage goes up, current goes down. So that, overall, the entirety of electricity remains consistent with itself over time despite any changes to any of its particulars. So far, so good...

With current division, we know that if we add another branch load in parallel to any other branch loads, the current demanded of the lone voltage source goes up. In fact, we drain away the amp-hours of the voltage source that much faster with every additional parallel current branch added to a circuit supplied by a single voltage source. This is true for energy, but not true for reactance.
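Here is the conventional current-division arithmetic behind that point, with illustrative values only:

```python
# Conventional current division: each parallel branch draws V/R, and the
# source must supply the sum. Values are illustrative only.
v_source = 12.0                 # volts
branches = [10.0, 10.0, 5.0]    # ohms, three parallel loads

branch_currents = [v_source / r for r in branches]
total_current = sum(branch_currents)

print("branch currents:", [f"{i:.2f} A" for i in branch_currents])
print(f"total drawn from the source: {total_current:.2f} A")
# A 10 Ah battery would be flat in 10 / total_current hours at this rate.
```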

The opposite is true for reactance. Here's why...

As I cited in a previous post, reactance formulates a relationship among several factors: frequency, two times pi, and either capacitance paired with negative resistance or inductance paired with positive resistance. And negative resistance is derived from Mho's Law, in which resistance divided by voltage gives negative current, while positive resistance is derived from Ohm's Law, in which voltage divided by resistance gives the relationship with current with which we are familiar.
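For reference, the textbook reactance formulas this paragraph is riffing on are X_L = 2*pi*f*L and X_C = 1/(2*pi*f*C). A minimal sketch follows; the 20 mH value matches a coil spec quoted later in the thread and the 6 kHz figure matches the surge rate mentioned below, while the 1 uF capacitor is just an example value:

```python
import math

def inductive_reactance(f_hz, l_henry):
    """X_L = 2 * pi * f * L, in ohms."""
    return 2 * math.pi * f_hz * l_henry

def capacitive_reactance(f_hz, c_farad):
    """X_C = 1 / (2 * pi * f * C), in ohms."""
    return 1.0 / (2 * math.pi * f_hz * c_farad)

# Illustrative values only.
print(f"X_L of 20 mH at 6 kHz: {inductive_reactance(6e3, 20e-3):.1f} ohm")
print(f"X_C of 1 uF at 6 kHz:  {capacitive_reactance(6e3, 1e-6):.1f} ohm")
```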

Hence, if I fly into a head-wind, this positive resistance slows me down. But if I fly a plane with a tail-wind, the opposite happens: I speed up. I suspect it's a little different with electricity in that it's not a tail-wind so much as it may be a vacuum appearing ahead of the current. So, I suspect there are two varieties of voltage: one related to pressure and positive resistance and another related to a vacuum and negative resistance.

Anyway...

The continuity of electricity demands a consistency to the overall result of reactance despite any changes to any of its individual factors. So, if frequency should go up, then resistance must go down, or else inductance or capacitance must reduce so that the total reactance is conserved. See how continuity is indelibly linked to conservation?
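As a hedged arithmetic sketch of that "conserved reactance" idea, using only the standard X_L = 2*pi*f*L relation and illustrative numbers (not values taken from the schematic): holding X_L fixed while frequency rises forces the inductance down in proportion.

```python
import math

# If inductive reactance X_L = 2*pi*f*L is to stay fixed while f doubles,
# then L must halve. Illustrative numbers only.
x_target = 754.0            # ohms, the reactance we want to hold constant
for f in (6e3, 12e3, 24e3): # frequency rising
    l_needed = x_target / (2 * math.pi * f)
    print(f"f = {f/1e3:>4.0f} kHz -> L must be {l_needed*1e3:.1f} mH")
```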

But this works in our favor if we are attempting to magnify electricity through a step-wise procedure of nearly splitting electricity (without splitting the atomic matter which is hosting electricity), increasing this lossless power, and then converting reactance back into usable electricity.

This is due to the law of continuity and its implication of conservation of all things! We get conservation to make possible the increase of energy OUT compared to what goes IN to a circuit! What a concept!!

In order to maintain continuity of reactance, if frequency should go up, then resistance must go down. If this is positive resistance inside a coil, then the consequence will be that current must go up. But since reactance is lossless due to its exclusive quality of recycling, more current is not drained from the source. Instead, more current recirculates in the circuit since it's not going anywhere, nor is it being drained from anywhere. It can't drain anything, because it's lossless. Only energy could drain a source. Reactance can't drain any voltage source of its amp-hours. All it can do is zip around like light beams bouncing around inside of a laser device.

So, the current keeps going up along with its frequency and the voltage will go down as a consequence of the lowering of resistance and also to keep consistent with the increase of current -- everything being conserved, overall.

Meanwhile...

In order to maintain continuity of reactance, if frequency should go up, then negative resistance must go down resulting in a rise of voltage (since a decrease of negative resistance is equivalent to an increase of positive resistance). If this is negative resistance inside a capacitor, then the consequence will be that current must go down and voltage must go up. But since reactance is lossless due to its recycling, less current is not drained from the source. Instead, less current recirculates in the circuit since it's not going anywhere, nor is it being drained from anywhere. Instead, voltage goes up with the increase of frequency.

I suspect a condition of reactant inductance occurs within each self-looped set of two or more current coils, since their current goes up while their voltage goes down. Their lack of windings gives far less surface area and less capacitance among the turns, allowing their weak self-inductance to flourish without being superseded by what would otherwise have been the capacitance of a massively wound coil.

And I also suspect a condition of capacitant reactance occurs among the two or more parallel-connected voltage coils since their voltage goes up while their current goes down. I suspect this capacitant reactance is due to the voltage coils possessing a significant level of capacitance among their windings.

We consider this to be standard behavior on either side of a step-up, or step-down, transformer. But this circuit exclusively possesses neither a step-up transformer, nor does it exclusively possess a step-down transformer, since we're not dealing with energy transfer moving in merely one direction from a source to a load. Instead, reactance is constantly being fed back and forth in both directions in a condition of the recycling of lossless power.

The weak ten percent coupling between the transfer coil and the voltage coils seems to favor the relationships described above along with the overall reversal of voltage polarity also contributing to this situation.

The reversal of voltage polarity seems to occur at the bottom of this circuit, at the pair of capacitors being force-fed D/C current without any opportunity to discharge their buildup of voltage. What I think happens is that these capacitors (and the weak capacitance of the transformer sandwiched between them) retaliate by discharging a current-free signal of voltage whenever the four diodes force them to accept voltage after they've already become saturated. This currentless discharge of a mere signal of voltage is in direct opposition to the phase of the voltage being force-fed into them, and at a slightly higher frequency. This is what instigates a rise of frequency in this circuit, while the five coils at the top of the circuit amplify this "stressed" condition, giving an eventual abundance of reactance stretching towards infinity if not cut off by the periodic firing of the spark gap.

Whenever I zoom in for a closer look at the waveforms, I see a triangular wave riding piggyback on top of the sine wave input. This triangular wave grows in amplitude - and in frequency as well - quickly dwarfing the amplitude of its carrier sine wave. So, instead of a wavy sine wave whose peaks and troughs are stable, we get a smooth hyperbolic arch bending upwards towards infinity as the oscilloscope tracing of the simulator stands further and further away from it in order to "take it all in".

At some point, the spark gap on the left kicks in, acting as a resistive load for a split second and putting back together the fragmented reactant waves of current and voltage which this circuit has been separating by 180 degrees of phase, amounting to one-half of an A/C cycle of separation. This momentary departure from reactance collapses the hyperbolic surge to a very low value of nano- or femto-scale power, only to be superseded by another rising surge quickly escalating towards infinity. And this cycle of repetitive surges and collapses occurs 6k times a second in this particular circuit. Every variation of this circuit modifies the frequency of this cyclic occurrence to one degree or another.

So, for a 20% to 30% duty cycle of D/C output, I don't think a D/C to A/C sine wave inverter would mind too much, do you? After all, it's accustomed to outputting a sine wave of 60 Hz while mine is hiccuping a D/C input at a rate 100 times faster! This is what would happen if I were to place an actual build of this simulated circuit behind the battery pack of an EV, sending its output through, or partly in parallel across, a pack of dead batteries and then onward to the car's sine wave inverter before it reaches the twin A/C motors of a 2002 RAV4EV.
 
This seems to be a "Bunch of Malarkey", and there are several obvious fallacies that have apparently not been taken into consideration or accurately simulated. In the separate text you say that the inductor will be 200 pounds per HP, so with 27.5 kW (37 HP) of output it would be 7400 pounds - obviously impractical even if it had any chance of working.

You specify 40 AWG for a 100 mH coil, 30 AWG for a 20 mH coil, and 70 AWG for a 1 uH coil. These windings will have significant resistance, and maximum RMS current well below 1 ampere. You show an output of 346 volts at 191 amps P-P, which is 23.3 kW, assuming a sine wave.
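To put rough numbers on that winding-resistance point, here is a hedged sketch using the standard AWG diameter formula and copper resistivity; the wire lengths are guesses for illustration, since the actual coil geometry isn't specified:

```python
import math

RHO_CU = 1.68e-8  # ohm*m, resistivity of copper at room temperature

def awg_diameter_m(awg):
    """Standard AWG formula: d = 0.127 mm * 92^((36 - n) / 39)."""
    return 0.127e-3 * 92 ** ((36 - awg) / 39)

def resistance_per_meter(awg):
    d = awg_diameter_m(awg)
    area = math.pi * (d / 2) ** 2
    return RHO_CU / area

# Illustrative winding lengths; the real coils' lengths are not given.
for awg, length_m in [(30, 100.0), (40, 100.0)]:
    r = resistance_per_meter(awg) * length_m
    print(f"{awg} AWG, {length_m:.0f} m of wire: about {r:.0f} ohms")
```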

You show several diodes, which apparently must handle currents of over 100 amps, and require PIV of 400 volts or more. That eliminates Schottky devices, and silicon diodes will have a forward voltage drop of at least 1 volt at 100 amps, which is 100 watts per device. Has this been included in the simulation?
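Spelling out that diode-loss arithmetic as a sketch (the duty cycle is an assumed example, not a measured value):

```python
# Conduction loss of a silicon rectifier: P = Vf * I while it conducts.
vf = 1.0        # volts, forward drop quoted above at high current
i_fwd = 100.0   # amps, conduction current quoted above
duty = 0.25     # assumed fraction of time each diode conducts (example only)

p_conducting = vf * i_fwd          # watts while the diode carries current
p_average = p_conducting * duty    # watts averaged over a full cycle
print(f"{p_conducting:.0f} W while conducting, ~{p_average:.0f} W average per diode")
```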

Do a simulation which accounts for these real-world parameters, including the material you will use for the magnetic core (hysteresis and saturation losses), and show your results. Better yet, build it, test it, and show your results.
 
Basically we visualize things in the universe as two sorts: those that move (have motion) and those that don't. In physics we know already that this is in error; there isn't anything in the universe, anywhere, that is motionless. At least it is moving through time, which is still a special kind of motion. Also, we know that everything seems to be made up of much finer things, and these finer things are always in motion - often very violent motion. So what we observe as a passive thing - sitting still spatially, so to speak - is made up of subthings in violent motion spatially. And the whole system that is not moving spatially is still moving in time. However, we don't see "time" but just space; therefore we see the thing as "motionless."

However, the "motionless" thing we look at is rather like a fixed whirlpool in a swiftly flowing river: the whirlpool seems to us to stay "fixed" and motionless, but internally its parts (the flowing water) are in constant motion. Another example is a container of gas under pressure - such as the air tank at the service station. The tank and "the air as a whole spatial volume" aren't going anywhere, and we see them as "motionless." But inside the gas its molecules are in violent motion, undergoing collisions, etc. Indeed, inside the walls of the tank, the molecules and atoms are in vibrational back-and-forth motion in a spatial lattice. The point is, physically "motion" and "motionless" only apply to the external characteristics of the object to which we pin the label. So it represents only an overall characteristic of the object, and does not completely describe it. In a sense "motionless" is filled with motion, and all is motion.

In vector analysis, a scalar quantity is considered to be a quantity that has magnitude or size, but no motion. An example is pressure; the pressure of a gas has a certain value of so many pounds per square inch, and we can measure it, but the notion of pressure does not involve the notion of movement of the gas through space. Therefore pressure is a scalar quantity, and it's a gross, external quantity since it's a scalar.

Note, however, the dramatic difference here between the physics of the situation and the mathematics of the situation. In mathematics, when you say something is a scalar, you're just speaking of a number, without having a direction attached to it. And mathematically, that's all there is to it; the number doesn't have an internal structure, it doesn't have internal motion, etc. It just has magnitude - and, of course, location, which may be attachment to an object. However, physically, when we say something has pressure or a scalar value, that is not all there is to it. That particular aspect of the object or system may be scalar, but internally the thing it's labeling can still be decomposed into subsystems or particles or small things in violent motion. That is, in physics the scalar quantity can mathematically be further decomposed into an ensemble of vector quantities. Since these parts are rushing around in all directions but the whole is not translating through space, then obviously the sum of all those fractional motions must be zero. Scalar pressure, for example, can be decomposed into a myriad of opposing force vectors per unit area.

Mathematically, a vector is an entity that not only may have magnitude or size, but is translating through space. In physics, we apply the vector concept to something that is moving, and/or to position.
However, when we think further, that "something" is made of smaller things, which also are in violent motion, and these smaller things may be swarming all over the place with differing velocities - or even flowing at high speed in and out of the moving "system-thing" represented by the vector. So even here, the vector thing is a special case of an ensemble of smaller things. In the physical world, in anything - even inside a single point - there are always infolded vector things in violent motion. We may say that these interior critters are "hyperspatial" or "infolded" or "virtual" or "hidden". But they're real and they're inside the point, as seen by the external observer. The point is this: everything seen externally is a plenum internally.

In the real physical world, both a thing that's externally motionless (a scalar) and a thing (a vector) that's externally translating through space are special cases of a system whose internal parts are always in motion. If the sum of the internal motions is zero, the external object seems to be sitting still and motionless to us (though it's still moving through time with - usually - uniform motion). We describe that internal characteristic of the system as a vector zero resultant system. Externally we may also characterize it as a scalar, because it still possesses attributes that have magnitude. On the other hand, if the sum of the internal motions is not zero, but is a motion in a certain spatial direction, then to us the external object seems to be moving along in space. That is, it is translating spatially. Externally it has both magnitude and direction, so we view it as a vector.

To label a thing as only a vector is to look only at its external attributes. To label a thing as only a scalar is to look only at its external attributes. To look at its internal attributes, it must be recognized as a scalar and a vector at the same time. That is, the scalar attributes must be recognized to be composed of internal vectors. Summing that up, physically a scalar thing is a thing that (1) is a vector in time, which is hidden from direct observation, (2) externally is just a magnitude spatially, and (3) has an internal spatial vector structure, and therefore a hyperspatial or virtual-state vector structure. A vector is a thing in motion in a dimension (through a frame), whether in space, hyperspace, or time. Rigorously it is not possible to exclusively separate the notions of vector and scalar, because any scalar, to persist, is automatically a vector in time.

These concepts of vector and scalar are normally not nearly so well clarified in standard physics and mathematics texts, unfortunately. Usually discussions of this type are reserved to obscure papers in the foundations of mathematics. It may surprise the casual student, for example, that the notions of line, point, space, zero, length, dimension, frame, time, and observer have no truly acceptable definitions. Neither do the notions of force, mass, field, potential, etc. In fact, mathematics no longer attempts to explain how a line can be made of points. Instead, in foundations, it is simply stated as three postulations, thusly: "There is a class of entities called points. There is another class of entities called lines. Lines are composed of points."
From a physics viewpoint, one of the big problems with the present vector mathematics - which is well known not to be a complete system of mathematics in the first place - is that the presence of a bunch of vectors that sum to zero is just treated as a zero, or absence of any vectors at all. That is, the absence of any internal vectors at all is made synonymous with the presence of a bunch of internal vectors that are fighting each other to a draw. What this does is throw away the internal energy and internal ordered structuring of the medium - specifically, the energy of all the vector fighting that is continually going on inside the local medium - inside spacetime itself. Physically that's quite wrong, and one is throwing away exactly half the energy of the situation. There is a very real physical difference between a system of real vectors that fight to a draw and so do not translate en masse, and the absence of any vectors and vector-fighting at all. The difference is composed of stress and its internal vector patterns - the internal energetic engines in local spacetime and local rest mass - in short, the energy trapped in the local medium.

Where electrical students meet this hidden problem, of course, is in the fact that the four vector Heaviside equations of EM are not closed. One always has to assume that one or more of the "remaining potentials" is zero - that is, absent. So right there all the texts and professors reduce even Heaviside's equations to a special case of the absence of any "left-over and hanging around" scalar potentials. As an example, that little assumption gets rid of any possibility of the Aharonov-Bohm effect, where potentials alone can interfere, even in the absence of EM force fields, and produce real force effects in charged particle systems. That is, the sole agent of the interference of scalar potentials can induce EM changes, according to the experimentally proven Aharonov-Bohm effect, even in the total absence of EM force fields. Since 1959, it has been known in quantum mechanics that the EM force fields are not primary agents at all. We know that classical EM theory is completely wrong on this. QM shows that it's the potentials that are primary, not the force fields. In fact, it can be shown that the E-field and B-field do not exist as such in vacuum; only the potential for the E-field and the B-field exist in vacuum. Feynman pointed that out, but nearly all of his modern cohorts seem not to have recognized that fact. Indeed, vacuum is just a conglomerate of potentials, nothing more, nothing less.

And if you just look carefully at the definitions of force and E-field, you see immediately that (1) force (nonrelativistic case) consists of mass times acceleration. Therefore a force consists of an accelerated mass. An electric force consists of an accelerated charged mass, normalized for a unit. But it really isn't treated that way in the EM theory, where it continues to be erroneously considered to exist as a force field in the vacuum. At least you've got to use the adjectives "virtual" and "observable" to differentiate vacuum things from material things. One can correctly state that a virtual electric force field exists in vacuum, comprised of accelerating virtual masses, but not an observable force field. The observable electric force field requires, and consists of, accelerated observable charged particles. And the only place observable particles exist is in a physical medium, a collection of one or more observable particles in space.
So it doesn't take any special powers of thought to directly show that there are some very serious, fundamental things wrong with the present foundations of EM theory. There are lots of other flaws in EM, such as the fall of the Lorentz force law in modern railgun experiments. The law has always been false, but is a sufficient approximation if the energy regime is not too high. Peter Graneau's work is fundamental in this respect.

To sum this up another way: the present vector analysis (as applied to electromagnetics) discards the internal, trapped EM energy of local spacetime. Now if the internal trapped energy of spacetime varies from place to place, that is called a curved spacetime, relativistically speaking. And when a spacetime is curved, there is communication of energy between the internal, infolded, virtual EM energy state and the external, translating, observable EM energy state. Curved one way, the local spacetime is a sink, with external energy pouring into it continually, and disappearing from observation of the external state. Curved the other way, the local spacetime is a source, with energy pouring out of it continually, and appearing in observation in the external state.

What the present vector system of EM does, therefore, is throw out the ability to use the very strong EM force as an agent to curve local spacetime. The very mathematics itself, a priori, assumes and guarantees a locally flat spacetime. And in an uncurved region of spacetime, for example, you are never going to make an over-unity machine - a so-called "free energy" machine that will give you more energy out than you put in - because the application of the vector theory a priori guarantees the elimination of any hidden sources from the local spacetime (ST) medium. If you're going to tap the trapped vacuum energy, and make a so-called "free energy" device, you're going to have to curve the local spacetime. That is the only way to produce a local energy source in the vacuum, from which a current issues. Notice that, when we put a paddlewheel in a river, we produce a free energy device because we tap some of the energy in the flow. But we tap a current; we do not just tap a potential per se. The entire secret of tapping vacuum energy, to build a free energy device, is to produce a current in the local vacuum potential that is self-sustained, and then tap that current.

So the present EM theory throws away exactly half of the energetics of the situation involved. From time to time yet another physicist discovers that astonishing fact, and publishes a paper to point it out. Nobody does anything about it, however, because no one has the foggiest notion of what to do. So everybody just lets it pass and nothing is changed.

Suppose, for example, you connect a voltmeter across a wall circuit to measure the voltage. The meter needle moves against a spring with a force, as a result of the detection made by the voltmeter. The actual detection is an interaction inside the meter's probe which induces conduction electrons to move. We read the needle movement that resulted from those conduction electrons, and we infer so much voltage. The important point is this: the voltmeter is measuring the energetics of its own internal change; it is not at all measuring anything external. All instruments measure only their own internal change. We infer the external thing that interacted with the instrument to cause the internal change.
We do not measure the external entity directly, but only the results of its interaction inside the probe and meter. And even then, we measure only the external, spatial-translation energy of the instrumental interaction; we do not measure or account for the internal energy of that interaction.

To state it more precisely: the needle moved because conduction electrons accelerated away from the instrument's interaction area. This current flowed into a coil and produced a force on the needle movement, rotating the needle against a spring. At the same time, another current - a time-reversed, phase-conjugate current - was induced in the atomic nuclei of the atoms in the interaction area. This "inner current" flowed Whittaker-wise through the atomic nuclei of the instrument, producing an equal and opposite force. [This is the mechanism that produces Newton's third law in the first place, as suspected by Feynman.] So the entire body mass of the meter recoiled slightly from an equal and opposite force, which we just loosely refer to - and recognize as - Newton's reaction force. It's there, it's real, but we completely neglect it in our electrical measurements. Usually we don't think it had anything to do with the external entity that interacted with the voltmeter. But it was a product of the same interaction of that external "something" within the meter. It's equal and opposite to what generated our electrical measurement. So exactly as much energy was produced in the "reaction force" energetics as was produced in the "external meter needle force" part of the interaction. We only measured and accounted for half the true energy of the interaction, or else you've got to discard Newton's third law.

It follows that what actually entered into the interaction was a system of oppositely paired forces - a stress field, which is a scalar potential. This, of course, is consistent with our observation that vacuum itself is pure potential. As such, it consists of partial potentials of various kinds - it's highly charged, and the ambient vacuum scalar potential has very high magnitude. Remember that this ambient vacuum stress (potential) can be decomposed into sets of bidirectional forces. In our EM interaction, one-half of the stress pair - the half that is the normal photon-generated EM force - was utilized to move the conduction electrons, involving primarily the electron shells of the atoms imbedded in the vacuum potential. The other half of the stress pair interacted with, and moved, the atomic nucleus, causing it to recoil. The recoil of the nucleus was slight, because it is very, very much heavier than the accelerated outer electron.

To sum it up: all detection is actually binary; it's not singular at all. When we detect photons or EM waves, we normally account for only the externalized translation part of the energetics of the interaction. We miss or neglect the internalized translation part, and we miss or neglect precisely as much internal energy as we account for externally. Again, I'm not the first one to point this out by any means.
 