
· Registered · 2,162 Posts
fwiw, I did find this:
http://www.sciencedirect.com/science/article/pii/S0378775316300015

which implies that, for lack of per-cell current monitoring, it is better to match levels by capacity and impedance for longevity, at least in simulation. The more cells in parallel, and the less you push them toward their current and capacity limits, the less critical the matching becomes.

And they state that the capacity and impedance changes aren't uniform over time, even if the cells all start out the same.

soo, you pays your money, you takes your chances :)

6. Conclusions and future work
The primary results from the experimental and simulation work presented highlights that cells with different impedances and capacities connected in parallel do not behave in a uniform manner and can experience significantly different currents. The distribution of cell current is shown to be a complex function of impedance, including the high frequency aspects typically ignored for single cell models, and the difference in SOC between cells. As a conventional BMS design does not monitor current within parallel units, some cells may be taken above their intended operating current, or be aged more quickly due to increased charge throughput and ohmic heat generation – shortening the lifespan of the overall battery pack.
but... (SOH=State of Health)
This implies that they should age slower than the other cells, and so it is expected that gradually the SOHs of the cells within the parallel unit will converge.
 

· Registered · 2,162 Posts
the thing not being discussed is that internal resistance matters more than capacity when it comes to determining how current is shared among parallel cells, and the two are not directly related. Both change over the life of the cell, in a less-than-predictable fashion too. The mismatch mostly seems to affect the lifespan of the healthier cells if the pack is pushed hard, there are few cells in parallel, and the pack is regularly taken beyond 80/20.
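To make the mechanism concrete (my arithmetic, not the paper's): for two cells in parallel at the same SOC, and therefore roughly the same open-circuit voltage, the instantaneous split is set almost entirely by resistance, I1/I2 ≈ R2/R1. A cell with 20% lower internal resistance carries roughly 20% more current regardless of its capacity; capacity only shows up later, as the SOCs (and so the open-circuit voltages) drift apart.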


 

· Registered · 2,162 Posts
fwiw, grouping all the high-resistance cells together creates other problems, e.g. heat concentration.

I think "reasonable" safety factors on state of charge/discharge and max current are in order, depending how much variance you see in your cells, regardless if you make each level the same capacity/resistance or make each cell within a level the same capacity/resistance.

My gut is that if they are all within 10%, distributed is better. And charge to 80%, discharge to 20%, and limit to 3C. But that is a swag, and your use case may differ.

plus a sensitive bms should tell you when something has changed at a level.
 

· Registered · 2,162 Posts
My gut is that if they are all within 10%, distributed is better.
but then again, in this simple example the lowest-resistance "cell" sees 37 amps and the highest-resistance one sees 30.3 amps. Even though the distributed pack has slightly less total resistance (9.933 ohms instead of 10), being ±10% on resistance can mean a large swing in the current demand per "cell" (~20%, no surprise there).

Group by resistance at least; it's a quick test.
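For anyone who wants to poke at the arithmetic, here is a rough sketch of the kind of calculation behind that example. The layout is an assumption on my part: three series levels of three parallel "cells" at 9, 10, and 11 ohms, fed from a 1000 V source, chosen only because it reproduces the 9.933 ohm / ~37 A / ~30.3 A figures above.

```c
#include <stdio.h>

int main(void)
{
    /* Toy pack: 3 series levels, each level = 3 parallel "cells".
     * Distributed build: every level mixes one 9, one 10 and one 11 ohm cell.
     * All values are assumptions, picked to reproduce the figures above. */
    const double r[3]   = {9.0, 10.0, 11.0}; /* per-"cell" resistance, ohms */
    const double v_pack = 1000.0;            /* assumed source voltage      */
    const int    levels = 3;

    /* One mixed level: parallel resistance = 1 / sum(1/Ri) */
    double g_level = 0.0;
    for (int i = 0; i < 3; i++)
        g_level += 1.0 / r[i];
    double r_level = 1.0 / g_level;
    double r_pack  = levels * r_level;       /* series levels just add */

    /* Grouped-by-value build: each level is 3 identical cells, Ri/3 each */
    double r_grouped = (r[0] + r[1] + r[2]) / 3.0;

    double i_total = v_pack / r_pack;        /* same current through every level */
    double v_level = i_total * r_level;      /* drop across each mixed level     */

    printf("distributed pack: %.3f ohms, grouped pack: %.3f ohms\n",
           r_pack, r_grouped);
    for (int i = 0; i < 3; i++)
        printf("  %4.1f ohm cell carries %.1f A\n", r[i], v_level / r[i]);
    printf("spread: ~%.0f%% of the average branch current\n",
           100.0 * (v_level / r[0] - v_level / r[2]) / (i_total / 3.0));
    return 0;
}
```

It prints roughly 37.0 A, 33.3 A, and 30.3 A for the three branches, about a 20% spread, even though the distributed pack's total resistance is slightly lower than the grouped one's.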
 


· Registered · 2,162 Posts
conceptually, if drift is an issue, you can modify what major said: take a voltage measurement at current A, then one at current B, then another at current A, and use the average of the two A readings. They should be fairly brief measurements anyway.
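In arithmetic terms (the currents and voltages below are made-up example numbers): the averaged A readings cancel the open-circuit-voltage drift during the test, and the resistance falls out of delta-V over delta-I.

```c
#include <stdio.h>

/* DC resistance estimate from three brief measurements:
 *   V at current A, V at current B, V at current A again.
 * Averaging the two A readings cancels (to first order) any OCV drift
 * during the test.  All numbers below are made-up example values. */
int main(void)
{
    double i_a  = 5.0,  i_b = 50.0;   /* test currents, amps (assumed) */
    double v_a1 = 3.292;              /* first reading at current A    */
    double v_b  = 3.220;              /* reading at current B          */
    double v_a2 = 3.288;              /* second reading at current A   */

    double v_a   = 0.5 * (v_a1 + v_a2);        /* drift-compensated A voltage */
    double r_int = (v_a - v_b) / (i_b - i_a);  /* ohms                        */

    printf("estimated internal resistance: %.4f ohm\n", r_int);
    return 0;
}
```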
 

· Registered · 2,162 Posts
fyi folks here design BMS's for fun so don't expect much production value.

caveats:

balancing: don't bother. It is more likely to hurt than help (except maybe in OEM units). If you can install a BMS, you can manually balance, and it is rarely if ever needed. Really you just need an idiot light to tell you something is out of balance. Letting automatic balancing quietly paper over a drifting cell means ignoring cell-specific aging information anyway.

units that cause imbalance: a BMS whose own uneven draw creates the imbalance it is supposed to manage is just dumb.

diy packs occasionally get opened: deliberately or accidentally, if you break the circuit between two cells (or a cell fails open), whatever is bridging the gap will be exposed to a high reverse voltage, and if the signaling is level shifted or spans multiple cells (rather than being isolated) the damage can cascade.

power draw: keep it very tiny or it will kill more batteries than you can count. Zero draw when not charging or running is ideal; a lot of batteries have died just sitting there. Powering the BMS via isolated DC-DC converters is one, albeit expensive, method. Op amps can reduce the voltage-sensing load, or you can put an opto-controlled MOSFET on the divider/CPU power, etc. etc.

accuracy: you probably still need a calibration method/rig after assembly. It could be software (constants in EEPROM), some solder pads on unused logic pins, or (gag) trim pots.

network speed, reliability, and noise immunity vs. ease of installation, cost, etc.: depending on what you are hoping to accomplish, having all the nodes measure their cells at exactly the same time (combined with a concurrent overall current measurement) might be important, or it might not. It is often acceptable to trigger a concurrent measurement with a broadcast, then go back and collect the results individually.

Bus-type networks have addressing headaches though, i.e. each node needs to be addressable, and often that means more solder pads for the end user or firmware/EEPROM updates to give it an address. And to give meaningful messages to the user, the address has to relate to the battery cell in question.

Plus you should consider the physical layer: twisted pairs (power, signal/ground or differential), constant-current signaling (i.e. digital 4-20 mA), possible clock lines, sr lines, etc. etc.

for simple installation, I tend towards a UART token-ring style (preferably with hardware majority vote and a crystal), as it only needs a twisted pair between BMS nodes and can handle automatic addressing easily on initialization. If you are brave you can level shift between nodes; otherwise it is just an opto per interconnection. It doesn't do concurrent readings, but it works. And you can add low-power/sleep modes that wake on signal (just keep pinging the first node from the BMS controller until it hears back from the last node), lots of protocol fun.
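Just to illustrate the ping-to-wake idea, here is a toy host-side simulation, not real firmware; the node count, frame behaviour, and "drops the frame while waking" assumption are all mine.

```c
#include <stdio.h>
#include <stdbool.h>

/* Host-side toy simulation of "keep pinging the first node until the
 * last node answers".  A real node would be an MCU behind an opto. */
#define NODES 8

static bool awake[NODES];   /* each node sleeps until it sees traffic */

/* Pass a wake/enumerate frame down the chain.  A sleeping node wakes on
 * the incoming edge but misses the frame; an awake node counts the hop
 * and forwards.  Returns -1 if the frame was dropped mid-chain,
 * otherwise the hop count seen at the far end. */
static int send_wake_frame(void)
{
    int hops = 0;
    for (int n = 0; n < NODES; n++) {
        if (!awake[n]) {
            awake[n] = true;   /* woke up, but too late to forward this one */
            return -1;
        }
        hops++;                /* store-and-forward: count and pass along */
    }
    return hops;
}

int main(void)
{
    int tries = 0, hops;
    while ((hops = send_wake_frame()) < 0)
        tries++;               /* keep pinging the first node */
    printf("chain awake after %d pings, %d nodes reported\n", tries + 1, hops);
    return 0;
}
```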

And having said and learned all that, this is what I use on 48 V systems :) (monitor it like you would a gas gauge; it also gives you the pack voltage if you multiply the reading by 2, and if the numbers get too different it is time to investigate)
 

· Registered · 2,162 Posts
I'm now looking at trying to make the simplest possible single cell stackable BMS.
"attiny24"
why do you need a divider though, and a voltage reference in that case?

fwiw, this is a pretty deep rabbit hole, and you should know how to program to work with microcontrollers effectively. I did sort out an assembly routine for the tiny44 once (same as the 24); having no hardware UART made assembly the only choice for the timing (literally counting instructions). It had majority vote, and I reduced the frame size (below what a typical hardware UART would allow) so it would run reliably without a crystal oscillator, in a daisy chain (TX of cell 0 goes to RX of cell 1, etc).

It had auto addressing (the command packet carried a counter that each node stored, incremented, and forwarded) and chain functions, like "give me the max voltage in the string", or the min, or the sum, plus individually addressable voltages, with room for a temperature reading as well (thermistor, with max and min chain commands and apa), plus an LED to help troubleshooting (i.e. follow the chain until the LEDs stop lighting for network problems, or look for the blinking LED to identify the battery in question), and sleep/wake modes.
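To make the frame-handling idea concrete, here is a toy C model of the store-increment-forward addressing and a "max voltage in the string" chain command. The frame layout, command values, and cell voltages are all made up for illustration; the real routine was hand-timed AVR assembly.

```c
#include <stdio.h>
#include <stdint.h>

/* Host-side sketch of daisy-chain auto addressing plus a chain function. */
#define NODES 6

typedef struct {
    uint8_t  cmd;       /* which chain function is being run     */
    uint8_t  address;   /* auto-addressing counter, bumped per node */
    uint16_t value;     /* running result, here millivolts       */
} frame_t;

enum { CMD_ENUMERATE, CMD_MAX_MV };

/* What each node does when a frame arrives on RX: fold in its own
 * reading or take an address, then clock the frame out of TX. */
static void node_handle(frame_t *f, uint16_t my_mv, uint8_t *my_addr)
{
    switch (f->cmd) {
    case CMD_ENUMERATE:
        *my_addr = f->address++;   /* store, increment, forward */
        break;
    case CMD_MAX_MV:
        if (my_mv > f->value)
            f->value = my_mv;      /* running maximum down the chain */
        break;
    }
}

int main(void)
{
    uint16_t cell_mv[NODES] = {3310, 3290, 3345, 3301, 3322, 3287};
    uint8_t  addr[NODES];

    frame_t f = { CMD_ENUMERATE, 0, 0 };
    for (int n = 0; n < NODES; n++)     /* frame walks down the chain */
        node_handle(&f, cell_mv[n], &addr[n]);

    f = (frame_t){ CMD_MAX_MV, 0, 0 };
    for (int n = 0; n < NODES; n++)
        node_handle(&f, cell_mv[n], &addr[n]);

    printf("last node got address %d, max cell = %u mV\n",
           (int)addr[NODES - 1], (unsigned)f.value);
    return 0;
}
```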

That routine wasn't one CPU per cell, but it is easily modifiable for that; at that point it is literally just the CPU and an opto as the bare minimum, since the voltage range is acceptable. Though you still may need a calibration step after assembly (and it might need to measure Vcc and internal temperature to be accurate).

so while "simplest" can be optimized for hardware, it doesn't make the software (the part you are avoiding :) ) simpler. Siloing hardware from software in microcontrollers while trying to optimize is a bad mix. If you can wash your hands at the programming side, then by the same token the programmer can wash their hands at the hardware side, and just make whatever you give them work, more or less, suboptimally.

Though I'm concerned that you are ignoring very relevant comments regarding methodology, as if you see fortune in this direction, and that is what kills batteries and floods the market with junk. I think you missed that ship by about 8 years though.
 

· Registered · 2,162 Posts
The internal bandgap reference is only good for +/-10% in ideal conditions, clearly a better reference is needed.
Not if you know that there is also an internal temperature sensor, and you calibrate it (storing the constants in flash/EEPROM) and also account for Vcc changes (and have that calibrated too). A lot of things can be made into software problems; that is why knowing programming (and the data sheets) matters for "simplest" hardware with microcontrollers. And even with an external reference you need to calibrate the ADC as part of the assembly/programming step, and probably account for changes in Vcc and temperature too if you are trying for accuracy, only now the temperature of the reference isn't as tightly coupled to the internal temp sensor...
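A sketch of what "make it a software problem" looks like: derive Vcc from a reading of the internal bandgap taken with Vcc as the ADC reference, then correct the cell reading with per-unit constants from the calibration step. adc_read(), the channel ids, and the calibration constants here are placeholders (the real MUX values and bandgap/temp-sensor details are in the ATtiny24 datasheet), so treat this as an illustration, not drop-in firmware.

```c
#include <stdio.h>
#include <stdint.h>

#define CH_BANDGAP 0   /* placeholder channel ids, not real MUX values */
#define CH_TEMP    1
#define CH_CELL    2

/* Stand-in for the real ADC driver: returns fake 10-bit readings so the
 * sketch runs on a PC.  On the ATtiny this would set ADMUX and convert. */
static uint16_t adc_read(uint8_t ch)
{
    static const uint16_t fake[3] = {340, 280, 990};
    return fake[ch];
}

int main(void)
{
    /* Per-unit constants found at the calibration step (EEPROM in practice) */
    const double bandgap_v   = 1.095;   /* this chip's actual bandgap, volts */
    const double temp_offset = 275.0;   /* raw temp reading at 25 C          */
    const double temp_gain   = 1.0;     /* counts per degree C               */
    const double cell_gain   = 1.0021;  /* divider / ADC gain correction     */

    /* Vcc from the bandgap reading: reading = bandgap / Vcc * 1024 */
    double vcc = bandgap_v * 1024.0 / adc_read(CH_BANDGAP);

    /* Die temperature, mainly useful for correcting drift of the above */
    double temp_c = 25.0 + (adc_read(CH_TEMP) - temp_offset) / temp_gain;

    /* Cell voltage measured against Vcc, corrected by the calibration gain */
    double cell_v = adc_read(CH_CELL) / 1024.0 * vcc * cell_gain;

    printf("Vcc=%.3f V  die=%.1f C  cell=%.3f V\n", vcc, temp_c, cell_v);
    return 0;
}
```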

Did you know the attiny24 also has a differential ADC with a gain stage? (which, sigh, also needs calibrating, and very careful hardware consideration)
 

· Registered · 2,162 Posts
ah, RC enthusiast! (I like the sailplanes)

Just FYI, a lot of the research shows significant degradation in lithium batteries if they are charged fully to 4.2 V; i.e. in my Leaf I used the timers to set it to an 80% charge and keep it out of the red, as has been discussed on the mynissanleaf forum. Sitting at full charge is also bad, and heat makes it worse, so the low voltage you reported on receiving the cells is actually a reasonable long-term storage voltage (around 30% SOC, IIRC). I charge my Leaf cells to ~4.1 V. Overcharging and over-discharging reduce cell life exponentially, so you get less total energy out of the cell over its lifespan. I know the RC guys tend to max it out and just replace the pack more often, and cell phones want to maximize their battery claims, but at car scale that gets expensive.

This is part of the battery management domain that you are entering.

I think you missed my point on the CPU though: the attiny24/44 can get reproducible results with minimal external hardware, and is cost effective even at the one-CPU-per-cell level, but there is a large initial software cost, plus building a test/calibration rig with which you validate each unit produced. And you are likely to run into the same problems with any microcontroller, unless you pay a lot for it. The devil is in the details, as they say.

So for a "cheap" personal pack, it doesn't make a lot of sense to get fancy with the bms hardware either. If it costs 3x the cost of the battery, then just get 4x the battery instead and simplify the monitoring.

also, there are concerns with the stated need for balancing: I think there may be errors in your testing procedure (and you are comparing 1p to 10p). You are claiming, without really backing it up, that there is a need and that you have the solution. It is rather circular, like inventing a BMS with built-in imbalance and then using that as the reason for adding balancing. Then saying you don't want to "crack open" the pack when you know each cell will have a lead routed externally anyway...

There are a lot of holes in the reasoning here, and in the implementation, and this is for an extra 5 miles range on a leaf? Do you have any idea how to interface with the leaf battery?

I guess the problem is that you are stating things as absolutes, but you haven't done all your homework yet or properly backed up those assertions, despite being told otherwise by numerous experienced folks.
 

· Registered · 2,162 Posts
Is a FET needed for level shifting communication?
FYI, an NPN-PNP pair can level shift as well; the problem is the difference in potential between the points where the CPUs are referenced, since the protection diodes on the I/O pins will clamp high or low (thus protecting).

fwiw there is a lot of bms discussion and a fair amount of my own blather here:
http://www.diyelectriccar.com/forums/showthread.php/bms-design-guidelines-82646.html



though I should make an update to the resistor network post, as it has a problem. And the current-shifting version after this diagram should have better noise immunity.
 