# LiFePO4 or NMC discharge heat-up calculation?

I would like to determine the heat generated by a battery under load.
You can calculate the heat generated over time if you know the internal resistance.
Internal resistance is often specified (as a range), but that doesn't tell the whole story:
I am sure the internal resistance during discharge varies over time with the power drawn (higher and lower C rates)
as well as with temperature.
What I would like to know is how to determine that, and in a theoretical way, because I don't have the batteries on hand to do any testing.
This is also because I still need to choose which batteries to use.

So for example, take a typical LiFePO4 cell of around 300 Ah (could be EVE or CATL) at an ambient temperature of 10 degrees Celsius.
Then put a 2C load on it, so 600 A. Let's assume the internal resistance is 0.25 mOhm, because that is often listed.
P = I^2 * R, so (600 A)^2 * 0.00025 Ohm = 90 W dissipated in the cell, right?

Let's say the cell weighs 5.5 kilograms.
The specific heat of LiFePO4 is listed at 1130 J/kg·K (but this number varies with temperature, right?).

A 1-minute load of 600 A means 90 W for 60 seconds, which is 1.5 watt-hours = 5400 joules.

Then, with the heat capacity of LiFePO4 at 1130 J/kg·K:
5.5 kg * 1130 J/kg·K means the cell needs 6215 joules to heat up 1 degree, right?
So after 60 seconds it will have risen by almost 1 degree, 0.87 degrees to be more precise.
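The arithmetic above can be sketched in a few lines (the numbers are the ones from this example; the 0.25 mOhm resistance and 1130 J/kg·K specific heat are the assumed datasheet values, not measurements):

```python
# Back-of-envelope cell heat-up, using the example numbers above.
# R_int and c_p are assumed datasheet values, not measured.
I = 600.0          # discharge current, A (2C on a 300 Ah cell)
R_int = 0.25e-3    # internal resistance, ohm
m = 5.5            # cell mass, kg
c_p = 1130.0       # specific heat of an LFP cell, J/(kg*K)
t = 60.0           # load duration, s

P = I**2 * R_int               # Joule heating, W
Q = P * t                      # dissipated energy, J
dT = Q / (m * c_p)             # adiabatic temperature rise, K

print(f"P = {P:.0f} W, Q = {Q:.0f} J, dT = {dT:.2f} K")
# -> P = 90 W, Q = 5400 J, dT = 0.87 K
```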

Does the above sound about right or did I make big errors?
I expect the internal resistance to change with temperature and with the amount of current drawn.
The same goes for the specific heat, but that might not have a huge impact on the calculations.
It also seems that the internal resistance grows as the state of charge drops.
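Those dependencies can be folded into a simple time-stepping model. The R(T, SOC) function below is a hypothetical placeholder with made-up coefficients, only to show the structure; real coefficients would have to come from the cell datasheet or testing:

```python
# Time-stepping sketch of cell heat-up with a resistance that varies
# with temperature and state of charge. The r_internal() model is
# HYPOTHETICAL (made-up coefficients) and only illustrates the shape:
# colder cells and low SOC both push the resistance up.
def r_internal(temp_c, soc):
    r0 = 0.25e-3                                  # assumed baseline at 25 C, ohm
    temp_factor = 1.0 + 0.01 * (25.0 - temp_c)    # colder -> higher R
    soc_factor = 1.0 + 0.5 * max(0.0, 0.2 - soc)  # rises below 20% SOC
    return r0 * temp_factor * soc_factor

def simulate(i_amp=600.0, cap_ah=300.0, mass=5.5, cp=1130.0,
             t0=10.0, dt=1.0, duration=60.0):
    temp, soc, t = t0, 1.0, 0.0
    while t < duration and soc > 0.0:
        p = i_amp**2 * r_internal(temp, soc)  # Joule heating, W
        temp += p * dt / (mass * cp)          # adiabatic lumped cell
        soc -= i_amp * dt / 3600.0 / cap_ah   # coulomb counting
        t += dt
    return temp

print(f"cell temp after 60 s: {simulate():.2f} C")
```

Note that starting at 10 C instead of 25 C, this toy model already dissipates noticeably more than the constant-resistance 90 W.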

Of course, this leaves out any heating of busbars and other connections, or the inverter/motor etc. It is just about the batteries.

Background: I am checking whether I should use LiFePO4, which I could keep at at least 10 degrees Celsius (which is better when you draw some current)
in an insulated box. Of course, under load (2C will be the maximum motor power draw) it will heat up, and due to the insulation that heat is not so easy to lose.
However, if the temperature rise is acceptable, then I think it would be a great setup, skipping a battery cooling system for simplicity.
But in general I am trying to understand the theory, to later verify it in a practical test.

Edit: For some NMC EV cells you can find a bit more info, such as the discharge internal resistance under several conditions.
See the example below of a Samsung SDI 94 Ah cell (as used in the BMW i3).
At 94 Ah you need three in parallel to be comparable in capacity (OK, NMC has a higher voltage, but let's keep it simple).
So an internal resistance of 0.7 mOhm becomes 0.23 mOhm, and thus a heat-up profile very comparable to LFP?
Only the specific heat is a bit lower in J/kg·K, so they heat up slightly faster?
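A quick sketch of that parallel comparison (the 0.7 mOhm per NMC cell and 0.25 mOhm LFP values are from the text; the 2.1 kg per-cell NMC mass and 950 J/kg·K NMC specific heat are assumed ballpark values for illustration only):

```python
# Three 94 Ah NMC cells in parallel vs one 300 Ah LFP cell under 600 A.
# Per-cell resistances are from the thread; the NMC mass (2.1 kg/cell)
# and specific heat (950 J/kg.K) are ASSUMED ballpark figures.
i_total = 600.0                     # 2C load on the ~300 Ah pack, A

r_lfp = 0.25e-3                     # ohm, single LFP cell
r_nmc = 0.7e-3 / 3                  # ohm, three NMC cells in parallel

p_lfp = i_total**2 * r_lfp          # Joule heating, W
p_nmc = i_total**2 * r_nmc          # Joule heating, W

heat_cap_lfp = 5.5 * 1130.0         # J/K (5.5 kg LFP cell)
heat_cap_nmc = 3 * 2.1 * 950.0      # J/K (assumed NMC mass and c_p)

print(f"LFP: {p_lfp:.0f} W -> {p_lfp / heat_cap_lfp * 60:.2f} K/min")
print(f"NMC: {p_nmc:.0f} W -> {p_nmc / heat_cap_nmc * 60:.2f} K/min")
```

With these assumed masses the heat-up rates come out in the same ballpark, which matches the intuition above; the real answer depends on the actual per-cell mass and specific heat.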
1 - 2 of 2 Posts

#### serious_sam

> Stop assuming stuff! I am still going to use a BMS in a feedback-control way as always, and yes, it will go turtle when the batteries are hot. I don't want to do feedforward control; I never wrote that.
>
> I am just trying to get a rough idea of the amount of energy lost to the batteries heating up. My goal is to get an idea of what it would be like if I don't use any active cooling on the pack. It will be a rough idea; there are a lot of parameters that change the outcome in reality. However, it will still give me a basic idea. For example, if it goes into turtle mode a lot when driving in summer, I might need a different strategy.
Just ignore them.

IMO, in the context of what you're trying to accomplish, you're on the right track with your approach.

Some things I would take into consideration:
• Joule heating (P=I^2R) is the predominant heating factor.
• There is also reaction (reversible entropy) heating, which in some cases can be endothermic. This is a smaller contributor than the Joule heating, and at higher discharge rates you might be able to ignore this term in your calculations. Either way, you should look into it and make a more informed judgement.
• I think that cooling rate will be the greatest source for error.
• One option is to go to the extreme and assume no heat transfer out, which gives you a conservative result. That is great if the calculated temperature rise is within the cell limits, because it can only get better with real-world cooling. But if the calculated rise is too high, it doesn't really tell you much, except that you do need cooling.
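That zero-cooling bound is easy to compute for a full discharge, using the LFP numbers from the original post (constant 0.25 mOhm assumed, which is optimistic at low SOC):

```python
# Adiabatic worst case: full 2C discharge with no heat leaving the cell.
I = 600.0            # A (2C on a 300 Ah cell)
R = 0.25e-3          # ohm, assumed constant over the discharge
m, cp = 5.5, 1130.0  # cell mass (kg) and specific heat (J/(kg*K))

t_discharge = 300.0 / 600.0 * 3600.0   # 0.5 h at 2C -> 1800 s
Q = I**2 * R * t_discharge             # total heat dissipated, J
dT = Q / (m * cp)                      # adiabatic temperature rise, K

print(f"heat = {Q / 1000:.0f} kJ, adiabatic rise = {dT:.1f} K")
# -> heat = 162 kJ, adiabatic rise = 26.1 K
```

Starting from 10 C, that puts the cell near 36 C after a continuous full-power discharge even with zero cooling, which is the kind of bound the bullet above describes.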

#### serious_sam

· Registered
Joined
> One option is to go to the extreme and assume no heat transfer out, which gives you a conservative result. That is great if the calculated temperature rise is within the cell limits, because it can only get better with real-world cooling. But if the calculated rise is too high, it doesn't really tell you much, except that you do need cooling.
If you ignore cooling, you can simplify the calculation to the cell level.

Temperature rise is proportional to discharge rate (blue line).

In reality I would expect the temperature rise to be lower due to cooling (something resembling the red line).
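The two cases can be reproduced with the same lumped-cell model plus Newton's law of cooling. The heat transfer coefficient `hA` below is a hypothetical placeholder, since the real value depends entirely on the enclosure, mounting, and airflow:

```python
# Lumped cell: adiabatic rise (blue-line case) vs Newton cooling (red).
# hA is a HYPOTHETICAL overall heat transfer coefficient (W/K).
def temp_curve(hA, p_heat=90.0, m=5.5, cp=1130.0, t_amb=10.0,
               dt=1.0, duration=1800.0):
    temp, t, out = t_amb, 0.0, []
    while t <= duration:
        out.append(temp)
        # net power = Joule heating minus heat lost to ambient
        temp += (p_heat - hA * (temp - t_amb)) * dt / (m * cp)
        t += dt
    return out

adiabatic = temp_curve(hA=0.0)   # blue line: no heat transfer out
cooled = temp_curve(hA=2.0)      # red line: some cooling to ambient

print(f"after 30 min: adiabatic {adiabatic[-1]:.1f} C, "
      f"cooled {cooled[-1]:.1f} C")
```

With any positive `hA` the cooled curve bends below the adiabatic one, exactly the red-vs-blue behavior described above.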
