Discussion and questions related to the course Understanding AFR
So I'm curious what an experienced tuner would say the average time delay is between commanding a certain amount of fuel at a given RPM and when the wideband picks up the resulting air/fuel ratio and sends that information to the ECU.
For example, if I take a data log of a WOT run on a dyno and see a slightly lean spike in the log at 6000 RPM, would the change need to be made slightly earlier, maybe at 5900 RPM?
I'm aware there are a lot of variables that can affect this, such as exhaust gas flow, how far downstream the wideband is placed, the response speed of the sensor itself, etc. What would you say the average delay is when correcting a fueling table problem?
This is primarily a function of mass flow rate, which at wide open throttle is most influenced by engine speed (i.e. the transport takes less than half the time at 6000 RPM as at 3000 RPM). So if the wideband is in the collector, the transport delay is on the order of a couple of engine cycles (say 60 milliseconds at 3000 RPM, and 25-30 ms at 6000 RPM). The additional calculation / communication delay is probably on the order of 2-50 ms, depending on sample rate.
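To make the arithmetic behind those numbers explicit, here's a minimal sketch. It assumes a four-stroke engine (one cycle = two crank revolutions) and treats the transport delay as a fixed number of engine cycles plus a fixed sensor/communication delay; the parameter values (`n_cycles`, `comm_delay_s`) are illustrative defaults, not measured for any specific setup.

```python
def cycle_time_s(rpm: float) -> float:
    """One four-stroke engine cycle takes two crank revolutions."""
    return 120.0 / rpm

def estimated_delay_s(rpm: float, n_cycles: float = 1.5,
                      comm_delay_s: float = 0.02) -> float:
    """Exhaust transport delay (in engine cycles) plus a fixed
    calculation/communication delay. Both parameters are rough
    assumptions to be tuned per car."""
    return n_cycles * cycle_time_s(rpm) + comm_delay_s

for rpm in (3000, 6000):
    print(f"{rpm} RPM: ~{estimated_delay_s(rpm) * 1000:.0f} ms total delay")
```

With `n_cycles = 1.5` and no communication delay this reproduces the transport figures above (60 ms at 3000 RPM, 30 ms at 6000 RPM); the communication term then adds on top.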
You can test this on an individual car by changing one fuel table cell to be 5-10% rich, doing the pull, and noting the RPM at which the rich spike shows up in the log. The difference in RPM between the cell and the measured spike is your delay.
I generally run a 500 RPM/sec ramp rate for WOT runs on my dyno, and with my O2 sensors located at the collectors, I'd expect the logged lambda to be 25-40 RPM (i.e. less than 0.1 second) behind at 6000 RPM.
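The RPM offset on a ramp run falls straight out of ramp rate times delay; a quick sketch of that conversion (function name is just illustrative):

```python
def rpm_offset(ramp_rate_rpm_per_s: float, delay_s: float) -> float:
    """During a constant-rate ramp, a time delay shows up in the log
    as an RPM offset of (ramp rate) x (delay)."""
    return ramp_rate_rpm_per_s * delay_s

# A 500 RPM/s ramp with a 60 ms total delay:
print(rpm_offset(500, 0.06))  # 30.0 RPM
```

So the 25-40 RPM figure above corresponds to a total delay of 0.05-0.08 s at a 500 RPM/s ramp rate.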
That makes perfect sense and helps a lot.
Thanks for the advice!