This is regarding the Sparkfun Jetson Nano kit. https://www.sparkfun.com/products/16308
I was excited you guys had the 4 GB Jetson Nano in stock, even if it meant paying double because it was only available in kit form. The issue I’m having is that the Jetson Nano itself does not function correctly unless Dynamic Voltage and Frequency Scaling (DVFS) is disabled. With DVFS enabled (the default), the calculated inferencing results are inaccurate/incorrect; only when I disable DVFS using jetson_clocks will my board make accurate calculations. My speculation is that the dynamic voltage scaling is too aggressive for the particular Jetson module included in my kit: the voltage/frequency combination it tries to run at while scaling may be just aggressive enough to produce incorrect calculations, but not so severe that it leads to a full-blown kernel panic or hard fault. That is pure speculation on my part, though, and I would love any insight or troubleshooting tips to get this board functioning correctly. I’d much rather it be an issue with something other than the Jetson, since the modules are so hard to get hold of, but after all the hours spent troubleshooting I’ve been finding it harder and harder to come up with anything else that could be at fault besides the Jetson module.
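In case it helps anyone reproduce this, here is roughly how I’ve been toggling DVFS for the tests (a sketch using NVIDIA’s jetson_clocks tool; guarded so it does nothing on a non-Jetson machine, and the config file path is just my choice):

```shell
# Only meaningful on a Jetson; the guard makes this a no-op elsewhere.
if command -v jetson_clocks >/dev/null 2>&1; then
    sudo jetson_clocks --store ~/clocks.conf    # save the stock DVFS settings
    sudo jetson_clocks                          # pin clocks to max (disables DVFS)
    sudo jetson_clocks --show                   # confirm the pinned frequencies
    # ...run the inferencing test here...
    sudo jetson_clocks --restore ~/clocks.conf  # put DVFS back afterwards
else
    echo "jetson_clocks not found (not a Jetson)"
fi
```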
I posted an issue on NVIDIA’s getting-started-with-jetson-inference GitHub repo, but was told “I have not seen this behavior before, or been able to reproduce it”. See https://github.com/dusty-nv/jetson-infe … ssues/1240
I have tried wiping the SD card and doing a fresh install multiple times, both skipping updates and installing all current updates, and re-downloading the Docker images, and even the entire getting-started-with-jetson repo, etc.
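For reference, this is the kind of checksum verification I do after each flash, sketched here with a scratch file standing in for the real SD card image (the actual image filename and checksum will of course differ):

```shell
# Illustrative only: a throwaway file plays the part of the OS image.
tmpdir=$(mktemp -d) && cd "$tmpdir"
echo "pretend this is the SD card image" > jetson-nano.img

# Record the checksum of the downloaded image...
sha256sum jetson-nano.img > jetson-nano.img.sha256

# ...then after flashing, re-read the data and compare against it.
sha256sum -c jetson-nano.img.sha256    # prints "jetson-nano.img: OK" on a match
```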
Thinking it could possibly be a power supply issue, even though I’m using the power supply that was included with the kit, I did a quick-and-dirty hack to temporarily monitor the supply voltage while running the inferencing, printing the voltage to a CSV file any time it changed. It all looked like perfectly reasonable voltages running on the 5 V barrel connector.
I ran the command provided below in another bash window while doing the inferencing that was producing the incorrect calculations, to see whether the voltage was sagging out of spec, as I had read that could cause issues, but it all looked fine. I’m sure there’s a cleaner, more elegant way to do it, but it was fast to write and only outputs a new voltage value when it has changed from the previously read value. With DVFS enabled (the default setting) in 10 W mode, a run with the inference network already cached showed a minimum of 4984 mV and a maximum of 5112 mV, and of course produced incorrect calculations. After using jetson_clocks to disable DVFS, the same type of run with the same image showed a minimum of 4976 mV and a maximum of 5080 mV, and now the calculations were actually accurate and correct.

All the samples and test runs were unmodified from the NVIDIA GitHub repo, and I was using the official SD card image for the OS. Every time I erased the SD card to try something else I started over with the official NVIDIA image, and I even verified via checksum that the image was written to the SD card accurately after the writing process finished. The problem was tested both headless and connected to a display, with no difference in accuracy: results were always right when DVFS was disabled, and nearly always wrong when it wasn’t, unless the clock speeds/voltage had already been pushed up high by doing something else, in which case I might get lucky and have it come up with an accurate result.
# Log the INA3221 rail voltage whenever it changes
echo "Voltage,Time" > ~/voltage.csv
prev_v=0
while true; do
    sleep 0.05
    milliV=$(cat /sys/bus/i2c/drivers/ina3221x/6-0040/iio:device0/in_voltage0_input)
    if [ "$prev_v" != "$milliV" ]; then
        echo -n "$milliV mv,"      # millivolt reading, then a timestamp
        date "+%H:%M:%S.%N"
        prev_v=$milliV
    fi
done | tee -a ~/voltage.csv
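To pull the min/max readings out of the resulting CSV afterwards, an awk pass like this works (shown here against a toy CSV in the same shape the loop above writes, since the real file lives on the Jetson):

```shell
# Toy CSV in the same "NNNN mv,HH:MM:SS" shape the monitoring loop produces.
cat > /tmp/voltage.csv <<'EOF'
Voltage,Time
4984 mv,12:00:01.000
5112 mv,12:00:01.050
5040 mv,12:00:01.100
EOF

# Skip the header row and track the min/max millivolt values seen.
awk 'NR > 1 { v = $1 + 0                       # first field is the mV reading
              if (NR == 2 || v < min) min = v
              if (NR == 2 || v > max) max = v }
     END    { printf "min=%d mv, max=%d mv\n", min, max }' /tmp/voltage.csv
# prints: min=4984 mv, max=5112 mv
```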
To be clear, the issue is that with DVFS enabled, instead of properly classifying the sample images it either fails to classify them at all or produces wildly incorrect classifications that make no sense. As soon as DVFS is disabled it produces the right classification for the image every single time, such as class 0950 - 0.966797 (orange) when following one of the examples in the getting-started-with-jetson-inferencing guide at https://github.com/dusty-nv/jetson-infe … -on-jetson
It took quite a bit of troubleshooting just to get to the point where I realized that DVFS seemed to be causing the issue. I’m hoping someone can suggest something to get my kit functioning properly. Thank you so much for your time and assistance.