I just received an Edge development board (https://www.sparkfun.com/products/15170) DEV-15170 and the hookup guide (https://learn.sparkfun.com/tutorials/sp … okup-guide) says that the default, built-in firmware requires only power and will respond to spoken YES and NO.
My board, however, does nothing besides rapidly flashing the blue LED when power is applied. The hookup guide has no further information on function checks or troubleshooting for this situation.
I managed to find this https://codelabs.developers.google.com/ … sorflow/#1 which gives a bit more info. Summary:
- Apply power via serial adapter or battery, board will flash blue LED and begin listening
- "YES" will light yellow LED. "NO" will light red LED. Unknown speech = green LED, no speech detected = no LEDs lit.
- If the board doesn't respond, a variety of steps should be taken such as:
- Double check battery polarity (blue LED will flash if power is OK)
- Reduce background noise
- Hold board about 10" away from mouth
- Repeat YES in quick succession. e.g. try saying "yes yes yes"
I have taken these steps (and even tried different CR2032 cells) but can't get the board to respond in any way to my speech tests. I say YES, NO, and the suggested "yes yes yes" clearly, following the guidance above, but the only thing the board does is flash the blue LED quickly when power is applied.
Can anyone give me any tips on function checks or other troubleshooting steps for the basic, built-in firmware? I had no immediate plans for development work with the board, but I was going to play with its simple ability to detect YES and NO as-is.
I’ve experienced the same thing. I didn’t expect high recognition rates, but I expected that I would be able to get something to respond sometimes. So far the blue LED blinks and nothing else happens.
So I built the tensorflow_demo example, which sounds like it should do the same thing but be built with the SparkFun BSP updates. Unfortunately, it also just blinks and never recognizes anything. I searched through the code for the place where the LEDs are toggled in response to a voice command, but wasn't able to find where that happens.
Any clues? I’m happy to troubleshoot, but a pointer to the location in the code where the Tensorflow evaluation happens and a pointer to where the LEDs get toggled would be helpful.
Hi there,
Getting a match can sometimes be tricky. I'm not familiar enough with the whole process to say exactly why. Is it the training accuracy or content? Do you need to pretend to be Pete Warden when you say "yes"? Jokes aside, what I can do is point you to the exact part of the code where the decision to turn on an LED is made: see [command_responder.cc](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/examples/micro_speech/sparkfun_edge/command_responder.cc) in the TensorFlow Micro Speech example source. There the GPIO pins are initialised and normally held low; when a match is found, the corresponding LED is set high until the next processing event (as far as I can tell). Hope that helps out!
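For anyone else hunting for it, the decision in that file is essentially a string comparison on the recognized command. A minimal sketch of the mapping (the enum and function name here are illustrative, not the actual SparkFun source, which drives real Apollo3 GPIO pads instead of returning a value):

```cpp
#include <cstring>

// Illustrative sketch of the command -> LED mapping performed in
// command_responder.cc. Names here are made up for clarity; the real
// code sets GPIO pins high/low rather than returning an enum.
enum Led { kNoLed, kYellowLed, kRedLed, kGreenLed };

Led LedForCommand(const char* found_command) {
  if (std::strcmp(found_command, "yes") == 0) return kYellowLed;   // "yes" -> yellow LED
  if (std::strcmp(found_command, "no") == 0) return kRedLed;       // "no" -> red LED
  if (std::strcmp(found_command, "unknown") == 0) return kGreenLed; // other speech -> green LED
  return kNoLed;  // silence -> no LED lit
}
```

In the real responder, the chosen pin is set and the others are cleared each time a new result arrives, which is why an LED only stays lit until the next inference.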
Weeeeeeeelllllllllllllll
The thing is that I bought the board to play with the baked-in demo YES and NO recognition, but that part doesn’t seem to work at all. It would be one thing if it worked poorly but it doesn’t seem to do anything at all.
I know it’s a dev board and I’m not in fact doing the dev part, but are my expectations of the demo features actually unreasonable here?
Hello, I have the same issue.
I have rebuilt from the git sources and it is still not working,
but all the tests performed with `make test` pass.
I added some logging to the sources, and the bug seems to be in the recognizer source, which badly filters the output from the TensorFlow network.
It works better when I remove a piece of code from that source, but I don't fully understand it.
Please help us.
Thanks
Hi SparkFun team,
Your help would be very much appreciated.
Thanks
Hi robotedh, having added logs and removed part of the TensorFlow code yourself, your knowledge of that source code is probably greater than mine. SparkFun did not write the TensorFlow code, so this question is probably better suited for the Google folks. If you have questions about the Edge board hardware, we'd be better able to help in that arena.
Hi, many thanks for this clarification.
So I will continue debugging …
Hello,
Following my debugging session, it appears that the issue is caused by the average computed over the TF results.
The average is computed over about 10 consecutive TF outputs, but when I say 'yes', only one TF output detects 'yes' with a good score above the threshold (200); the outputs computed before and after it have a poor score for 'yes'. The result of the average is therefore smaller than 200, and in the end the 'yes' is not recognized.
So I removed the average calculation, and it works better.
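To put hypothetical numbers on that (the window size of ~10 and the threshold of 200 come from the analysis above; the individual scores are invented for illustration): one frame scoring 230 surrounded by nine frames near 20 averages to (230 + 9 × 20) / 10 = 41, far below 200, so the lone strong detection is averaged away.

```cpp
#include <cstdint>

// Average a window of per-frame scores for one category, as the
// averaging loop in recognize_commands.cc effectively does.
int32_t WindowAverage(const uint8_t* scores, int how_many_results) {
  int32_t sum = 0;
  for (int i = 0; i < how_many_results; ++i) {
    sum += scores[i];
  }
  return sum / how_many_results;
}

// Hypothetical window: one strong "yes" detection (230, above the
// threshold of 200) among weak neighbours. The average comes out to
// 41, so no command is ever reported.
const uint8_t kYesScores[10] = {20, 20, 20, 20, 230, 20, 20, 20, 20, 20};
```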
I have opened the corresponding issue on the TensorFlow GitHub to get confirmation of my analysis:
Issue recognize_commands micro_speech demo #28516
Below are the changes to the source to quickly remove the average calculation:
tensorflow/lite/experimental/micro/examples/micro_speech/recognize_commands.cc
// Calculate the average score across all the results in the window.
int32_t average_scores[kCategoryCount];
for (int offset = 0; offset < previous_results_.size(); ++offset) {
  PreviousResultsQueue::Result previous_result =
      previous_results_.from_front(offset);
  const uint8_t* scores = previous_result.scores_;
  for (int i = 0; i < kCategoryCount; ++i) {
    // if (offset == 0) {
    if (offset == (how_many_results - 1)) {
      average_scores[i] = scores[i];
    } else {
      // average_scores[i] += scores[i];
    }
  }
}
// for (int i = 0; i < kCategoryCount; ++i) {
//   average_scores[i] /= how_many_results;
// }
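For what it's worth, with the averaging commented out the loop above reduces to copying the scores from the newest result in the queue. Something like this (a sketch reusing the names from recognize_commands.cc, not a drop-in patch):

```cpp
#include <cstdint>

constexpr int kCategoryCount = 4;  // silence, unknown, "yes", "no"

// Equivalent effect of the modified loop: average_scores ends up
// holding the newest result's raw scores, untouched by older frames.
void UseLatestScores(const uint8_t* latest_scores, int32_t* average_scores) {
  for (int i = 0; i < kCategoryCount; ++i) {
    average_scores[i] = latest_scores[i];
  }
}
```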