Example of "audio recognition" trained in tf-1.15.0 isn't able to recognize sound "Yes" on the board

I am blocked running the “audio recognition” (micro_speech) example from the TensorFlow source tree on the SparkFun Edge development board. I have reported the issue at https://github.com/tensorflow/tensorflow/issues/33778. Now I am wondering whether you folks at SparkFun might know the answer as well :slight_smile:

What I hope to learn is simple: the repository ships a pretrained “audio recognition” model in binary form (tensorflow/lite/experimental/micro/examples/micro_speech/micro_features/tiny_conv_micro_features_model_data.cc). What are the exact environment and procedure used to generate that file?
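For context, the pipeline documented alongside the example (which I tried to follow) trains with the speech_commands scripts and then converts the frozen graph into a TFLite flatbuffer. Below is a rough Python sketch of the conversion stage under TF 1.15; the tensor names and the mean/std-dev values are copied from the training instructions I found and are assumptions here, so please double-check them against your own checkout:

```python
# Sketch of the conversion stage, assuming TF 1.15 and a frozen graph from
# the speech_commands scripts, roughly:
#   python tensorflow/examples/speech_commands/train.py \
#       --model_architecture=tiny_conv --preprocess=micro --window_stride=20 \
#       --wanted_words="yes,no" --quantize=1 ...
#   python tensorflow/examples/speech_commands/freeze.py \
#       --model_architecture=tiny_conv --preprocess=micro --window_stride=20 \
#       --wanted_words="yes,no" --quantize=1 \
#       --start_checkpoint=... --output_file=/tmp/tiny_conv.pb
import tensorflow as tf

# Tensor names here are an assumption; verify them against your own .pb.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    "/tmp/tiny_conv.pb",
    input_arrays=["Reshape_2"],
    output_arrays=["labels_softmax"],
)
converter.inference_type = tf.lite.constants.QUANTIZED_UINT8
# (mean, std_dev) for the quantized input, as quoted in the training docs.
converter.quantized_input_stats = {"Reshape_2": (0.0, 9.8077)}
with open("/tmp/tiny_conv.tflite", "wb") as f:
    f.write(converter.convert())
```

The .cc file in the tree is then just that flatbuffer dumped as a C array, e.g. `xxd -i /tmp/tiny_conv.tflite > tiny_conv_micro_features_model_data.cc`.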

When I flash the prebuilt file, the board can recognize words like “yes”. But when I went through the procedure from your guide and burnt my “same” file onto the board, it isn’t able to recognize the same word :frowning:

Any help you could provide would be much appreciated.

Many thanks,

I read through your issue on GitHub and think that the Google/TF folks would be better suited to answer the question. The portion that I am familiar with (building the image for the microcontroller and flashing it) looked good. Our process for getting the voice recognition to work has never strictly included training the model (rather, we simply follow the ‘Using AI on a Microcontroller’ tutorial that you linked to) - I’ve always just relied on the mysterious built-in model. Sorry this wasn’t more helpful.

Could you take a look at https://github.com/tensorflow/tensorflow/issues/33778 when you have time :slight_smile:? After some effort, train.py now works against a particular commit, and the resulting model recognized almost every “yes” once I lowered “detection_level” to 150. But the prebuilt binary model that the TF repository provides works at “detection_level” = 200. So if we want to improve the model quality up to that higher confidence level, do you have any ideas? Thanks!
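For anyone following along: I believe “detection_level” corresponds to the detection_threshold argument of the C++ RecognizeCommands class in micro_speech (recognize_commands.h). Here is a rough Python re-sketch of what that class does, just to illustrate why lowering the value makes detections fire more easily; it simplifies away the repeat-suppression logic of the real code:

```python
from collections import deque

class RecognizeCommands:
    """Simplified Python sketch of micro_speech's C++ RecognizeCommands."""

    def __init__(self, labels, average_window_ms=1000, detection_threshold=200):
        self.labels = labels
        self.average_window_ms = average_window_ms
        self.detection_threshold = detection_threshold  # uint8 scale, 0-255
        self.history = deque()  # (timestamp_ms, per-label scores) pairs

    def process(self, timestamp_ms, scores):
        self.history.append((timestamp_ms, scores))
        # Drop inference results that have fallen out of the averaging window.
        while timestamp_ms - self.history[0][0] > self.average_window_ms:
            self.history.popleft()
        # Average each label's score across the window to smooth out noise.
        count = len(self.history)
        averaged = [sum(s[i] for _, s in self.history) / count
                    for i in range(len(self.labels))]
        top = max(range(len(self.labels)), key=lambda i: averaged[i])
        # Lowering detection_threshold (e.g. 200 -> 150) makes the runtime
        # report words more readily, at the cost of more false positives.
        if averaged[top] >= self.detection_threshold:
            return self.labels[top]
        return None
```

So 150 vs. 200 isn’t baked into the flatbuffer itself; it’s how confident the averaged output has to be before the runtime reports a word. Getting the averaged “yes” score reliably above 200 would presumably mean training a model whose output is sharper, perhaps by training longer or with more data.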