Hello!
I’m interested in joining the latest Artemis contest! These contests force me to learn new things, and machine learning/TensorFlow has been on my wish list (so far I’ve just read/watched all the things TensorFlow and Artemis, prior to actually installing or developing anything).
So far I understand that my project will live on “the edge”, inferring things locally before (if ever) contacting something on a larger network to take further action, or simply acting on its own without dealing with a network at all. A model will be trained ahead of time on fancy machines with the available data, then exported to something (a TensorFlow or TensorFlow Lite model?) that gets imported/implemented via an Arduino sketch. Does this seem correct?
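Just to check my mental model, here’s roughly what I imagine the on-device half looks like, cobbled together from the TensorFlow Lite Micro docs. Everything model-specific here is made up by me: `model_data.h`/`g_model_data` would be whatever the xxd dump of the converted model produces, `readSensor()` is a stand-in for a real Qwiic read, and the arena size is a guess.

```cpp
#include <TensorFlowLite.h>

#include "tensorflow/lite/micro/all_ops_resolver.h"
#include "tensorflow/lite/micro/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"
#include "tensorflow/lite/version.h"

#include "model_data.h"  // hypothetical header holding the xxd-dumped model array

namespace {
tflite::MicroErrorReporter error_reporter;
const tflite::Model* model = nullptr;
tflite::MicroInterpreter* interpreter = nullptr;
TfLiteTensor* input = nullptr;
TfLiteTensor* output = nullptr;

// Scratch RAM for the interpreter; the right size depends on the model.
constexpr int kTensorArenaSize = 8 * 1024;
uint8_t tensor_arena[kTensorArenaSize];
}  // namespace

// Stand-in for a real Qwiic/analog sensor read.
float readSensor() {
  return analogRead(A0) / 1023.0f;
}

void setup() {
  // Map the exported flatbuffer (trained off-device) into a model object.
  model = tflite::GetModel(g_model_data);
  if (model->version() != TFLITE_SCHEMA_VERSION) {
    return;  // model was exported against a different schema version
  }

  // AllOpsResolver pulls in every op; a real sketch would use
  // MicroMutableOpResolver with only the ops the model needs, to save flash.
  static tflite::AllOpsResolver resolver;
  static tflite::MicroInterpreter static_interpreter(
      model, resolver, tensor_arena, kTensorArenaSize, &error_reporter);
  interpreter = &static_interpreter;

  interpreter->AllocateTensors();
  input = interpreter->input(0);
  output = interpreter->output(0);
}

void loop() {
  // One float feature in, one float score out (my assumption, not a rule).
  input->data.f[0] = readSensor();
  if (interpreter->Invoke() == kTfLiteOk) {
    float score = output->data.f[0];
    (void)score;  // act on the inference locally, no network required
  }
  delay(1000);
}
```

My understanding is the model itself would come out of a TFLiteConverter step on the desktop and then get dumped to a C array with xxd, but I haven’t actually run that part yet.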
The part I’m stuck on is the potential machine learning limitations of a microcontroller IoT device. Most of what I’m seeing is pre-canned models for dealing with images, something that seems out of reach for an Artemis project (am I wrong?). Audio processing seems readily available, though, and I can likely infer things from “all the pins” (whatever Qwiic thing(s) are connected, cool). I’m not really seeing many examples of IoT models, non-image specifically. Is the purpose of this contest to start generating these for improved traction?
I’m also struggling with the line between general development (scream when the temperature is cold) and a model (I infer that the temperature is colder than usual, scream now). Perhaps I just need to keep reading?
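To make that distinction concrete for myself, here’s a toy contrast. It’s entirely made up: the “model” is just a hand-rolled logistic function standing in for a real interpreter call, and the weights are fake.

```cpp
#include <math.h>

float readTempC() { return 3.2f; }             // stub; real code reads a sensor
void scream() { /* buzzer, LED, whatever */ }

// General development: I write the rule and pick the threshold myself.
void ruleBased() {
  if (readTempC() < 5.0f) {  // 5 °C chosen by me, up front
    scream();
  }
}

// Model-style: the decision boundary comes from parameters learned from
// data. The weights below are placeholders a trained model would supply.
void modelBased() {
  const float w = -0.8f, b = 4.0f;  // “learned”, hypothetically
  float score = 1.0f / (1.0f + expf(-(w * readTempC() + b)));
  if (score > 0.5f) {               // model says: colder than usual
    scream();
  }
}
```

If I’ve got it right, only the second version needs training data; the first one is just firmware.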
Do you have ideas for good related examples I can look at? I’m looking for more ideas before settling on something. Maybe I’ll just bug the lightning-app submission guy and join his team (I’ve been wanting to triangulate lightning position, though maybe not with machine learning, and it’s no longer raining…)
Thanks for tolerating my late-night post, and thanks in advance for any guidance!