Through this procedure I have a set of calibrated values (from the getCalibrated…() functions) for my light source and parameters, so I essentially have the spectrum of my light source under specific measurement conditions.
But later, when I make an acquisition by placing a leaf in front of the sensor, is it enough to take the values returned by getCalibrated…() directly, or do I have to perform further operations on the raw data?
I’ve built a pretty darn functional general-purpose spectrometer that displays both absorption and refracted light, and it may be pretty much plug-and-play for your use case. It includes a sampling function that allows for user-specified gain, LED intensity, and sample length (the sampling method is a little different from what I’ve seen online but seems to work great), plus a calibration script for either an all-white or all-black enclosure. I’ve been lazy and haven’t posted to GitHub yet, but I will, including the enclosure STL file for 3D printing.
The way I’m calibrating is described below. Note I am using all sensor “channels,” from visible to IR, as all are diagnostic and information-rich, but they each have their own sensitivity issues, so I’ve tried to find a simple calibration approach that allows for adjustments per channel/spectral range.
Calibration Output:
• Dark Calibration: During dark calibration, the sensor is capturing ambient noise (zero-reference) when no light is present. These values are saved to a file (e.g., dark_calibration.csv) and represent a “baseline offset” to subtract from future runs.
• White Calibration: During white calibration, the sensor captures a maximum light reference. These values are saved to a file (e.g., white_calibration.csv) and represent the “upper bound” to normalize intensity readings for future runs.
This maps the raw intensity to a range between 0 and 1, where:
• 0 represents the dark (minimum intensity).
• 1 represents the white (maximum intensity).
• Values outside this range are clipped (e.g., negative values become 0).
How it works:
The calibration process adjusts the raw sensor output using the dark and white calibration files:
• Calibrated Intensity Formula:
calibrated intensity = (raw intensity - dark intensity) / (white intensity - dark intensity)
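The formula above maps directly to code. Here is a minimal sketch of the normalization-plus-clipping step; the function name and the example numbers are mine for illustration, not taken from the posted scripts:

```python
import numpy as np

def calibrate(raw, dark, white):
    """Normalize raw sensor readings to [0, 1] reflectance using
    per-channel dark and white references."""
    raw = np.asarray(raw, dtype=float)
    dark = np.asarray(dark, dtype=float)
    white = np.asarray(white, dtype=float)

    span = white - dark
    # Guard against a dead channel where white == dark (avoids divide-by-zero).
    span[span == 0] = np.nan

    reflectance = (raw - dark) / span
    # Clip out-of-range values: negatives -> 0, values above white -> 1.
    return np.clip(reflectance, 0.0, 1.0)

# Example with made-up numbers (3 channels instead of 18 for brevity):
dark  = [12, 15, 10]
white = [1012, 915, 810]
raw   = [512, 10, 1200]
print(calibrate(raw, dark, white))  # -> [0.5, 0.0, 1.0]
```

In a real run, `dark` and `white` would come from the saved dark_calibration.csv and white_calibration.csv files.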
I have not measured organic material yet, but I have measured a boatload of minerals and gemstones, and it’s remarkably accurate so far. In many ways, despite the lack of full-spectrum analysis, the fact that the board measures the range it does (something no lab spectrometer I know of does) means it may even exceed lab grade for my use case (i.e., not all bands are equally diagnostic, and some are not even considered in reference libraries of absorption ‘signatures’ by mineral).
I hope that helps. PM me and I can share the code (several Python scripts orchestrated by a master shell script, plus the Arduino code).
Hi all, I am preparing to post the code on GitHub. I went sort of deep with it after coming to see that the long pole in the use of low-cost (or any-cost) sensors for spectrometry is really this kind of thing: calibration, processing of calibrated files for analysis, matching, etc. I have a pretty respectable draft of most of that (including the matching algorithm) with a GUI (in PyQt5), and when I have time and work out the key bugs, I will make it available.

Note the AS7265x sensor is EOS, but I am still using it, and I have also begun using the 7343 and a Toshiba CMOS sensor, which I hope to make compatible with the code (ideally abstracting the code from the sensor so that simple shims can be written to make it sensor-independent).

I am looking for someone to test it out before posting, so if anyone is interested, please PM me. Note the GUI is super simple to use, but there are maybe a dozen or so shell and Python scripts involved, so some proficiency with those will be needed to run the code. You will also need an enclosure, or some familiarity with creating baseline white and dark references. (I use fixed enclosures, so I simply line the interior with Musou Black paint and sample that for the black baseline; for the white baseline, I made a paint with barium sulfate, titanium dioxide, and a quality matte acrylic medium, painted an object that represents the maximum-sized, standard-shaped sample, and measured that.) The code takes care of the rest.

I don’t have a lot of time to spend helping someone get this up and running, so unfortunately I can’t help much beyond basic guidance and readme files, but I could really use the feedback, so I’ll invest the time if someone can provide that, or ideally help enhance the code or the reference datasets (which I will post as well, in this case for minerals, etc.).
Hi @Shemesh99
I would like to see the code for the calibration. The sensor will be used to acquire spectral information from an object inside a black box.
Awesome! There are a few threads on here where folks have shared their calibration routines; if you search the forums for topics on the sensor and scroll a bit, you’ll probably find one that can serve as a general guide as well.
I want to use the spectral channels (for data analysis) from both the raw sensor values (using .getA() and so on) and the calibrated values (.getCalibratedA() and so on). I will have an 18-element array each for the raw and calibrated values (call these arrays getAllChannelsRaw[18] and getAllChannelsCalibrated[18], respectively).
Now, I have an enclosure (as dark as possible) that is not truly light-tight but dark enough to the naked eye to be treated as a dark enclosure. Using .getA()… and .getCalibratedA()… again for the raw and calibrated values, I will obtain an 18-element array captured by the sensor in dark conditions with no light source (neither from the sensor itself nor any other source). I will call this array dark[18].
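One practical refinement when building dark[18]: average several captures rather than taking a single frame, since the dark reading is mostly noise. A sketch, where `read_all_channels()` is a hypothetical stand-in for whatever wraps the .getA()…/.getCalibratedA()… reads (here it just simulates a dark capture):

```python
import numpy as np

N_CHANNELS = 18   # the AS7265x exposes 18 spectral channels
N_FRAMES = 10     # averaging several captures suppresses random noise

rng = np.random.default_rng(0)

def read_all_channels():
    """Hypothetical stand-in for an 18-channel sensor read; simulates
    a dark capture: a small offset plus random noise."""
    return 5.0 + rng.normal(0.0, 0.5, N_CHANNELS)

# Capture N_FRAMES frames and average them into dark[18].
frames = np.stack([read_all_channels() for _ in range(N_FRAMES)])
dark = frames.mean(axis=0)
print(dark.shape)  # (18,)
```

The same averaging applies to the white reference capture.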
Now for the white calibration, whether in a dark or non-dark enclosure: is it correct that I should place a white reference material (such as a clean white paper) in front of the sensor and use a light source (from the sensor itself)?
After that, for the mapping of raw intensity to calibrated intensity: 0 represents no reflectance from the sample under test, 1 represents the highest possible reflectance from the sample, and anything higher or lower is clipped to 1 or 0, respectively. To get the mapped raw intensity:
calibrated intensity = (raw intensity - dark intensity) / (white intensity - dark intensity)
To perform a calibrated measurement more or less requires that you use the same enclosure, light source, sensor and sensor settings.
The only thing that changes between calibration and measurement is that the calibration reference material (e.g. white paper) is replaced by the sample to be measured.
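Putting those steps together, the measure-time pipeline is small: load the saved references, capture the sample under the same enclosure/lighting/settings, normalize, clip. A sketch, with the CSV loading and the sensor read stubbed out by fixed arrays (the file names follow the dark_calibration.csv / white_calibration.csv convention mentioned earlier):

```python
import numpy as np

# At measure time the enclosure, light source, and sensor settings must
# match those used during calibration; only the sample changes.
# dark/white would normally be loaded from dark_calibration.csv and
# white_calibration.csv; fixed 18-element arrays stand in for them here.
dark = np.full(18, 10.0)
white = np.full(18, 1010.0)

def measure_sample():
    """Stand-in for a real 18-channel capture of the sample."""
    return np.linspace(100, 900, 18)

raw = measure_sample()
reflectance = np.clip((raw - dark) / (white - dark), 0.0, 1.0)
print(reflectance.round(3))  # 18 values, all within [0, 1]
```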
Meaning, regardless of the enclosure, it will be compensated for by taking the dark and white calibrations? And after the calibration procedure, is it ready for the sample to be placed inside the enclosure?
This is what I’ve got: the top-left chart shows the unprocessed values (I used the calibrated spectral values, via .getCalibratedA() and so on), and the top-right chart shows the normalized reflectance values.
I used a color (#c5f73b) displayed by my smartphone as the sample.
The bottom-left and bottom-right charts show the dark and white calibrations. The AS7265x board had its lights enabled during the spectral capture.
^As he said; also: you don’t want to normalize against full black and full white, but rather against ‘illuminated’ and ‘not illuminated’ values within a dark testing box/apparatus (at least that’s what I did). You’re setting the bottom end as the darkest reading it should expect, and the top end as the brightest (all 3 LEDs on at full power within the same apparatus, with the same walls and position; also each LED individually).
I usually recommend disabling the power LED (scroll down a bit and sever the referenced jumper pads’ trace), as it will produce unaccounted-for light. You can disable the blue STAT LED with the library/code.
I also don’t know what the phone thing is about.
Anyhow, after doing that you can put a sample into its view and use a preferred LED spectrum (or all 3, depending on use case), and voilà: you have a known background spectrum that you can subtract.
I used my smartphone’s LCD screen to display a yellow-green color (#c5f73b) and used it as the sample. I do not have an object of that color, so I used the phone.
I included the dark calibration to remove the ambient noise, and the white calibration to normalize the illumination intensity and some spectral imbalances.
The top-left and top-right charts show the reflectance of the object under test (the phone displaying the yellow-green color) in the range 0 to 1; anything lower than 0 or higher than 1 is clipped.
From what I observe in the charts of the spectral values (top left) and reflectance (top right), the color displayed on my phone corresponds to the spectrum captured by the sensor (in the range of 510 to 560 nm).
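One way to sanity-check that observation is to map the 18 channels to their nominal center wavelengths and confirm the peak lands where the displayed color predicts. A sketch: the wavelength list is the nominal AS7265x channel set from the datasheet, but the reflectance curve below is illustrative made-up data, not the measured values from the charts:

```python
import numpy as np

# Nominal AS7265x channel center wavelengths in nm, across all three
# sensor dies, per the datasheet.
WAVELENGTHS = np.array([410, 435, 460, 485, 510, 535, 560, 585, 610,
                        645, 680, 705, 730, 760, 810, 860, 900, 940])

# Illustrative reflectance curve peaking in the green (not measured data).
reflectance = np.exp(-((WAVELENGTHS - 545) / 40.0) ** 2)

peak_nm = WAVELENGTHS[np.argmax(reflectance)]
print(peak_nm)  # 535 -- consistent with a yellow-green (#c5f73b) sample
```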
I wonder if I am on the right track in capturing the reflectance and the spectral values.