I want to perform hardware tests and need a good system for them. The core is a repo with Python scripts. What should they output? I want to be able to trigger a test run for a PCB and provide some information about it: version/revision, adjustments (free-form comment), and software version (e.g. a git commit hash). The results should be saved in a structured way, together with all that information plus a datetime. The test results are a latency, a THD (input vs. output), and an SNR (input vs. output); in the future possibly also images, but for now basically four numbers. What are my options to structure this nicely, run tests easily, and view the results?
Closed Loop Audio Test Suite
Overview
Laptop <-> Audio Interface -> Beacon -> nRF Ref Board or Scout -> Audio Interface (-> Laptop)
Audio Interface -> Audio Interface (loopback)
Compare the two paths: loopback vs. radio.
Capabilities
- Measure latency (round trip)
- Measure THD input vs output
- Measure SNR input vs output
- Compare the Fourier transform of input vs. output
For now, start by detecting the audio interface and playing a test tone on both channels. Record the audio on both input channels and measure the latency (most likely 0 ms right now, so don't be confused). Calculate the THD and SNR for both channels.
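Cross-correlating the recording against the played tone is one standard way to get the latency. A minimal sketch, assuming NumPy and recordings as 1-D float arrays (the synthetic signal here stands in for a real capture):

```python
import numpy as np

def measure_latency_ms(reference: np.ndarray, recorded: np.ndarray, fs: int) -> float:
    """Estimate the delay of `recorded` relative to `reference` (in ms)
    as the lag of the cross-correlation peak."""
    corr = np.correlate(recorded, reference, mode="full")
    lag = int(np.argmax(corr)) - (len(reference) - 1)
    return 1000.0 * lag / fs

# Synthetic check: delay a noise burst by 48 samples at 48 kHz (= 1 ms).
fs = 48000
rng = np.random.default_rng(0)
ref = rng.standard_normal(4800)
rec = np.concatenate([np.zeros(48), ref])
print(measure_latency_ms(ref, rec, fs))  # -> 1.0
```

A non-repeating test signal (noise burst, chirp) is important here: with a periodic tone the correlation peak is ambiguous modulo one period.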
Use the connected Focusrite Scarlett interface; figure out how to use it.
Play a non-repeating test tone to determine the latency between input channel 1 and input channel 2.
Visualize all the results.
Play a sine at different frequencies, 5 seconds per frequency, and compute the THD of channel 1 and of channel 2 to compare the quality loss.
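THD can be read off an FFT as the ratio of harmonic amplitudes to the fundamental. A simplified sketch, assuming NumPy; a production version should integrate power over a few bins around each harmonic rather than taking the single nearest bin:

```python
import numpy as np

def thd_percent(signal: np.ndarray, fs: int, f0: float, n_harmonics: int = 5) -> float:
    """THD in percent: RMS of harmonics 2..n+1 relative to the fundamental f0."""
    window = np.hanning(len(signal))
    spectrum = np.abs(np.fft.rfft(signal * window))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)

    def amp(f):
        # amplitude at the bin nearest to frequency f
        return spectrum[np.argmin(np.abs(freqs - f))]

    fund = amp(f0)
    harmonics = np.sqrt(sum(amp(f0 * k) ** 2 for k in range(2, n_harmonics + 2)))
    return 100.0 * harmonics / fund

# Synthetic check: 1 kHz sine with a 1 % second harmonic added.
fs = 48000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 1000 * t) + 0.01 * np.sin(2 * np.pi * 2000 * t)
print(f"{thd_percent(sig, fs, 1000):.2f} %")  # roughly 1 %
```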
Don't do the Fourier comparison yet.
Keep the project simple.
I want you to write a new test: feed a 1 kHz sine into the system and record both channels for x seconds, e.g. 60. Detect buzzing and other artifacts in the recording and report the number of artifacts found. Make the detection algorithm configurable so we can try different approaches.
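One way to make the detection configurable is a registry of detector functions that all map a recording to a list of artifact positions. A sketch assuming NumPy; the detector name, the 6-sigma threshold, and the 10 ms merge window are assumptions to tune:

```python
import numpy as np

def detect_clicks(x: np.ndarray, fs: int, threshold: float = 6.0) -> list[int]:
    """Flag sample indices where the first difference exceeds
    `threshold` standard deviations (crude click/pop detector)."""
    d = np.abs(np.diff(x))
    hits = np.flatnonzero(d > threshold * d.std())
    # collapse hits closer than 10 ms into a single artifact event
    return [int(h) for i, h in enumerate(hits)
            if i == 0 or h - hits[i - 1] > fs // 100]

DETECTORS = {"clicks": detect_clicks}  # add more approaches here

def count_artifacts(x: np.ndarray, fs: int, method: str = "clicks", **kwargs) -> int:
    return len(DETECTORS[method](x, fs, **kwargs))

# Synthetic check: clean 1 kHz sine with two injected clicks.
fs = 48000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 1000 * t)
sig[10000] += 1.0
sig[30000] += 1.0
print(count_artifacts(sig, fs))  # -> 2
```

Buzzing would need a different detector (e.g. energy at non-harmonic frequencies), which slots into the same `DETECTORS` dict.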
Again, feed it into the audio interface and measure both the loopback and the radio path, as in the other test.
============
Implement Matrix test
Test matrix axes:
- Fast / Robust
- 16k / 24k / 48k
- Mono / Stereo
- Presentation delay 10 / 20 / 40 / 80

For each combination, test:
- Latency
- Latency buildup (yes/no)
- Maybe: audio quality, but this makes the test run really long.
Plot a table with the results and also compare against a 'baseline' measurement.
Use the existing tests as a guideline how to save the results.
For setting the test parameters, use the API at http://beacon29.local:5000/init:

curl -X 'POST' \
  'http://beacon29.local:5000/init' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "qos_config": {
      "iso_int_multiple_10ms": 1,
      "number_of_retransmissions": 2,
      "max_transport_latency_ms": 23
    },
    "debug": false,
    "device_name": "Auracaster",
    "transport": "",
    "auracast_device_address": "F0:F1:F2:F3:F4:F5",
    "auracast_sampling_rate_hz": 16000,
    "octets_per_frame": 160,
    "frame_duration_us": 10000,
    "presentation_delay_us": 10000,
    "manufacturer_data": [null, null],
    "immediate_rendering": false,
    "assisted_listening_stream": false,
    "bigs": [
      {
        "id": 12,
        "random_address": "F1:F1:F2:F3:F4:F5",
        "language": "deu",
        "name": "Broadcast0",
        "program_info": "Vorlesung DE",
        "audio_source": "device:ch1",
        "input_format": "auto",
        "loop": true,
        "precode_wav": false,
        "iso_que_len": 1,
        "num_bis": 1,
        "input_gain_db": 0
      }
    ],
    "analog_gain": 50
  }'
The broadcast has to have the name Broadcast0. The matrix values map to API parameters as follows:
- QoS fast: "number_of_retransmissions": 2, "max_transport_latency_ms": 23
- QoS robust: "number_of_retransmissions": 4, "max_transport_latency_ms": 43
- Mono: "num_bis": 1
- Stereo: "num_bis": 2
- 16k: "auracast_sampling_rate_hz": 16000, "octets_per_frame": 40
- 24k: "auracast_sampling_rate_hz": 24000, "octets_per_frame": 60
- 48k: "auracast_sampling_rate_hz": 48000, "octets_per_frame": 120
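The full matrix is a Cartesian product of the four axes, so it can be enumerated mechanically. A sketch that turns the mapping above into per-combination payload fragments for the /init call (the `label` field is an assumption for bookkeeping):

```python
from itertools import product

QOS = {
    "fast":   {"number_of_retransmissions": 2, "max_transport_latency_ms": 23},
    "robust": {"number_of_retransmissions": 4, "max_transport_latency_ms": 43},
}
RATES = {"16k": (16000, 40), "24k": (24000, 60), "48k": (48000, 120)}
CHANNELS = {"mono": 1, "stereo": 2}
PRESENTATION_DELAYS_MS = [10, 20, 40, 80]

def combinations():
    for qos, rate, ch, pd in product(QOS, RATES, CHANNELS, PRESENTATION_DELAYS_MS):
        hz, octets = RATES[rate]
        yield {
            "label": f"{qos}-{rate}-{ch}-pd{pd}",
            "qos_config": QOS[qos],
            "auracast_sampling_rate_hz": hz,
            "octets_per_frame": octets,
            "presentation_delay_us": pd * 1000,
            "num_bis": CHANNELS[ch],
        }

configs = list(combinations())
print(len(configs))  # 2 * 3 * 2 * 4 = 48 combinations
```

Each fragment would be merged into the base /init body (which also carries Broadcast0 and the device address) before triggering the run.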
The results shall be plotted as a table, with the presentation delay across the columns and the QoS/rate combinations down the rows:

            Presentation delay
            10            20            40            80
            Mono  Stereo  Mono  Stereo  Mono  Stereo  Mono  Stereo
Fast 16k
Fast 24k
Fast 48k
Robust 16k
Robust 24k
Robust 48k
For each combination, run the latency test. If the test fails, print "fail"; otherwise print the latency in ms. Optional: also run the buildup test for 20 seconds and just report whether there is a buildup or not. Optional: also run the quality test for 3 minutes per combination and display the err/min.
The result shall be saved as YAML (like in all the other scripts). It is important to save the API call as well.
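Saving the exact API payload next to the measurements makes a run reproducible. A sketch assuming PyYAML and illustrative field names (mirror whatever the existing scripts actually use):

```python
import datetime
import yaml  # PyYAML; assumed to match the other scripts in the repo

def save_result(path: str, results: dict, api_payload: dict) -> None:
    """Write one YAML document with timestamp, the /init call, and results."""
    doc = {
        "datetime_utc": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "api_call": {
            "url": "http://beacon29.local:5000/init",
            "body": api_payload,
        },
        "results": results,
    }
    with open(path, "w") as f:
        yaml.safe_dump(doc, f, sort_keys=False)

save_result("matrix_result.yaml",
            {"fast-16k-mono-pd10": {"latency_ms": 42.0}},
            {"device_name": "Auracaster"})
print(open("matrix_result.yaml").read())
```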
And create an image with the table.
There should be a feature to compare a measurement against a 'baseline' measurement. Failed tests should be colored red, tests significantly worse than the baseline orange, better values green, and no change plain white.