feature/network_audio #6
4  .gitignore  vendored
@@ -38,3 +38,7 @@ __pycache__/
*/.env
wg_config/wg_confs/
records/
src/auracast/server/stream_settings.json
src/auracast/server/certs/per_device/
src/auracast/.env
@@ -1,9 +1,9 @@
# TODO: investigate using -alpine in the future
FROM python:3.11
FROM python:3.11-bookworm

# Install system dependencies and poetry
RUN apt-get update && apt-get install -y \
    iputils-ping \
    iputils-ping portaudio19-dev \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
194  README.md  Normal file
@@ -0,0 +1,194 @@
## Local HTTP/HTTPS Setup with Custom CA

This project provides a dual-port Streamlit server setup for local networks:

- **HTTP** available on port **8502**
- **HTTPS** (trusted with custom CA) available on port **8503**

### How it works
- A custom Certificate Authority (CA) is generated for your organization.
- Each device/server is issued a certificate signed by this CA.
- Customers can import the CA certificate into their OS/browser trust store, so the device's HTTPS connection is fully trusted (no browser warnings).
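From a Python client, that same trust relationship can be exercised with the standard `ssl` module. A minimal sketch; the `certs/ca/ca_cert.pem` path is taken from the cert layout in the Usage steps and may differ in your checkout:

```python
import ssl

# In practice, point the context at the distributed CA file, e.g.:
#   ctx = ssl.create_default_context(cafile="certs/ca/ca_cert.pem")
ctx = ssl.create_default_context()

# Default client contexts verify both the certificate chain and the
# hostname - the same checks a browser performs once ca_cert.pem is
# installed in its trust store.
print(ctx.verify_mode == ssl.CERT_REQUIRED)
print(ctx.check_hostname)
```

A connection wrapped with such a context only succeeds if the server presents a chain signed by the loaded CA.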
### Usage

1. **Generate Certificates**
   - Run `generate_ca_cert.sh` in `src/auracast/server/`.
   - This creates:
     - `certs/ca/ca_cert.pem` / `ca_key.pem` (CA cert/key)
   - **Distribute `ca_cert.pem` or `ca_cert.crt` to customers** for installation in their trust store.
   - This is a one-time operation for your organization.

2. **Start the Server**
   - Run `run_http_and_https.sh` in `src/auracast/server/`.
   - This starts:
     - HTTP Streamlit on port 8500
     - HTTPS Streamlit on port 8501 (using the signed device cert)

3. **Client Trust Setup**
   - Customers should install `ca_cert.pem` in their operating system or browser trust store to trust the HTTPS connection.
   - After this, browsers will show a secure HTTPS connection to the device (no warnings).
### Why this setup?
- **WebRTC and other browser features require HTTPS for local devices.**
- Using a local CA allows trusted HTTPS without needing a public certificate or exposing devices to the internet.
- HTTP is also available for compatibility/testing.
### Advertise Hostname with mDNS
```bash
cd src/auracast/server
sudo ./provision_domain_hostname.sh <new_hostname> <new_domain>
```
- Example:
  ```bash
  sudo ./provision_domain_hostname.sh box1 auracast.local
  ```
- The script will:
  - Validate your input (no dots in hostname)
  - Set the system hostname
  - Update `/etc/hosts`
  - Set the Avahi domain in `/etc/avahi/avahi-daemon.conf`
  - Restart Avahi
  - Generate a unique per-device certificate and key signed by your CA, stored in `certs/per_device/<hostname>.<domain>/`.
- The certificate will have a SAN matching the device's mDNS name (e.g., `box1-summitwave.local`).

---
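The input rules the script enforces (single-label hostname, optionally multi-label Avahi domain) can be sketched in Python. This is an illustration of the stated rules; the exact checks inside `provision_domain_hostname.sh` may be stricter:

```python
import re

# One valid DNS label: letters/digits/hyphens, not starting or ending
# with a hyphen.
LABEL = re.compile(r"^[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?$")

def validate_names(hostname: str, domain: str) -> bool:
    # Hostname must be a single label - no dots allowed.
    if not LABEL.match(hostname):
        return False
    # The Avahi domain may be multi-label (e.g. "auracast.local"),
    # but every label must itself be valid.
    return all(LABEL.match(part) for part in domain.split("."))

print(validate_names("box1", "auracast.local"))   # True
print(validate_names("box1.test", "local"))       # False: dot in hostname
```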
### Troubleshooting & Tips
- **Use a .local domain** (e.g., `box1-summitwave.local`) - most clients will not resolve multi-label domains.
- **Hostnames must not contain dots** (`.`). Only use single-label names for the system hostname.
- **Avahi domain** can be multi-label (e.g., `auracast.local`).
- **Clients may need** `libnss-mdns` installed and `/etc/nsswitch.conf` configured with `mdns4_minimal` and `mdns4` for multi-label mDNS names.
- If you have issues with mDNS name resolution, check for conflicting mDNS stacks (e.g., systemd-resolved, Bonjour, or other daemons).
- Some Linux clients may not resolve multi-label mDNS names via NSS; test with `avahi-resolve-host-name` and try from another device if needed.

---

After completing these steps, your device will be discoverable as `<hostname>.<domain>` (e.g., `box1.auracast.local`) on the local network via mDNS.

---
## Checking Advertised mDNS Services
Once your device is configured, you can verify that its mDNS advertisement is visible on the network:

- **List all mDNS services:**
  ```bash
  avahi-browse -a
  ```
  Look for your hostname and service (e.g., `box1.auracast.local`).
- **Check specific hostname resolution:**
  ```bash
  avahi-resolve-host-name box1.auracast.local
  avahi-resolve-host-name -4 box1.auracast.local  # IPv4 only
  avahi-resolve-host-name -6 box1.auracast.local  # IPv6 only
  ```
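Resolution can also be checked from Python via the system resolver, which goes through NSS and therefore exercises the `libnss-mdns` configuration mentioned in the troubleshooting tips (a sketch; the hostname is an example):

```python
import socket

def resolves(hostname: str) -> bool:
    # getaddrinfo() uses the system resolver (NSS), so success for a
    # .local name requires the mdns4_minimal/mdns4 setup described above.
    try:
        socket.getaddrinfo(hostname, None)
        return True
    except socket.gaierror:
        return False

print(resolves("localhost"))  # True on a normally configured system
# resolves("box1.auracast.local") -> True once Avahi advertising works
```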
## Run the application with the local webui
- For microphone streaming via the browser, HTTPS is required.
- `poetry run multicast_server.py`
- `sudo -E PATH="$PATH" bash ./start_frontend_https.sh`
- `bash start_mdns.sh`
## Managing Auracast systemd Services

You can run the backend and frontend as systemd services for easier management and automatic startup on boot.

### 1. Install the service files
Copy the provided service files to your systemd directory (requires sudo):

```bash
sudo cp auracast-server.service /etc/systemd/system/
sudo cp auracast-frontend.service /etc/systemd/system/
```

### 2. Reload systemd
```bash
sudo systemctl daemon-reload
```

### 3. Enable services to start at boot
```bash
sudo systemctl enable auracast-server
sudo systemctl enable auracast-frontend
```

### 4. Start the services
```bash
sudo systemctl start auracast-server
sudo systemctl start auracast-frontend
```

### 5. Stop the services
```bash
sudo systemctl stop auracast-server
sudo systemctl stop auracast-frontend
```

### 6. Disable services from starting at boot
```bash
sudo systemctl disable auracast-server
sudo systemctl disable auracast-frontend
```

### 7. Check service status
```bash
sudo systemctl status auracast-server
sudo systemctl status auracast-frontend
```

If you want to run the services as a specific user, edit the `User=` line in the service files accordingly.
# Setup the audio system
```bash
sudo apt update

sudo apt remove -y libportaudio2 portaudio19-dev libportaudiocpp0
echo "y" | rpi-update stable

sudo apt install -y pipewire wireplumber pipewire-audio-client-libraries rtkit cpufrequtils
cp src/service/pipewire/99-lowlatency.conf ~/.config/pipewire/pipewire.conf.d/
sudo cpufreq-set -g performance
```

Add the following to `/etc/modprobe.d/usb-audio-lowlatency.conf`:
```
options snd_usb_audio nrpacks=1
```

Build PortAudio from source against PulseAudio/PipeWire:
```bash
sudo apt install -y --no-install-recommends \
    git build-essential cmake pkg-config \
    libasound2-dev libpulse-dev pipewire ethtool linuxptp

git clone https://github.com/PortAudio/portaudio.git
cd portaudio
git checkout 9abe5fe7db729280080a0bbc1397a528cd3ce658
rm -rf build
cmake -S . -B build -G"Unix Makefiles" \
    -DBUILD_SHARED_LIBS=ON \
    -DPA_USE_ALSA=OFF \
    -DPA_USE_PULSEAUDIO=ON \
    -DPA_USE_JACK=OFF
cmake --build build -j$(nproc)
sudo cmake --install build  # installs to /usr/local/lib
sudo ldconfig               # refresh linker cache
```
# Device commissioning
- generate id_ed25519 keypair
- setup hostname
  - `sudo bash src/auracast/server/provision_domain_hostname.sh box7-summitwave local`
- activate aes67 service
  - install udev rule for ptp4l
    - `sudo cp src/service/aes67/90-pipewire-aes67-ptp.rules /etc/udev/rules.d/`
    - `sudo udevadm control --log-priority=debug --reload-rules`
    - `sudo udevadm trigger`
  - `bash src/service/update_and_run_aes67.sh`
- `poetry config virtualenvs.in-project true`
- `poetry install`
- activate server and frontend
  - `bash src/service/update_and_run_server_and_frontend.sh`
- update to latest stable kernel
  - `echo "y" | rpi-update stable`
- place cert
- disable pw login
- reboot
# Known issues:
- When running on a laptop, there may be issues switching between USB and browser audio input, since both use the same audio device.
@@ -1,6 +1,6 @@
services:
  multicaster:
    container_name: multicaster-test
    container_name: multicaster
    privileged: true # Grants full access to all devices (for serial access)
    restart: unless-stopped
    ports:
24  docker-compose-webui.yaml  Normal file
@@ -0,0 +1,24 @@
services:
  multicaster:
    container_name: multicast-webapp
    privileged: true # Grants full access to all devices (for serial access)
    restart: unless-stopped
    ports:
      - "8501:8501"
    build:
      dockerfile: Dockerfile
      ssh:
        - default=~/.ssh/id_ed25519 #lappi
        #- default=~/.ssh/id_rsa #raspi
    volumes:
      - "/dev/serial:/dev/serial"
      - "/dev/snd:/dev/snd"
    #devices:
    #  - /dev/serial/by-id/usb-ZEPHYR_Zephyr_HCI_UART_sample_81BD14B8D71B5662-if00
    environment:
      LOG_LEVEL: INFO

    # start the server and the frontend
    command: >
      bash -c "python ./auracast/server/multicast_server.py & streamlit run ./auracast/server/multicast_frontend.py"
1323  poetry.lock  generated
File diff suppressed because it is too large
@@ -6,13 +6,16 @@ requires-python = ">=3.11"
dependencies = [
    "bumble @ git+ssh://git@ssh.pstruebi.xyz:222/auracaster/bumble_mirror.git@12bcdb7770c0d57a094bc0a96cd52e701f97fece",
    "lc3 @ git+ssh://git@ssh.pstruebi.xyz:222/auracaster/liblc3.git@7558637303106c7ea971e7bb8cedf379d3e08bcc",
    "sounddevice",
    "aioconsole",
    "fastapi==0.115.11",
    "uvicorn==0.34.0",
    "aiohttp==3.9.3",
    "sounddevice (>=0.5.1,<0.6.0)",
    "aioconsole (>=0.8.1,<0.9.0)"
    "aioconsole (>=0.8.1,<0.9.0)",
    "numpy (>=2.2.6,<3.0.0)",
    "streamlit (>=1.45.1,<2.0.0)",
    "aiortc (>=1.13.0,<2.0.0)",
    "sounddevice (>=0.5.2,<0.6.0)",
    "python-dotenv (>=1.1.1,<2.0.0)"
]

[project.optional-dependencies]
@@ -35,6 +35,10 @@ class AuracastGlobalConfig(BaseModel):
    presentation_delay_us: int = 40000
    # TODO: pydantic does not support bytes serialization - use .hex and np.fromhex()
    manufacturer_data: tuple[int, bytes] | tuple[None, None] = (None, None)
    # LE Audio: Broadcast Audio Immediate Rendering (metadata type 0x09)
    # When true, include a zero-length LTV with type 0x09 in the subgroup metadata
    # so receivers may render earlier than the presentation delay for lower latency.
    immediate_rendering: bool = False

    # "Audio input. "
    # "'device' -> use the host's default sound input device, "
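The zero-length LTV mentioned in the comment encodes as one length byte (which counts the type byte plus the value) followed by the type itself. A sketch of the wire format, independent of the bumble helpers used in the diff:

```python
def ltv_entry(type_byte: int, value: bytes = b"") -> bytes:
    # LE Audio metadata uses length-type-value (LTV) entries where the
    # length byte covers the type byte and the value bytes.
    return bytes([1 + len(value), type_byte]) + value

# Broadcast Audio Immediate Rendering flag: type 0x09, zero-length value
print(ltv_entry(0x09).hex())        # '0109'
# Language entry for comparison: type 0x04 here is illustrative only
print(ltv_entry(0x04, b"deu").hex())
```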
@@ -58,42 +62,52 @@ class AuracastBigConfig(BaseModel):
class AuracastBigConfigDeu(AuracastBigConfig):
    id: int = 12
    random_address: str = 'F1:F1:F2:F3:F4:F5'
    name: str = 'Broadcast0'
    name: str = 'Hörsaal A'
    language: str = 'deu'
    program_info: str = 'Announcements German'
    audio_source: str = 'file:./testdata/announcement_de.wav'
    program_info: str = 'Vorlesung DE'
    audio_source: str = 'file:./testdata/wave_particle_5min_de.wav'

class AuracastBigConfigEng(AuracastBigConfig):
    id: int = 123
    random_address: str = 'F2:F1:F2:F3:F4:F5'
    name: str = 'Broadcast1'
    name: str = 'Lecture Hall A'
    language: str = 'eng'
    program_info: str = 'Announcements English'
    audio_source: str = 'file:./testdata/announcement_en.wav'
    program_info: str = 'Lecture EN'
    audio_source: str = 'file:./testdata/wave_particle_5min_en.wav'

class AuracastBigConfigFra(AuracastBigConfig):
    id: int = 1234
    random_address: str = 'F3:F1:F2:F3:F4:F5'
    name: str = 'Broadcast2'
    # French
    name: str = 'Auditoire A'
    language: str = 'fra'
    program_info: str = 'Announcements French'
    audio_source: str = 'file:./testdata/announcement_fr.wav'
    program_info: str = 'Auditoire FR'
    audio_source: str = 'file:./testdata/wave_particle_5min_fr.wav'

class AuracastBigConfigSpa(AuracastBigConfig):
    id: int = 12345
    random_address: str = 'F4:F1:F2:F3:F4:F5'
    name: str = 'Broadcast3'
    name: str = 'Auditorio A'
    language: str = 'spa'
    program_info: str = 'Announcements Spanish'
    audio_source: str = 'file:./testdata/announcement_es.wav'
    program_info: str = 'Auditorio ES'
    audio_source: str = 'file:./testdata/wave_particle_5min_es.wav'

class AuracastBigConfigIta(AuracastBigConfig):
    id: int = 1234567
    random_address: str = 'F5:F1:F2:F3:F4:F5'
    name: str = 'Broadcast4'
    name: str = 'Aula A'
    language: str = 'ita'
    program_info: str = 'Announcements Italian'
    audio_source: str = 'file:./testdata/announcement_it.wav'
    program_info: str = 'Aula IT'
    audio_source: str = 'file:./testdata/wave_particle_5min_it.wav'


class AuracastBigConfigPol(AuracastBigConfig):
    id: int = 12345678
    random_address: str = 'F6:F1:F2:F3:F4:F5'
    name: str = 'Sala Wykładowa'
    language: str = 'pol'
    program_info: str = 'Sala Wykładowa PL'
    audio_source: str = 'file:./testdata/wave_particle_5min_pl.wav'


class AuracastConfigGroup(AuracastGlobalConfig):
@@ -44,13 +44,18 @@ from bumble.profiles import bass
import bumble.device
import bumble.transport
import bumble.utils
import numpy as np  # for audio down-mix
from bumble.device import Host, BIGInfoAdvertisement, AdvertisingChannelMap
from bumble.audio import io as audio_io

from auracast import auracast_config
from auracast.utils.read_lc3_file import read_lc3_file
from auracast.utils.network_audio_receiver import NetworkAudioReceiverUncoded
from auracast.utils.webrtc_audio_input import WebRTCAudioInput


# Instantiate WebRTC audio input for streaming (can be used per-BIG or globally)

# modified from bumble
class ModWaveAudioInput(audio_io.ThreadedAudioInput):
    """Audio input that reads PCM samples from a .wav file."""
@@ -148,6 +153,30 @@ async def init_broadcast(
    bap_sampling_freq = getattr(bap.SamplingFrequency, f"FREQ_{global_config.auracast_sampling_rate_hz}")
    bigs = {}
    for i, conf in enumerate(big_config):
        metadata = le_audio.Metadata(
            [
                le_audio.Metadata.Entry(
                    tag=le_audio.Metadata.Tag.LANGUAGE, data=conf.language.encode()
                ),
                le_audio.Metadata.Entry(
                    tag=le_audio.Metadata.Tag.PROGRAM_INFO, data=conf.program_info.encode()
                ),
                le_audio.Metadata.Entry(
                    tag=le_audio.Metadata.Tag.BROADCAST_NAME, data=conf.name.encode()
                ),
            ]
            + (
                [
                    # Broadcast Audio Immediate Rendering flag (type 0x09), zero-length value
                    le_audio.Metadata.Entry(tag=le_audio.Metadata.Tag.BROADCAST_AUDIO_IMMEDIATE_RENDERING_FLAG, data=b"")
                ]
                if global_config.immediate_rendering  # TODO: verify this
                else []
            )
        )
        logging.info(
            metadata.pretty_print("\n")
        )
        bigs[f'big{i}'] = {}
        # Config advertising set
        bigs[f'big{i}']['basic_audio_announcement'] = bap.BasicAudioAnnouncement(
@@ -160,19 +189,7 @@ async def init_broadcast(
                frame_duration=bap.FrameDuration.DURATION_10000_US,
                octets_per_codec_frame=global_config.octets_per_frame,
            ),
            metadata=le_audio.Metadata(
                [
                    le_audio.Metadata.Entry(
                        tag=le_audio.Metadata.Tag.LANGUAGE, data=conf.language.encode()
                    ),
                    le_audio.Metadata.Entry(
                        tag=le_audio.Metadata.Tag.PROGRAM_INFO, data=conf.program_info.encode()
                    ),
                    le_audio.Metadata.Entry(
                        tag=le_audio.Metadata.Tag.BROADCAST_NAME, data=conf.name.encode()
                    ),
                ]
            ),
            metadata=metadata,
            bis=[
                bap.BasicAudioAnnouncement.BIS(
                    index=1,
@@ -211,7 +228,7 @@ async def init_broadcast(
            primary_advertising_interval_max=200,
            advertising_sid=i,
            primary_advertising_phy=hci.Phy.LE_1M,  # 2M PHY config throws an error - primary advertising channels only support 1 Mbit
            secondary_advertising_phy=hci.Phy.LE_2M,  # this is the secondary advertising being sent on non-advertising channels (extended advertising)
            secondary_advertising_phy=hci.Phy.LE_1M,  # this is the secondary advertising being sent on non-advertising channels (extended advertising)
            #advertising_tx_power= # tx power in dBm (max 20)
            #secondary_advertising_max_skip=10,
        ),
@@ -272,7 +289,7 @@ async def init_broadcast(

    logging.debug(f'big{i} parameters are:')
    logging.debug('%s', pprint.pformat(vars(big)))
    logging.debug(f'Finished setup of big{i}.')
    logging.info(f'Finished setup of big{i}.')

    await asyncio.sleep(i+1)  # Wait for advertising to set up
@@ -322,26 +339,75 @@ class Streamer():
        else:
            logging.warning('Streamer is already running')

    def stop_streaming(self):
        """Stops the background task if running."""
        if self.is_streaming:
            self.is_streaming = False
            if self.task:
                self.task.cancel()  # Cancel the task safely
            self.task = None
    async def stop_streaming(self):
        """Gracefully stop streaming and release audio devices."""
        if not self.is_streaming and self.task is None:
            return

        # Ask the streaming loop to finish
        self.is_streaming = False
        if self.task is not None:
            self.task.cancel()

        self.task = None

        # Close audio inputs (await to ensure ALSA devices are released)
        close_tasks = []
        for big in self.bigs.values():
            ai = big.get("audio_input")
            if ai and hasattr(ai, "close"):
                close_tasks.append(ai.close())
            # Remove reference so a fresh one is created next time
            big.pop("audio_input", None)
        if close_tasks:
            await asyncio.gather(*close_tasks, return_exceptions=True)
    async def stream(self):

        bigs = self.bigs
        big_config = self.big_config
        global_config = self.global_config
        # init
        for i, big in enumerate(bigs.values()):
            audio_source = big_config[i].audio_source
            input_format = big_config[i].input_format

            # --- New: network_uncoded mode using NetworkAudioReceiver ---
            if isinstance(audio_source, NetworkAudioReceiverUncoded):
                # Start the UDP receiver coroutine so packets are actually received
                asyncio.create_task(audio_source.receive())
                encoder = lc3.Encoder(
                    frame_duration_us=global_config.frame_duration_us,
                    sample_rate_hz=global_config.auracast_sampling_rate_hz,
                    num_channels=1,
                    input_sample_rate_hz=audio_source.samplerate,
                )
                lc3_frame_samples = encoder.get_frame_samples()
                big['pcm_bit_depth'] = 16
                big['lc3_frame_samples'] = lc3_frame_samples
                big['lc3_bytes_per_frame'] = global_config.octets_per_frame
                big['audio_input'] = audio_source
                big['encoder'] = encoder
                big['precoded'] = False

            elif audio_source == 'webrtc':
                big['audio_input'] = WebRTCAudioInput()
                encoder = lc3.Encoder(
                    frame_duration_us=global_config.frame_duration_us,
                    sample_rate_hz=global_config.auracast_sampling_rate_hz,
                    num_channels=1,
                    input_sample_rate_hz=48000,  # TODO: get samplerate from webrtc
                )
                lc3_frame_samples = encoder.get_frame_samples()
                big['pcm_bit_depth'] = 16
                big['lc3_frame_samples'] = lc3_frame_samples
                big['lc3_bytes_per_frame'] = global_config.octets_per_frame
                big['encoder'] = encoder
                big['precoded'] = False

            # precoded lc3 from ram
            if isinstance(big_config[i].audio_source, bytes):
            elif isinstance(big_config[i].audio_source, bytes):
                big['precoded'] = True
                big['lc3_bytes_per_frame'] = global_config.octets_per_frame

                lc3_frames = iter(big_config[i].audio_source)
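For reference, the number of PCM samples per LC3 frame follows directly from the input sample rate and the frame duration; this is an illustrative re-derivation of what `encoder.get_frame_samples()` returns in the hunks above, not the liblc3 API itself:

```python
def lc3_frame_samples(input_sample_rate_hz: int, frame_duration_us: int) -> int:
    # One LC3 frame covers frame_duration_us of audio, so the PCM frame
    # length is rate * duration.
    return input_sample_rate_hz * frame_duration_us // 1_000_000

print(lc3_frame_samples(16000, 10000))  # 160
print(lc3_frame_samples(48000, 10000))  # 480
```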
@@ -352,6 +418,7 @@ class Streamer():
            # precoded lc3 file
            elif big_config[i].audio_source.endswith('.lc3'):
                big['precoded'] = True
                big['lc3_bytes_per_frame'] = global_config.octets_per_frame
                filename = big_config[i].audio_source.replace('file:', '')

                lc3_bytes = read_lc3_file(filename)
@@ -363,21 +430,23 @@ class Streamer():

            # use wav files and code them entirely before streaming
            elif big_config[i].precode_wav and big_config[i].audio_source.endswith('.wav'):
                logging.info('Precoding wav file: %s, this may take a while', big_config[i].audio_source)
                big['precoded'] = True
                big['lc3_bytes_per_frame'] = global_config.octets_per_frame

                audio_input = await audio_io.create_audio_input(audio_source, input_format)
                audio_input.rewind = False
                pcm_format = await audio_input.open()

                if pcm_format.channels != 1:
                    print("Only 1 channels PCM configurations are supported")
                    logging.error("Only 1 channels PCM configurations are supported")
                    return
                if pcm_format.sample_type == audio_io.PcmFormat.SampleType.INT16:
                    pcm_bit_depth = 16
                elif pcm_format.sample_type == audio_io.PcmFormat.SampleType.FLOAT32:
                    pcm_bit_depth = None
                else:
                    print("Only INT16 and FLOAT32 sample types are supported")
                    logging.error("Only INT16 and FLOAT32 sample types are supported")
                    return
                encoder = lc3.Encoder(
                    frame_duration_us=global_config.frame_duration_us,
@@ -402,40 +471,92 @@ class Streamer():
            # anything else, e.g. realtime stream from device (bumble)
            else:
                audio_input = await audio_io.create_audio_input(audio_source, input_format)
                audio_input.rewind = big_config[i].loop
                pcm_format = await audio_input.open()
                # Store early so stop_streaming can close even if open() fails
                big['audio_input'] = audio_input
                # SoundDeviceAudioInput (used for `mic:<device>` captures) has no `.rewind`.
                if hasattr(audio_input, "rewind"):
                    audio_input.rewind = big_config[i].loop

                #try:
                if pcm_format.channels != 1:
                    print("Only 1 channels PCM configurations are supported")
                # Retry logic - ALSA sometimes keeps the device busy for a short time after the
                # previous stream has closed. Handle PortAudioError -9985 with back-off retries.
                import sounddevice as _sd
                max_attempts = 3
                for attempt in range(1, max_attempts + 1):
                    try:
                        pcm_format = await audio_input.open()
                        break  # success
                    except _sd.PortAudioError as err:
                        # -9985 == paDeviceUnavailable
                        logging.error('Could not open audio device %s with error %s', audio_source, err)
                        code = None
                        if hasattr(err, 'errno'):
                            code = err.errno
                        elif len(err.args) > 1 and isinstance(err.args[1], int):
                            code = err.args[1]
                        if code == -9985 and attempt < max_attempts:
                            backoff_ms = 200 * attempt
                            logging.warning("PortAudio device busy (attempt %d/%d). Retrying in %.1f ms…", attempt, max_attempts, backoff_ms)
                            # ensure device handle and PortAudio context are closed before retrying
                            try:
                                if hasattr(audio_input, "aclose"):
                                    await audio_input.aclose()
                                elif hasattr(audio_input, "close"):
                                    audio_input.close()
                            except Exception:
                                pass
                            # Fully terminate PortAudio to drop lingering handles (sounddevice quirk)
                            if hasattr(_sd, "_terminate"):
                                try:
                                    _sd._terminate()
                                except Exception:
                                    pass
                            # Small pause then re-initialize PortAudio
                            await asyncio.sleep(0.1)
                            if hasattr(_sd, "_initialize"):
                                try:
                                    _sd._initialize()
                                except Exception:
                                    pass

                            # Back-off before next attempt
                            await asyncio.sleep(backoff_ms / 1000)
                            # Recreate audio_input fresh for next attempt
                            audio_input = await audio_io.create_audio_input(audio_source, input_format)
                            continue
                        # Other errors or final attempt - re-raise so caller can abort gracefully
                        raise
                else:
                    # Loop exhausted without break
                    logging.error("Unable to open audio device after %d attempts - giving up", max_attempts)
                    return

                if pcm_format.channels != 1:
                    logging.info("Input device provides %d channels - will down-mix to mono for LC3", pcm_format.channels)
                if pcm_format.sample_type == audio_io.PcmFormat.SampleType.INT16:
                    pcm_bit_depth = 16
                elif pcm_format.sample_type == audio_io.PcmFormat.SampleType.FLOAT32:
                    pcm_bit_depth = None
                else:
                    print("Only INT16 and FLOAT32 sample types are supported")
                    logging.error("Only INT16 and FLOAT32 sample types are supported")
                    return

                encoder = lc3.Encoder(
                    frame_duration_us=global_config.frame_duration_us,
                    sample_rate_hz=global_config.auracast_sampling_rate_hz,
                    num_channels=1,
                    input_sample_rate_hz=pcm_format.sample_rate,
                )

                lc3_frame_samples = encoder.get_frame_samples()  # number of the pcm samples per lc3 frame

                big['pcm_bit_depth'] = pcm_bit_depth
                big['channels'] = pcm_format.channels
                big['lc3_frame_samples'] = lc3_frame_samples
                big['lc3_bytes_per_frame'] = global_config.octets_per_frame
                big['audio_input'] = audio_input
                big['encoder'] = encoder
                big['precoded'] = False

            # Needed for coded and uncoded audio
            lc3_frame_size = global_config.octets_per_frame  # encoder.get_frame_bytes(bitrate)
            lc3_bytes_per_frame = lc3_frame_size  # * 2  multiplied by number of channels
            big['lc3_bytes_per_frame'] = lc3_bytes_per_frame

        # TODO: Maybe do some pre buffering so the stream is stable from the beginning. One half iso queue would be appropriate
        logging.info("Streaming audio...")
        bigs = self.bigs
        self.is_streaming = True
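The back-off pattern in the hunk above can be isolated into a reusable helper. This is a sketch under assumptions (the `open_with_retry`/`is_busy` names are illustrative, and the -9985 code is taken from the diff's comments):

```python
import asyncio

async def open_with_retry(open_fn, is_busy, max_attempts=3, base_delay=0.2):
    # Retries open_fn() with linear back-off while is_busy(err) says the
    # failure is transient (e.g. PortAudio -9985, device busy).
    for attempt in range(1, max_attempts + 1):
        try:
            return await open_fn()
        except Exception as err:
            if attempt == max_attempts or not is_busy(err):
                raise
            await asyncio.sleep(base_delay * attempt)

# Example with a source that fails twice before succeeding:
calls = {"n": 0}

async def flaky_open():
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError(-9985, "device busy")
    return "pcm_format"

result = asyncio.run(open_with_retry(flaky_open, lambda e: True))
print(result, calls["n"])  # pcm_format 3
```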
@@ -443,7 +564,6 @@ class Streamer():
        while self.is_streaming:
            stream_finished = [False for _ in range(len(bigs))]
            for i, big in enumerate(bigs.values()):

                if big['precoded']:  # everything was already lc3 coded beforehand
                    lc3_frame = bytes(
                        itertools.islice(big['lc3_frames'], big['lc3_bytes_per_frame'])
@@ -452,13 +572,26 @@ class Streamer():
                    if lc3_frame == b'':  # Not all streams may stop at the same time
                        stream_finished[i] = True
                        continue
                else:
                else:  # code lc3 on the fly
                    pcm_frame = await anext(big['audio_input'].frames(big['lc3_frame_samples']), None)

                    if pcm_frame is None:  # Not all streams may stop at the same time
                        stream_finished[i] = True
                        continue

                    # Down-mix multi-channel PCM to mono for LC3 encoder if needed
                    if big.get('channels', 1) > 1:
                        if isinstance(pcm_frame, np.ndarray):
                            if pcm_frame.ndim > 1:
                                mono = pcm_frame.mean(axis=1).astype(pcm_frame.dtype)
                                pcm_frame = mono
                        else:
                            # Convert raw bytes to numpy, average channels, convert back
                            dtype = np.int16 if big['pcm_bit_depth'] == 16 else np.float32
                            samples = np.frombuffer(pcm_frame, dtype=dtype)
                            samples = samples.reshape(-1, big['channels']).mean(axis=1)
                            pcm_frame = samples.astype(dtype).tobytes()

                    lc3_frame = big['encoder'].encode(
                        pcm_frame, num_bytes=big['lc3_bytes_per_frame'], bit_depth=big['pcm_bit_depth']
                    )
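The byte-level down-mix branch above can be checked in isolation. A self-contained sketch of the same reshape-and-average step for interleaved int16 PCM:

```python
import numpy as np

def downmix_int16(pcm_bytes: bytes, channels: int) -> bytes:
    # Interleaved int16 PCM -> mono by averaging the channels of each
    # frame, mirroring the streaming-loop branch above.
    samples = np.frombuffer(pcm_bytes, dtype=np.int16)
    mono = samples.reshape(-1, channels).mean(axis=1)
    return mono.astype(np.int16).tobytes()

# Two stereo frames: (L=100, R=200) and (L=-50, R=50)
stereo = np.array([100, 200, -50, 50], dtype=np.int16).tobytes()
mono = np.frombuffer(downmix_int16(stereo, 2), dtype=np.int16)
print(mono.tolist())  # [150, 0]
```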
@@ -511,13 +644,12 @@ async def broadcast(global_conf: auracast_config.AuracastGlobalConfig, big_conf:
if __name__ == "__main__":
    import os

    logging.basicConfig(  # export LOG_LEVEL=INFO
        level=os.environ.get('LOG_LEVEL', logging.DEBUG),
    logging.basicConfig(  # export LOG_LEVEL=DEBUG
        level=os.environ.get('LOG_LEVEL', logging.INFO),
        format='%(module)s.py:%(lineno)d %(levelname)s: %(message)s'
    )
    os.chdir(os.path.dirname(__file__))

    config = auracast_config.AuracastConfigGroup(
        bigs = [
            auracast_config.AuracastBigConfigDeu(),
@@ -537,15 +669,19 @@ if __name__ == "__main__":
    #config.transport='serial:/dev/serial/by-id/usb-ZEPHYR_Zephyr_HCI_UART_sample_95A087EADB030B24-if00,115200,rtscts' #nrf52dongle hci_uart usb cdc
    #config.transport='usb:2fe3:000b' #nrf52dongle hci_usb # TODO: iso packet over usb not supported
    #config.transport= 'auto'
    config.transport='serial:/dev/ttyAMA2,1000000,rtscts' # transport for raspberry pi
    config.transport='serial:/dev/ttyAMA4,1000000,rtscts' # transport for raspberry pi

    # TODO: encrypted streams are not working

    for big in config.bigs:  # TODO: encrypted streams are not working
    for big in config.bigs:
        #big.code = 'ff'*16 # returns hci/HCI_ENCRYPTION_MODE_NOT_ACCEPTABLE_ERROR
        #big.code = '78 e5 dc f1 34 ab 42 bf c1 92 ef dd 3a fd 67 ae'
        big.precode_wav = True
        big.audio_source = big.audio_source.replace('.wav', '_10_16_32.lc3') #lc3 precoded files
        big.audio_source = read_lc3_file(big.audio_source) # load files in advance
        big.precode_wav = False
        #big.audio_source = big.audio_source.replace('.wav', '_10_16_32.lc3') #lc3 precoded files
        #big.audio_source = read_lc3_file(big.audio_source) # load files in advance

        # --- Network_uncoded mode using NetworkAudioReceiver ---
        #big.audio_source = NetworkAudioReceiverUncoded(port=50007, samplerate=16000, channels=1, chunk_size=1024)

    # 16kHz works reliably with 3 streams
    # 24kHz is only working with 2 streams - probably airtime constraint
@@ -52,13 +52,19 @@ class Multicaster:
        self.device = device
        self.is_auracast_init = True

    def start_streaming(self):
    async def start_streaming(self):
        """Start streaming; if an old stream is running, stop it first to release audio devices."""
        if self.streamer is not None:
            await self.stop_streaming()
            # Brief pause to ensure ALSA/PortAudio fully releases the input device
            await asyncio.sleep(0.5)
        self.streamer = multicast.Streamer(self.bigs, self.global_conf, self.big_conf)
        self.streamer.start_streaming()

    def stop_streaming(self):

    async def stop_streaming(self):
        if self.streamer is not None:
            self.streamer.stop_streaming()
            await self.streamer.stop_streaming()
            self.streamer = None

    async def reset(self):
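The stop-then-restart pattern above depends on awaiting the old task's shutdown before starting a new one. The same structure in a toy, dependency-free form (class and method names are illustrative, not from the diff):

```python
import asyncio

class Toy:
    def __init__(self):
        self.task = None

    async def start(self):
        # Restart safely: stop (and await) any old task before starting anew,
        # mirroring Multicaster.start_streaming above.
        if self.task is not None:
            await self.stop()
        self.task = asyncio.create_task(asyncio.sleep(3600))

    async def stop(self):
        if self.task is not None:
            self.task.cancel()
            try:
                await self.task
            except asyncio.CancelledError:
                pass
            self.task = None

async def main():
    t = Toy()
    await t.start()
    await t.start()  # implicitly stops the first task
    await t.stop()
    return t.task

result = asyncio.run(main())
print(result)  # None
```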
@@ -66,18 +72,28 @@ class Multicaster:
        self.__init__(self.global_conf, self.big_conf)

    async def shutdown(self):
        # Ensure streaming is fully stopped before tearing down Bluetooth resources
        if self.streamer is not None:
            await self.stop_streaming()

        self.is_auracast_init = False
        self. is_audio_init = False
        self.is_audio_init = False

        if self.bigs:
            for big in self.bigs.values():
                if big.get('audio_input'):
                    if hasattr(big['audio_input'], 'aclose'):
                        await big['audio_input'].aclose()

        if self.device:
            await self.device.stop_advertising()
        if self.bigs:
            for big in self.bigs.values():
                if big['advertising_set']:
                if big.get('advertising_set'):
                    await big['advertising_set'].stop()
        await self.device_acm.__aexit__(None, None, None)  # Manually triggering teardown



# example commandline ui
async def command_line_ui(caster: Multicaster):
    while True:
155
src/auracast/multicast_script.py
Normal file
@@ -0,0 +1,155 @@
"""
|
||||
multicast_script
|
||||
=================
|
||||
|
||||
Loads environment variables from a .env file located next to this script
|
||||
and configures the multicast broadcast. Only UPPERCASE keys are read.
|
||||
|
||||
Environment variables
|
||||
---------------------
|
||||
- LOG_LEVEL: Logging level for the script.
|
||||
Default: INFO. Examples: DEBUG, INFO, WARNING, ERROR.
|
||||
|
||||
- INPUT: Select audio capture source.
|
||||
Values:
|
||||
- "usb" (default): first available USB input device.
|
||||
- "aes67": select AES67 inputs. Two forms:
|
||||
* INPUT=aes67 -> first available AES67 input.
|
||||
* INPUT=aes67,<substr> -> case-insensitive substring match against
|
||||
the device name, e.g. INPUT=aes67,8f6326.
|
||||
|
||||
- BROADCAST_NAME: Name of the broadcast (Auracast BIG name).
|
||||
Default: "Broadcast0".
|
||||
|
||||
- PROGRAM_INFO: Free-text program/broadcast info.
|
||||
Default: "Some Announcements".
|
||||
|
||||
- LANGUATE: ISO 639-3 language code used by config (intentional key name).
|
||||
Default: "deu".
|
||||
|
||||
- PULSE_LATENCY_MSEC: Pulse/PipeWire latency hint in milliseconds.
|
||||
Default: 3.
|
||||
|
||||
Examples (.env)
|
||||
---------------
|
||||
LOG_LEVEL=DEBUG
|
||||
INPUT=aes67,8f6326
|
||||
BROADCAST_NAME=MyBroadcast
|
||||
PROGRAM_INFO="Live announcements"
|
||||
LANGUATE=deu
|
||||
"""
|
||||
import logging
|
||||
import os
|
||||
import time
|
||||
from dotenv import load_dotenv
|
||||
from auracast import multicast
|
||||
from auracast import auracast_config
|
||||
from auracast.utils.sounddevice_utils import list_usb_pw_inputs, list_network_pw_inputs
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
|
||||
logging.basicConfig( #export LOG_LEVEL=DEBUG
|
||||
level=os.environ.get('LOG_LEVEL', logging.INFO),
|
||||
format='%(module)s.py:%(lineno)d %(levelname)s: %(message)s'
|
||||
)
|
||||
os.chdir(os.path.dirname(__file__))
|
||||
# Load .env located next to this script (only uppercase keys will be referenced)
|
||||
load_dotenv(dotenv_path='.env')
|
||||
|
||||
os.environ.setdefault("PULSE_LATENCY_MSEC", "3")
|
||||
|
||||
usb_inputs = list_usb_pw_inputs()
|
||||
logging.info("USB pw inputs:")
|
||||
for i, d in usb_inputs:
|
||||
logging.info(f"{i}: {d['name']} in={d['max_input_channels']}")
|
||||
|
||||
aes67_inputs = list_network_pw_inputs()
|
||||
logging.info("AES67 pw inputs:")
|
||||
for i, d in aes67_inputs:
|
||||
logging.info(f"{i}: {d['name']} in={d['max_input_channels']}")
|
||||
|
||||
# Input selection (usb | aes67). Default to usb.
|
||||
# Allows specifying an AES67 device by substring: INPUT=aes67,<substring>
|
||||
# Example: INPUT=aes67,8f6326 will match a device name containing "8f6326".
|
||||
input_env = os.environ.get('INPUT', 'usb') or 'usb'
|
||||
parts = [p.strip() for p in input_env.split(',', 1)]
|
||||
input_mode = (parts[0] or 'usb').lower()
|
||||
iface_substr = (parts[1].lower() if len(parts) > 1 and parts[1] else None)
|
||||
|
||||
selected_dev = None
|
||||
if input_mode == 'aes67':
|
||||
if not aes67_inputs and not iface_substr:
|
||||
# No AES67 inputs and no specific target -> fail fast
|
||||
raise RuntimeError("No AES67 audio inputs found.")
|
||||
if iface_substr:
|
||||
# Loop until a matching AES67 input becomes available
|
||||
while True:
|
||||
current = list_network_pw_inputs()
|
||||
sel = next(((i, d) for i, d in current if iface_substr in (d.get('name','').lower())), None)
|
||||
if sel:
|
||||
input_sel = sel[0]
|
||||
selected_dev = sel[1]
|
||||
logging.info(f"Selected AES67 input by match '{iface_substr}': index={input_sel}")
|
||||
break
|
||||
logging.info(f"Waiting for AES67 input matching '{iface_substr}'... retrying in 2s")
|
||||
time.sleep(2)
|
||||
else:
|
||||
input_sel, selected_dev = aes67_inputs[0]
|
||||
logging.info(f"Selected first AES67 input: index={input_sel}, device={selected_dev['name']}")
|
||||
else:
|
||||
if usb_inputs:
|
||||
input_sel, selected_dev = usb_inputs[0]
|
||||
logging.info(f"Selected first USB input: index={input_sel}, device={selected_dev['name']}")
|
||||
else:
|
||||
raise RuntimeError("No USB audio inputs found.")
|
||||
|
||||
TRANSPORT1 = 'serial:/dev/ttyAMA3,1000000,rtscts' # transport for raspberry pi gpio header
|
||||
TRANSPORT2 = 'serial:/dev/ttyAMA4,1000000,rtscts' # transport for raspberry pi gpio header
|
||||
# Capture at 48 kHz to avoid PipeWire resampler latency; encode LC3 at 24 kHz
|
||||
CAPTURE_SRATE = 48000
|
||||
LC3_SRATE = 24000
|
||||
OCTETS_PER_FRAME=60
|
||||
|
||||
# Read uppercase-only settings from environment/.env
|
||||
broadcast_name = os.environ.get('BROADCAST_NAME', 'Broadcast0')
|
||||
program_info = os.environ.get('PROGRAM_INFO', 'Some Announcements')
|
||||
# Note: 'LANGUATE' (typo) is intentionally used as requested, maps to config.language
|
||||
language = os.environ.get('LANGUATE', 'deu')
|
||||
|
||||
# Determine capture channel count based on selected device (prefer up to 2)
|
||||
try:
|
||||
max_in = int((selected_dev or {}).get('max_input_channels', 1))
|
||||
except Exception:
|
||||
max_in = 1
|
||||
channels = max(1, min(2, max_in))
|
||||
|
||||
config = auracast_config.AuracastConfigGroup(
|
||||
bigs = [
|
||||
auracast_config.AuracastBigConfig(
|
||||
name=broadcast_name,
|
||||
program_info=program_info,
|
||||
language=language,
|
||||
iso_que_len=1,
|
||||
audio_source=f'device:{input_sel}',
|
||||
input_format=f"int16le,{CAPTURE_SRATE},{channels}",
|
||||
sampling_frequency=LC3_SRATE,
|
||||
octets_per_frame=OCTETS_PER_FRAME,
|
||||
),
|
||||
#auracast_config.AuracastBigConfigEng(),
|
||||
],
|
||||
immediate_rendering=True,
|
||||
presentation_delay_us=40000,
|
||||
qos_config=auracast_config.AuracastQosHigh(),
|
||||
auracast_sampling_rate_hz = LC3_SRATE,
|
||||
octets_per_frame = OCTETS_PER_FRAME, # 32kbps@16kHz
|
||||
transport=TRANSPORT1
|
||||
)
|
||||
#config.debug = True
|
||||
|
||||
multicast.run_async(
|
||||
multicast.broadcast(
|
||||
config,
|
||||
config.bigs
|
||||
)
|
||||
)
|
||||
@@ -1,104 +0,0 @@
import glob
import logging as log
from fastapi import FastAPI, HTTPException
from auracast import multicast_control, auracast_config

app = FastAPI()

# Initialize global configuration
global_config_group = auracast_config.AuracastConfigGroup()

# Create multicast controller
multicaster: multicast_control.Multicaster | None = None


@app.post("/init")
async def initialize(conf: auracast_config.AuracastConfigGroup):
    """Initializes the broadcasters."""
    global global_config_group
    global multicaster
    try:
        if conf.transport == 'auto':
            serial_devices = glob.glob('/dev/serial/by-id/*')
            log.info('Found serial devices: %s', serial_devices)
            for device in serial_devices:
                if 'usb-ZEPHYR_Zephyr_HCI_UART_sample' in device:
                    log.info('Using: %s', device)
                    conf.transport = f'serial:{device},115200,rtscts'
                    break

        # check again if transport is still auto
        if conf.transport == 'auto':
            HTTPException(status_code=500, detail='No suitable transport found.')

        # initialize the streams dict
        global_config_group = conf
        log.info(
            'Initializing multicaster with config:\n %s', conf.model_dump_json(indent=2)
        )
        multicaster = multicast_control.Multicaster(
            conf,
            conf.bigs,
        )
        await multicaster.init_broadcast()
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))


@app.post("/stream_lc3")
async def send_audio(audio_data: dict[str, str]):
    """Streams pre-coded LC3 audio."""
    if multicaster is None:
        raise HTTPException(status_code=500, detail='Auracast endpoint was never intialized')
    try:
        for big in global_config_group.bigs:
            assert big.language in audio_data, HTTPException(status_code=500, detail='language len missmatch')
            log.info('Received a send audio request for %s', big.language)
            big.audio_source = audio_data[big.language].encode('latin-1')  # TODO: use base64 encoding

        multicaster.big_conf = global_config_group.bigs
        multicaster.start_streaming()
        return {"status": "audio_sent"}
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))


@app.post("/shutdown")
async def shutdown():
    """Stops broadcasting."""
    try:
        await multicaster.reset()
        return {"status": "stopped"}
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))


@app.post("/stop_audio")
async def stop_audio():
    """Stops streaming."""
    try:
        multicaster.stop_streaming()
        return {"status": "stopped"}
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))


@app.get("/status")
async def get_status():
    """Gets the current status of the multicaster."""
    if multicaster:
        return multicaster.get_status()
    else:
        return {
            'is_initialized': False,
            'is_streaming': False,
        }


if __name__ == '__main__':
    import uvicorn
    log.basicConfig(
        level=log.INFO,
        format='%(module)s.py:%(lineno)d %(levelname)s: %(message)s'
    )
    uvicorn.run(app, host="0.0.0.0", port=5000)
30
src/auracast/server/certs/ca/ca_cert.crt
Normal file
@@ -0,0 +1,30 @@
-----BEGIN CERTIFICATE-----
MIIFDzCCAvegAwIBAgIUJkOMN61fArjxyeFLR1u2PnoUCogwDQYJKoZIhvcNAQEL
BQAwFzEVMBMGA1UEAwwMU3VtbWl0V2F2ZUNBMB4XDTI1MDYyOTE0MzMyOVoXDTQ1
MDYyNDE0MzMyOVowFzEVMBMGA1UEAwwMU3VtbWl0V2F2ZUNBMIICIjANBgkqhkiG
9w0BAQEFAAOCAg8AMIICCgKCAgEAnU1k3Yasc2mFCHNdBzf76Y7NTUIy50577fJL
kVVjwxUsUzimj6hkeTd16o5EsmTpW19/N8o/JJ1j3ne37EB8vm0q9H7yyN0fx+Gy
uJujmqu5ZG9+ow+kbqpxJUbRMmDkZqsF6/XHfNMUQLK5vVH219xmW/hgxdEB4o50
jF25+jhUuolVYybhLT9AGtXhpqExmCn/o78I97+GtYNdkY8cwCt/khftM4DRDeEA
NdyVUWHG2sWqgx0BpgyL9gH/YwfeqBrjFmhh1VbgPCdgypwRV6YHVUqPtmSL7H7q
CmX8/ccyS6Cif9z/rsb1KwSeOgNKqV3D5DN3Qrboy9NmbWKXmhnF3Pl0EQ5f2/WS
xN+NKo8LNyZErQ27jZ6Xn9rVBRQ4rTw5oVf5hi6bOZcW2GNIQhQomQy83ohwFDnW
6aLsBag4/lGJFS+QpRAwIvFY4R559Ki3xndUQpvbt0KHIUNTlWddACm1tkcgXEGF
GJRZMBcKlyNdM5cRjhMtuZljoY2nHdfouiy4SETHgFFVvIZ2uOZLikljkL7cnWqF
0DZh9MxIZqZEoffSDRCRdlhmPITwuacGTFBNAmiGqg463rNmzcyc5JOoPUQrcSy1
0F5Ig16tiGjpgNtqyBen0r0udEoU1bBF/kxhAQCbam/IqpTtR+ouRnnbE4ST2zV6
IXc4mPcCAwEAAaNTMFEwHQYDVR0OBBYEFOsaKvMh7Lr+/O620X2uzHxlnvmzMB8G
A1UdIwQYMBaAFOsaKvMh7Lr+/O620X2uzHxlnvmzMA8GA1UdEwEB/wQFMAMBAf8w
DQYJKoZIhvcNAQELBQADggIBAAxq7hy9xgDCxwZOQX8Z5iBIK3agXUads7m91TcH
/EzFfJsUDOpDsSi58wIXuCmiJ+qe2S+hJxghLNsqm01DosPuNLNI0gCDg+glx5z5
ADtY0EJb7mRH+xuFC1GBdP7ve3REvfi7WC9snrqBUji/xL4VycaOyTDGOxWaHlyZ
u876I6/+xkj5hkhM1bsbEcGZ81QnTaJyeVtHTRYaORPAb2FP2V65MTn18Pu08i4T
bzh0KAsoDkwKvoEK24T5xFEUCuLexQ+6fabYXGro3It9VmAbrtkSyX8Z1eO7rVCu
hsUrA6UDzTerX1pWafeftpKiH7YiOaYYOAVcqDn+WKwYq3MPafNJp8x8HV1eeWYD
dx9HBKuvlOsoxnjJMnYusmQZyJk1EJR03najrV7HH8cyU2gfNyBwfsr6nU+FnDOX
qL2P0nWDjBkfjQRvmG59YLDVZYhw30+lishpmMLGZGwRFCjMCHD7rAdQTB3dtCP6
NqaGogwitIdIITBtyV1ZABoE3vQuUAKZChU+DsSKniyFitKDQrXP+rwcX5Y5/pS1
S1s6ITgllbErKqAoeelEVkJyiWykEtrtdcD0DXTr/QY4GzXeMi9u+dMXUOt95Md2
lQVAaFIX8QxbmHXen6GsXeHhPpPw8sXtC6rh7aqSCqqB6EDS77mjrGHXbSeBS5aq
MklC
-----END CERTIFICATE-----
BIN
src/auracast/server/certs/ca/ca_cert.der
Normal file
Binary file not shown.
30
src/auracast/server/certs/ca/ca_cert.pem
Normal file
@@ -0,0 +1,30 @@
-----BEGIN CERTIFICATE-----
MIIFDzCCAvegAwIBAgIUJkOMN61fArjxyeFLR1u2PnoUCogwDQYJKoZIhvcNAQEL
BQAwFzEVMBMGA1UEAwwMU3VtbWl0V2F2ZUNBMB4XDTI1MDYyOTE0MzMyOVoXDTQ1
MDYyNDE0MzMyOVowFzEVMBMGA1UEAwwMU3VtbWl0V2F2ZUNBMIICIjANBgkqhkiG
9w0BAQEFAAOCAg8AMIICCgKCAgEAnU1k3Yasc2mFCHNdBzf76Y7NTUIy50577fJL
kVVjwxUsUzimj6hkeTd16o5EsmTpW19/N8o/JJ1j3ne37EB8vm0q9H7yyN0fx+Gy
uJujmqu5ZG9+ow+kbqpxJUbRMmDkZqsF6/XHfNMUQLK5vVH219xmW/hgxdEB4o50
jF25+jhUuolVYybhLT9AGtXhpqExmCn/o78I97+GtYNdkY8cwCt/khftM4DRDeEA
NdyVUWHG2sWqgx0BpgyL9gH/YwfeqBrjFmhh1VbgPCdgypwRV6YHVUqPtmSL7H7q
CmX8/ccyS6Cif9z/rsb1KwSeOgNKqV3D5DN3Qrboy9NmbWKXmhnF3Pl0EQ5f2/WS
xN+NKo8LNyZErQ27jZ6Xn9rVBRQ4rTw5oVf5hi6bOZcW2GNIQhQomQy83ohwFDnW
6aLsBag4/lGJFS+QpRAwIvFY4R559Ki3xndUQpvbt0KHIUNTlWddACm1tkcgXEGF
GJRZMBcKlyNdM5cRjhMtuZljoY2nHdfouiy4SETHgFFVvIZ2uOZLikljkL7cnWqF
0DZh9MxIZqZEoffSDRCRdlhmPITwuacGTFBNAmiGqg463rNmzcyc5JOoPUQrcSy1
0F5Ig16tiGjpgNtqyBen0r0udEoU1bBF/kxhAQCbam/IqpTtR+ouRnnbE4ST2zV6
IXc4mPcCAwEAAaNTMFEwHQYDVR0OBBYEFOsaKvMh7Lr+/O620X2uzHxlnvmzMB8G
A1UdIwQYMBaAFOsaKvMh7Lr+/O620X2uzHxlnvmzMA8GA1UdEwEB/wQFMAMBAf8w
DQYJKoZIhvcNAQELBQADggIBAAxq7hy9xgDCxwZOQX8Z5iBIK3agXUads7m91TcH
/EzFfJsUDOpDsSi58wIXuCmiJ+qe2S+hJxghLNsqm01DosPuNLNI0gCDg+glx5z5
ADtY0EJb7mRH+xuFC1GBdP7ve3REvfi7WC9snrqBUji/xL4VycaOyTDGOxWaHlyZ
u876I6/+xkj5hkhM1bsbEcGZ81QnTaJyeVtHTRYaORPAb2FP2V65MTn18Pu08i4T
bzh0KAsoDkwKvoEK24T5xFEUCuLexQ+6fabYXGro3It9VmAbrtkSyX8Z1eO7rVCu
hsUrA6UDzTerX1pWafeftpKiH7YiOaYYOAVcqDn+WKwYq3MPafNJp8x8HV1eeWYD
dx9HBKuvlOsoxnjJMnYusmQZyJk1EJR03najrV7HH8cyU2gfNyBwfsr6nU+FnDOX
qL2P0nWDjBkfjQRvmG59YLDVZYhw30+lishpmMLGZGwRFCjMCHD7rAdQTB3dtCP6
NqaGogwitIdIITBtyV1ZABoE3vQuUAKZChU+DsSKniyFitKDQrXP+rwcX5Y5/pS1
S1s6ITgllbErKqAoeelEVkJyiWykEtrtdcD0DXTr/QY4GzXeMi9u+dMXUOt95Md2
lQVAaFIX8QxbmHXen6GsXeHhPpPw8sXtC6rh7aqSCqqB6EDS77mjrGHXbSeBS5aq
MklC
-----END CERTIFICATE-----
1
src/auracast/server/certs/ca/ca_cert.srl
Normal file
@@ -0,0 +1 @@
5078804E6FBCF893D5537715FD928E46AD576ECA
52
src/auracast/server/certs/ca/ca_key.pem
Normal file
@@ -0,0 +1,52 @@
-----BEGIN PRIVATE KEY-----
MIIJQQIBADANBgkqhkiG9w0BAQEFAASCCSswggknAgEAAoICAQCdTWTdhqxzaYUI
c10HN/vpjs1NQjLnTnvt8kuRVWPDFSxTOKaPqGR5N3XqjkSyZOlbX383yj8knWPe
d7fsQHy+bSr0fvLI3R/H4bK4m6Oaq7lkb36jD6RuqnElRtEyYORmqwXr9cd80xRA
srm9UfbX3GZb+GDF0QHijnSMXbn6OFS6iVVjJuEtP0Aa1eGmoTGYKf+jvwj3v4a1
g12RjxzAK3+SF+0zgNEN4QA13JVRYcbaxaqDHQGmDIv2Af9jB96oGuMWaGHVVuA8
J2DKnBFXpgdVSo+2ZIvsfuoKZfz9xzJLoKJ/3P+uxvUrBJ46A0qpXcPkM3dCtujL
02ZtYpeaGcXc+XQRDl/b9ZLE340qjws3JkStDbuNnpef2tUFFDitPDmhV/mGLps5
lxbYY0hCFCiZDLzeiHAUOdbpouwFqDj+UYkVL5ClEDAi8VjhHnn0qLfGd1RCm9u3
QochQ1OVZ10AKbW2RyBcQYUYlFkwFwqXI10zlxGOEy25mWOhjacd1+i6LLhIRMeA
UVW8hna45kuKSWOQvtydaoXQNmH0zEhmpkSh99INEJF2WGY8hPC5pwZMUE0CaIaq
Djres2bNzJzkk6g9RCtxLLXQXkiDXq2IaOmA22rIF6fSvS50ShTVsEX+TGEBAJtq
b8iqlO1H6i5GedsThJPbNXohdziY9wIDAQABAoICAER+VSuyfve4HCGsXfcNNQcj
U5jO+OxH++WFqcrsJAbnesf39Gq8N5eigxkxfo8xKn1LbVElIu52C+zsMy1PfSHL
1jbk6iF1S2fVCmWg+5GXMaAefkVRQ9eeJqtFFUU69GkSEf+HIyhinsB3MjJR9MpU
YUutsLGiCxCT2ALgsuDV02rv7rrATK9PicHFnL5aFQa9Tt+FiMmb33O88iq15p50
slUyTuosrpq8/ML3PBtWGGjdRhxWLogXkX/6qbH81MJdBsGUjPkAnZ4DxX0jjNed
5zaHw2D3kgfV0WHau9ji+i79EJTdbYW0gz+KgL0g/ssVlX0Rvd3SWDacY87AbeMQ
b1Tl3iOXqt6nqHupxgWthAnrc81bz0NrabmKCnWCQLlYiuvJ+hN945H4uzjVh5Tx
PS0Nf17zTZsrWQgkz/ei4SIQtg/3lBm70BSsSpu+JtFJ8P+SB64maqAhhaF4mlEk
SA5cNaY+TKTO9up3aUWnYi/GFV2R3l+wTuNiC4QDmFZRWA4RrM0EK1HrhE+5fnxJ
cPBU48QB+IrZOI0qoqd/8XxHyEe/qzJ7Ml7wLBMzPOyr9ST6PSmoDQrT4mxeHAVE
ogfjJ5LjaY4kyJp/u5LsvhzF6sS5InvME2YnXXAb4nvxohPFFKY9iWDZ3W+jN6xD
zQ40bdQDVZW6fXC+HbLBAoIBAQDQkmZYb6JhrZzSQbXoQOFevPi2odeqGCqgwLAD
fp7ZMQisQYpcXmZtyBOWX8ZO+1O5KtXEFsdf+97rwOqMWVDmd2Q2VMSmW++Ni4U8
HZvV2gfYZISds2PXtWVLF9UNuXZ+a+HPPDpqKenyaLJtMvr1xX2kBRsi1CMk6yLI
tCIwh4rnDiYJYHrmIggP/w1YllCkM5k33OeFuzPnW2rY0z+Q260Cxr3ouktWJ4tz
U7vssrZh3LtvWXvkSh7mbotON6YUXpeX2WV/E/7Kh/bm8uLZGuYVhHctvjUmYpA2
LFk6i3Mulh0OHab3WcOQV+Dpcut6QBvS6aJsxYh/tWIsn3M3AoIBAQDBEnAzEZ2S
cpOoXSKOaYpoQ7wnwRONckJ2FKYKA7anRX4FTW6d3E2Jb/+CmXqzcRWWSvNt7ab/
N+fXVLi1Nc2fC5BI0hFEVvPwp9mnMH8HCG7QcHQAhjYaKS1QeCEyLCudzcNBXoR9
OuKTQcJd9tX0oJj6GNuY76gmxH3Smgwim2fPsHX0A2kekpyqVS3zHo47oeUO0N/Q
WWNcQ49+9T2KZXF116rjL1TDZkUHvGi6p1wSAc/J5ixQ6EagfJ72PujGBkpRTTiR
Fl/Qp4Ldy7S7AzOeiP3/w/0j5qL0NN0ZjUnoOr8u+1WaUyxTxN4+TZG3ThIYIAK1
UTs6VLz2gmhBAoIBABx2Dc89lIv9s+uhGeCSke5qnQnW9eX5HEAJazte2PBMV6Gh
4+6M1y9d4QZhFV+LvjYDWV5DuXsolJfZIGh8e6SnYB5l3NvSqdLH2iuE4tIAyZdG
yC3438P8tdDUdLdFupyvvgWYc2QvSgRRMx/hmAtXorhyFezfw9fy2jFHG29B37t9
28TlzH+A31bHeBvBj0mI3PyZgWJnVELa366szPzIbUh2tE2Atm0QQmA/aeJ31Jlw
FIeyT0ysrKDHLu1CfMBE1CzddpMruFYMza1gMYJswD7pb5XnYbtWMdWioZ5yjwop
Y9ecRj90mVImG8PfcbCh9OoIBakQH3tF1hq+u2sCggEATdST/FJGlgmwMnfQ/V3Y
WK2thM0Vh7ieyCEMyg6zK/0cjyCmzeZIL3ZBpzEdwIZ+sEZomVDrOAkeYbSafRpC
WLH9qQ1dvpHa5pGTcQ1gt8ITgd1DNg7kcmlVBhJXN3WM46FV690hRaZePgSNSPm/
SE0RPgiVRbKes3oUSrik2bKSB6xX8FULpDJwC04pJs+TgMCDqRRUlRXjswbdKs3L
0CWStnGJRuoGnnp0q2itQ0lCGVQ3omkyRi9MgVebcSLtDR7uCJY7jmlZmLBeVfDP
W3Av9+G7msY0HqvT1uQUmT9WotJDzbmtyXdr8Bz1hmIYsq87JhSJYvRrDtmoDyuE
wQKCAQBYY7G1HIhLEeS07UZ1yMxSCLZM3rLqEfpckzy68aA48HeiockwbtShky/8
D4pvSwZnTF9VWxXKaK46FuIULSUXc8eMSx2pCShQ5nFHa04j4ZjPwsWRrUc98AiO
pkbSgfxmKwAHpRBlimMleG+kXz6Urr5CJVQyWMP1hXTpGR1HME1z9ZbaACwvfMJk
0xCytMv3/m7JYiCfHRsc09sjHZQZtou0JpRczkxustxXL2wylvAjI4hNwYIl7Oj8
yzhhDzoqUGOA8uhyXZtG6NfPMr5pBo0J/pskaHco8UNV+gjOwewHrwd7K2NZmQQj
sKOYrVeRKuwd/MuNfkJTA8MOwLM4
-----END PRIVATE KEY-----
30
src/auracast/server/generate_ca_cert.sh
Normal file
@@ -0,0 +1,30 @@
#!/bin/bash
# Script to generate a CA cert/key for signing device/server certificates
# Outputs: ca_cert.pem, ca_key.pem (plus ca_cert.crt and ca_cert.der copies)

CA_DIR=certs/ca
mkdir -p "$CA_DIR"
CA_CERT=$CA_DIR/ca_cert.pem
CA_KEY=$CA_DIR/ca_key.pem

# Generate CA key and cert (20 year expiry)
echo "Generating CA key and certificate (20 year expiry)..."
openssl req -x509 -newkey rsa:4096 -days 7300 -nodes -subj "/CN=SummitWaveCA" -keyout "$CA_KEY" -out "$CA_CERT"

# PEM version (for most browsers)
cp "$CA_CERT" "$CA_DIR/ca_cert.crt"
# DER version (for Windows)
openssl x509 -in "$CA_CERT" -outform der -out "$CA_DIR/ca_cert.der"

# Output summary
echo "CA cert: $CA_CERT"
echo "CA cert (CRT for browser import): $CA_DIR/ca_cert.crt"
echo "CA key: $CA_KEY"
echo "Distribute $CA_CERT or $CA_DIR/ca_cert.crt to clients to trust this device."
echo "Keep $CA_KEY secret and offline except when signing device CSRs."
399
src/auracast/server/multicast_frontend.py
Normal file
@@ -0,0 +1,399 @@
# frontend/app.py
import os
import time
import streamlit as st
import requests
from auracast import auracast_config
import logging as log

# Track whether WebRTC stream is active across Streamlit reruns
if 'stream_started' not in st.session_state:
    st.session_state['stream_started'] = False

# Global: desired packetization time in ms for Opus (should match backend)
PTIME = 40
BACKEND_URL = "http://localhost:5000"
#TRANSPORT1 = "serial:/dev/serial/by-id/usb-ZEPHYR_Zephyr_HCI_UART_sample_B53C372677E14460-if00,115200,rtscts"
#TRANSPORT2 = "serial:/dev/serial/by-id/usb-ZEPHYR_Zephyr_HCI_UART_sample_CC69A2912F84AE5E-if00,115200,rtscts"

TRANSPORT1 = 'serial:/dev/ttyAMA3,1000000,rtscts'  # transport for raspberry pi gpio header
TRANSPORT2 = 'serial:/dev/ttyAMA4,1000000,rtscts'  # transport for raspberry pi gpio header
QUALITY_MAP = {
    "High (48kHz)": {"rate": 48000, "octets": 120},
    "Good (32kHz)": {"rate": 32000, "octets": 80},
    "Medium (24kHz)": {"rate": 24000, "octets": 60},
    "Fair (16kHz)": {"rate": 16000, "octets": 40},
}
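Each QUALITY_MAP tier corresponds to an LC3 bitrate: with the codec's 10 ms frames implied by the octet counts above, bitrate = octets_per_frame × 8 × 100 bps. A quick sketch (hypothetical helper, assuming a 10 ms frame duration):

```python
def lc3_bitrate_bps(octets_per_frame: int, frame_ms: float = 10.0) -> int:
    # bits per frame divided by frame duration in seconds
    return int(octets_per_frame * 8 * (1000 / frame_ms))
```

So "High (48kHz)" with 120 octets is 96 kbps, "Medium (24kHz)" with 60 octets is 48 kbps, and "Fair (16kHz)" with 40 octets is 32 kbps.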
# Try loading persisted settings from backend
saved_settings = {}
try:
    resp = requests.get(f"{BACKEND_URL}/status", timeout=1)
    if resp.status_code == 200:
        saved_settings = resp.json()
except Exception:
    saved_settings = {}

st.title("🎙️ Auracast Audio Mode Control")

# Audio mode selection with persisted default
options = ["Webapp", "USB/Network", "Demo"]
saved_audio_mode = saved_settings.get("audio_mode", "Webapp")
if saved_audio_mode not in options:
    saved_audio_mode = "Webapp"

audio_mode = st.selectbox(
    "Audio Mode",
    options,
    index=options.index(saved_audio_mode),
    help="Select the audio input source. Choose 'Webapp' for browser microphone, 'USB/Network' for a connected hardware device, or 'Demo' for a simulated stream."
)

if audio_mode == "Demo":
    demo_stream_map = {
        "1 × 48kHz": {"quality": "High (48kHz)", "streams": 1},
        "2 × 24kHz": {"quality": "Medium (24kHz)", "streams": 2},
        "3 × 16kHz": {"quality": "Fair (16kHz)", "streams": 3},
        "2 × 48kHz": {"quality": "High (48kHz)", "streams": 2},
        "4 × 24kHz": {"quality": "Medium (24kHz)", "streams": 4},
        "6 × 16kHz": {"quality": "Fair (16kHz)", "streams": 6},
    }
    demo_options = list(demo_stream_map.keys())
    default_demo = demo_options[0]
    demo_selected = st.selectbox(
        "Demo Stream Type",
        demo_options,
        index=0,
        help="Select the demo stream configuration."
    )
    #st.info(f"Demo mode selected: {demo_selected} (Streams: {demo_stream_map[demo_selected]['streams']}, Rate: {demo_stream_map[demo_selected]['rate']} Hz)")
    # Start/Stop buttons for demo mode
    if 'demo_stream_started' not in st.session_state:
        st.session_state['demo_stream_started'] = False
    col1, col2 = st.columns(2)
    with col1:
        start_demo = st.button("Start Demo Stream")
    with col2:
        stop_demo = st.button("Stop Demo Stream")
    if start_demo:
        # Always stop any running stream for clean state
        try:
            requests.post(f"{BACKEND_URL}/stop_audio").json()
        except Exception:
            pass
        time.sleep(1)
        demo_cfg = demo_stream_map[demo_selected]
        # Octets per frame logic matches quality_map
        q = QUALITY_MAP[demo_cfg['quality']]

        # Language configs and test files
        lang_cfgs = [
            (auracast_config.AuracastBigConfigDeu, 'de'),
            (auracast_config.AuracastBigConfigEng, 'en'),
            (auracast_config.AuracastBigConfigFra, 'fr'),
            (auracast_config.AuracastBigConfigSpa, 'es'),
            (auracast_config.AuracastBigConfigIta, 'it'),
            (auracast_config.AuracastBigConfigPol, 'pl'),
        ]
        bigs1 = []
        for i in range(demo_cfg['streams']):
            cfg_cls, lang = lang_cfgs[i % len(lang_cfgs)]
            bigs1.append(cfg_cls(
                audio_source=f'file:../testdata/wave_particle_5min_{lang}_{int(q["rate"]/1000)}kHz_mono.wav',
                iso_que_len=32,
                sampling_frequency=q['rate'],
                octets_per_frame=q['octets'],
            ))

        # Split bigs into two configs if needed
        max_per_mc = {48000: 1, 24000: 2, 16000: 3}
        max_streams = max_per_mc.get(q['rate'], 3)
        bigs2 = []
        if len(bigs1) > max_streams:
            bigs2 = bigs1[max_streams:]
            bigs1 = bigs1[:max_streams]
        config1 = auracast_config.AuracastConfigGroup(
            auracast_sampling_rate_hz=q['rate'],
            octets_per_frame=q['octets'],
            transport=TRANSPORT1,
            bigs=bigs1
        )
        config2 = None
        if bigs2:
            config2 = auracast_config.AuracastConfigGroup(
                auracast_sampling_rate_hz=q['rate'],
                octets_per_frame=q['octets'],
                transport=TRANSPORT2,
                bigs=bigs2
            )
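The demo-mode split across the two multicasters can be expressed as a pure helper (hypothetical function, restating the per-rate capacities from max_per_mc above):

```python
def split_streams(bigs: list, rate: int) -> tuple[list, list]:
    """Split stream configs between two multicasters by airtime capacity."""
    # Max streams one multicaster can carry at a given sampling rate
    max_per_mc = {48000: 1, 24000: 2, 16000: 3}
    cap = max_per_mc.get(rate, 3)
    return bigs[:cap], bigs[cap:]
```

Anything beyond the first multicaster's capacity spills into the second list, which drives whether `config2`/`/init2` is used at all.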
# Call /init and /init2
|
||||
try:
|
||||
r1 = requests.post(f"{BACKEND_URL}/init", json=config1.model_dump())
|
||||
if r1.status_code == 200:
|
||||
msg = f"Demo stream started on multicaster 1 ({len(bigs1)} streams)"
|
||||
st.session_state['demo_stream_started'] = True
|
||||
st.success(msg)
|
||||
else:
|
||||
st.session_state['demo_stream_started'] = False
|
||||
st.error(f"Failed to initialize multicaster 1: {r1.text}")
|
||||
if config2:
|
||||
r2 = requests.post(f"{BACKEND_URL}/init2", json=config2.model_dump())
|
||||
if r2.status_code == 200:
|
||||
st.success(f"Demo stream started on multicaster 2 ({len(bigs2)} streams)")
|
||||
else:
|
||||
st.error(f"Failed to initialize multicaster 2: {r2.text}")
|
||||
except Exception as e:
|
||||
st.session_state['demo_stream_started'] = False
|
||||
st.error(f"Error: {e}")
|
||||
elif stop_demo:
|
||||
try:
|
||||
r = requests.post(f"{BACKEND_URL}/stop_audio").json()
|
||||
st.session_state['demo_stream_started'] = False
|
||||
if r.get('was_running'):
|
||||
st.info("Demo stream stopped.")
|
||||
else:
|
||||
st.info("Demo stream was not running.")
|
||||
except Exception as e:
|
||||
st.error(f"Error: {e}")
|
||||
elif st.session_state['demo_stream_started']:
|
||||
st.success(f"Demo stream running: {demo_selected}")
|
||||
else:
|
||||
st.info("Demo stream not running.")
|
||||
quality = None # Not used in demo mode
|
||||
else:
|
||||
# Stream quality selection (now enabled)
|
||||
|
||||
quality_options = list(QUALITY_MAP.keys())
|
||||
default_quality = "Medium (24kHz)" if "Medium (24kHz)" in quality_options else quality_options[0]
|
||||
quality = st.selectbox(
|
||||
"Stream Quality (Sampling Rate)",
|
||||
quality_options,
|
||||
index=quality_options.index(default_quality),
|
||||
help="Select the audio sampling rate for the stream. Lower rates may improve compatibility."
|
||||
)
|
||||
default_name = saved_settings.get('channel_names', ["Broadcast0"])[0]
|
||||
default_lang = saved_settings.get('languages', ["deu"])[0]
|
||||
default_input = saved_settings.get('input_device') or 'default'
|
||||
stream_name = st.text_input(
|
||||
"Channel Name",
|
||||
value=default_name,
|
||||
help="The primary name for your broadcast. Like the SSID of a WLAN, it identifies your stream for receivers."
|
||||
)
|
||||
raw_program_info = saved_settings.get('program_info', default_name)
|
||||
if isinstance(raw_program_info, list) and raw_program_info:
|
||||
default_program_info = raw_program_info[0]
|
||||
else:
|
||||
default_program_info = raw_program_info
|
||||
program_info = st.text_input(
|
||||
"Program Info",
|
||||
value=default_program_info,
|
||||
help="Additional details about the broadcast program, such as its content or purpose. Shown to receivers for more context."
|
||||
)
|
||||
language = st.text_input(
|
||||
"Language (ISO 639-3)",
|
||||
value=default_lang,
|
||||
help="Three-letter language code (e.g., 'eng' for English, 'deu' for German). Used by receivers to display the language of the stream. See: https://en.wikipedia.org/wiki/List_of_ISO_639-3_codes"
|
||||
)
|
||||
# Gain slider for Webapp mode
|
||||
if audio_mode == "Webapp":
|
||||
mic_gain = st.slider("Microphone Gain", 0.0, 2.0, 1.0, 0.1, help="Adjust microphone volume sent to Auracast")
|
||||
else:
|
||||
mic_gain = 1.0
|
||||
|
||||
# Input device selection for USB mode
|
||||
if audio_mode == "USB/Network":
|
||||
resp = requests.get(f"{BACKEND_URL}/audio_inputs")
|
||||
device_list = resp.json().get('inputs', [])
|
||||
# Display "name [id]" but use name as value
|
||||
input_options = [f"{d['name']} [{d['id']}]" for d in device_list]
|
||||
option_name_map = {f"{d['name']} [{d['id']}]": d['name'] for d in device_list}
|
||||
device_names = [d['name'] for d in device_list]
|
||||
|
||||
# Determine default input by name
|
||||
default_input_name = saved_settings.get('input_device')
|
||||
if default_input_name not in device_names and device_names:
|
||||
default_input_name = device_names[0]
|
||||
default_input_label = None
|
||||
for label, name in option_name_map.items():
|
||||
if name == default_input_name:
|
||||
default_input_label = label
|
||||
break
|
||||
if not input_options:
|
||||
st.warning("No hardware audio input devices found. Plug in a USB input device and click Refresh.")
|
||||
if st.button("Refresh"):
|
||||
try:
|
||||
requests.post(f"{BACKEND_URL}/refresh_audio_inputs", timeout=3)
|
||||
except Exception as e:
|
||||
st.error(f"Failed to refresh devices: {e}")
|
||||
st.rerun()
|
||||
input_device = None
|
||||
else:
|
||||
col1, col2 = st.columns([3, 1], vertical_alignment="bottom")
|
||||
with col1:
|
||||
selected_option = st.selectbox(
|
||||
"Input Device",
|
||||
input_options,
|
||||
index=input_options.index(default_input_label) if default_input_label in input_options else 0
|
||||
)
|
||||
with col2:
|
||||
if st.button("Refresh"):
|
||||
try:
|
||||
requests.post(f"{BACKEND_URL}/refresh_audio_inputs", timeout=3)
|
||||
except Exception as e:
|
||||
st.error(f"Failed to refresh devices: {e}")
|
||||
st.rerun()
|
||||
# Send only the device name to backend
|
||||
input_device = option_name_map[selected_option] if selected_option in option_name_map else None
|
||||
else:
|
||||
input_device = None
|
||||
|
||||
start_stream = st.button("Start Auracast")
|
||||
stop_stream = st.button("Stop Auracast")
|
||||
|
||||
# If gain slider moved while streaming, send update to JS without restarting
|
||||
if audio_mode == "Webapp" and st.session_state.get('stream_started'):
|
||||
update_js = f"""
|
||||
<script>
|
||||
if (window.gainNode) {{ window.gainNode.gain.value = {mic_gain}; }}
|
||||
</script>
|
||||
"""
|
||||
st.components.v1.html(update_js, height=0)
|
||||
|
||||
if stop_stream:
    st.session_state['stream_started'] = False
    try:
        r = requests.post(f"{BACKEND_URL}/stop_audio").json()
        if r['was_running']:
            st.success("Stream Stopped!")
        else:
            st.success("Stream was not running.")
    except Exception as e:
        st.error(f"Error: {e}")
    # Ensure the existing WebRTC connection is fully closed so that a fresh
    # connection is created the next time we start the stream.
    if audio_mode == "Webapp":
        cleanup_js = """
        <script>
        if (window.webrtc_pc) {
            window.webrtc_pc.getSenders().forEach(s => s.track && s.track.stop());
            window.webrtc_pc.close();
            window.webrtc_pc = null;
        }
        window.webrtc_started = false;
        </script>
        """
        st.components.v1.html(cleanup_js, height=0)

if start_stream:
    # Always send stop first so the backend is in a clean state, regardless of current status
    r = requests.post(f"{BACKEND_URL}/stop_audio").json()
    if r['was_running']:
        st.success("Stream Stopped!")

    # A small pause lets the backend fully release audio devices before re-init
    time.sleep(1)
    # Prepare the config using the model (do NOT send qos_config, only relevant fields)
    q = QUALITY_MAP[quality]
    config = auracast_config.AuracastConfigGroup(
        auracast_sampling_rate_hz=q['rate'],
        octets_per_frame=q['octets'],
        transport=TRANSPORT1,  # transport for the Raspberry Pi GPIO header
        bigs=[
            auracast_config.AuracastBigConfig(
                name=stream_name,
                program_info=program_info,
                language=language,
                audio_source=(
                    f"device:{input_device}" if audio_mode == "USB/Network" else (
                        "webrtc" if audio_mode == "Webapp" else "network"
                    )
                ),
                input_format=(f"int16le,{q['rate']},1" if audio_mode == "USB/Network" else "auto"),
                iso_que_len=1,
                sampling_frequency=q['rate'],
                octets_per_frame=q['octets'],
            ),
        ],
    )

    try:
        r = requests.post(f"{BACKEND_URL}/init", json=config.model_dump())
        if r.status_code == 200:
            st.success("Stream Started!")
        else:
            st.error(f"Failed to initialize: {r.text}")
    except Exception as e:
        st.error(f"Error: {e}")

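The quality presets feed both the sampling rate and the LC3 frame size (`octets_per_frame`) into the config. Assuming the common LC3 frame duration of 10 ms, the broadcast bitrate follows directly from the frame size. The preset values in this sketch are illustrative, not the frontend's actual `QUALITY_MAP`:

```python
# Hypothetical quality presets in the spirit of QUALITY_MAP (names and values
# are assumptions for illustration, not taken from the actual frontend).
QUALITY_MAP = {
    "16 kHz": {"rate": 16000, "octets": 40},
    "24 kHz": {"rate": 24000, "octets": 60},
    "48 kHz": {"rate": 48000, "octets": 100},
}


def lc3_bitrate_bps(octets_per_frame: int, frame_duration_ms: float = 10.0) -> int:
    """Bitrate of an LC3 stream: one frame of `octets_per_frame` bytes
    every `frame_duration_ms` milliseconds."""
    return int(octets_per_frame * 8 * 1000 / frame_duration_ms)


for name, q in QUALITY_MAP.items():
    print(name, lc3_bitrate_bps(q["octets"]))
```

For example, 100 octets per 10 ms frame works out to 80 kbit/s.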
# Render / maintain the WebRTC component
if audio_mode == "Webapp" and (start_stream or st.session_state.get('stream_started')):
    st.markdown("Starting microphone; allow access if prompted and speak.")
    component = f"""
    <script>
    (async () => {{
        // Clean up any previous WebRTC connection before starting a new one
        if (window.webrtc_pc) {{
            window.webrtc_pc.getSenders().forEach(s => s.track && s.track.stop());
            window.webrtc_pc.close();
        }}
        const GAIN_VALUE = {mic_gain};
        const pc = new RTCPeerConnection(); // No STUN needed for localhost
        window.webrtc_pc = pc;
        window.webrtc_started = true;
        const micStream = await navigator.mediaDevices.getUserMedia({{audio: true}});
        // Create Web Audio gain processing
        const audioCtx = new (window.AudioContext || window.webkitAudioContext)();
        const source = audioCtx.createMediaStreamSource(micStream);
        const gainNode = audioCtx.createGain();
        gainNode.gain.value = GAIN_VALUE;
        // Expose for later adjustments
        window.gainNode = gainNode;
        const dest = audioCtx.createMediaStreamDestination();
        source.connect(gainNode).connect(dest);
        // Add processed tracks to WebRTC
        dest.stream.getTracks().forEach(t => pc.addTrack(t, dest.stream));
        // --- WebRTC offer/answer exchange ---
        const offer = await pc.createOffer();
        // Patch the SDP offer to include a=ptime using the global PTIME
        let sdp = offer.sdp;
        const ptime_line = 'a=ptime:{PTIME}';
        const maxptime_line = 'a=maxptime:{PTIME}';
        if (sdp.includes('a=sendrecv')) {{
            sdp = sdp.replace('a=sendrecv', 'a=sendrecv\\n' + ptime_line + '\\n' + maxptime_line);
        }} else {{
            sdp += '\\n' + ptime_line + '\\n' + maxptime_line;
        }}
        const patched_offer = new RTCSessionDescription({{sdp, type: offer.type}});
        await pc.setLocalDescription(patched_offer);
        // Send the offer to the backend
        const response = await fetch(
            "{BACKEND_URL}/offer",
            {{
                method: 'POST',
                headers: {{'Content-Type': 'application/json'}},
                body: JSON.stringify({{sdp: pc.localDescription.sdp, type: pc.localDescription.type}})
            }}
        );
        const answer = await response.json();
        await pc.setRemoteDescription(new RTCSessionDescription({{sdp: answer.sdp, type: answer.type}}));
    }})();
    </script>
    """
    st.components.v1.html(component, height=0)
    st.session_state['stream_started'] = True

# else:
#     st.header("Advertised Streams (Cloud Announcements)")
#     st.info("This feature requires backend support to list advertised streams.")
#     # Placeholder for future implementation
#     # Example: r = requests.get(f"{BACKEND_URL}/advertised_streams")
#     # if r.status_code == 200:
#     #     streams = r.json()
#     #     for s in streams:
#     #         st.write(s)
#     # else:
#     #     st.error("Could not fetch advertised streams.")

log.basicConfig(
    level=os.environ.get('LOG_LEVEL', log.DEBUG),
    format='%(module)s.py:%(lineno)d %(levelname)s: %(message)s'
)
421
src/auracast/server/multicast_server.py
Normal file
@@ -0,0 +1,421 @@
import glob
import os
import logging as log
import uuid
import json
import sys
from datetime import datetime
import asyncio
import numpy as np
from pydantic import BaseModel
from fastapi import FastAPI, HTTPException
from fastapi.middleware.cors import CORSMiddleware
from auracast import multicast_control, auracast_config
from aiortc import RTCPeerConnection, RTCSessionDescription, MediaStreamTrack
import av
import av.audio.layout
import sounddevice as sd  # type: ignore
from typing import Set, List, Dict, Any
import traceback


PTIME = 40  # TODO: seems to have no effect at all
pcs: Set[RTCPeerConnection] = set()  # keep refs so they don't GC early
AUDIO_INPUT_DEVICES_CACHE: List[Dict[str, Any]] = []

class Offer(BaseModel):
    sdp: str
    type: str


def get_device_index_by_name(name: str):
    """Return the device index for a given device name, or None if not found."""
    for d in AUDIO_INPUT_DEVICES_CACHE:
        if d["name"] == name:
            return d["id"]
    return None

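The backend persists device *names* rather than sounddevice indices, because indices can shift whenever devices are re-scanned; `get_device_index_by_name` re-resolves a name at init time. A minimal sketch of that lookup against a hand-built cache (the device entries below are fabricated examples):

```python
# Fabricated example cache; real entries come from sounddevice.query_devices().
AUDIO_INPUT_DEVICES_CACHE = [
    {"name": "USB Audio Device", "id": 3, "max_input_channels": 1},
    {"name": "Built-in Microphone", "id": 0, "max_input_channels": 2},
]


def get_device_index_by_name(name):
    """Return the sounddevice index for a device name, or None if not found."""
    for d in AUDIO_INPUT_DEVICES_CACHE:
        if d["name"] == name:
            return d["id"]
    return None


print(get_device_index_by_name("USB Audio Device"))
print(get_device_index_by_name("Missing Device"))
```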

# Path to persist stream settings
STREAM_SETTINGS_FILE = os.path.join(os.path.dirname(__file__), 'stream_settings.json')


def load_stream_settings() -> dict:
    """Load persisted stream settings if available."""
    if os.path.exists(STREAM_SETTINGS_FILE):
        try:
            with open(STREAM_SETTINGS_FILE, 'r', encoding='utf-8') as f:
                return json.load(f)
        except Exception:
            return {}
    return {}


def save_stream_settings(settings: dict):
    """Save stream settings to disk."""
    try:
        with open(STREAM_SETTINGS_FILE, 'w', encoding='utf-8') as f:
            json.dump(settings, f, indent=2)
    except Exception as e:
        log.error('Unable to persist stream settings: %s', e)

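The two helpers amount to a JSON round trip on disk. A minimal sketch of the persist/restore cycle, using a temporary file instead of the real `stream_settings.json` (the settings values here are made-up examples):

```python
import json
import os
import tempfile

# Round-trip sketch of the stream-settings persistence, with example values.
settings = {
    "channel_names": ["Main Hall"],
    "languages": ["eng"],
    "audio_mode": "USB",
    "input_device": "USB Audio Device",
}

# Write to a throwaway path rather than the server's settings file.
path = os.path.join(tempfile.mkdtemp(), "stream_settings.json")
with open(path, "w", encoding="utf-8") as f:
    json.dump(settings, f, indent=2)

# Reading it back yields an equal dict.
with open(path, "r", encoding="utf-8") as f:
    restored = json.load(f)

print(restored == settings)
```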
app = FastAPI()

# Allow CORS for the frontend on localhost
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],  # You can restrict this to ["http://localhost:8501"] if you want
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# Initialize global configuration
global_config_group = auracast_config.AuracastConfigGroup()

# Multicast controllers
multicaster1: multicast_control.Multicaster | None = None
multicaster2: multicast_control.Multicaster | None = None

@app.post("/init")
async def initialize(conf: auracast_config.AuracastConfigGroup):
    """Initializes the primary broadcaster (multicaster1)."""
    global global_config_group
    global multicaster1
    try:
        if conf.transport == 'auto':
            serial_devices = glob.glob('/dev/serial/by-id/*')
            log.info('Found serial devices: %s', serial_devices)
            for device in serial_devices:
                if 'usb-ZEPHYR_Zephyr_HCI_UART_sample' in device:
                    log.info('Using: %s', device)
                    conf.transport = f'serial:{device},115200,rtscts'
                    break
            if conf.transport == 'auto':
                raise HTTPException(status_code=500, detail='No suitable transport found.')
        # Derive audio_mode and input_device from the first BIG audio_source
        first_source = conf.bigs[0].audio_source if conf.bigs else ''
        if first_source.startswith('device:'):
            audio_mode_persist = 'USB'
            input_device_name = first_source.split(':', 1)[1] if ':' in first_source else None
            # Map the device name to its current index for use with sounddevice
            device_index = get_device_index_by_name(input_device_name) if input_device_name else None
            # Patch the config to use the index for sounddevice (but persist the name)
            if device_index is not None:
                for big in conf.bigs:
                    if big.audio_source.startswith('device:'):
                        big.audio_source = f'device:{device_index}'
            else:
                log.error(f"Device name '{input_device_name}' not found in current device list.")
                raise HTTPException(status_code=400, detail=f"Audio device '{input_device_name}' not found.")
        elif first_source == 'webrtc':
            audio_mode_persist = 'Webapp'
            input_device_name = None
        elif first_source.startswith('file:'):
            audio_mode_persist = 'Demo'
            input_device_name = None
        else:
            audio_mode_persist = 'Network'
            input_device_name = None
        save_stream_settings({
            'channel_names': [big.name for big in conf.bigs],
            'languages': [big.language for big in conf.bigs],
            'audio_mode': audio_mode_persist,
            'input_device': input_device_name,
            'program_info': [getattr(big, 'program_info', None) for big in conf.bigs],
            'gain': [getattr(big, 'input_gain', 1.0) for big in conf.bigs],
            'timestamp': datetime.utcnow().isoformat()
        })
        global_config_group = conf
        if multicaster1 is not None:
            try:
                await multicaster1.shutdown()
            except Exception:
                log.warning("Failed to shutdown previous multicaster", exc_info=True)
        log.info('Initializing multicaster1 with config:\n %s', conf.model_dump_json(indent=2))
        multicaster1 = multicast_control.Multicaster(conf, conf.bigs)
        await multicaster1.init_broadcast()
        if any(big.audio_source.startswith("device:") or big.audio_source.startswith("file:") for big in conf.bigs):
            log.info("Auto-starting streaming on multicaster1")
            await multicaster1.start_streaming()
        return {"status": "initialized"}
    except HTTPException:
        # Re-raise as-is so 400s are not re-wrapped as 500s below
        raise
    except Exception as e:
        log.error("Exception in /init: %s", traceback.format_exc())
        raise HTTPException(status_code=500, detail=str(e))

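`/init` infers the persisted audio mode from a prefix scheme on `audio_source`. That mapping can be isolated as a small pure function (the function name here is ours, not part of the backend):

```python
def derive_audio_mode(audio_source: str) -> str:
    """Map an audio_source string to the persisted audio mode, mirroring /init.

    Prefix scheme: 'device:<name>' -> USB capture, 'webrtc' -> browser
    microphone, 'file:<path>' -> demo playback, anything else -> network input.
    """
    if audio_source.startswith("device:"):
        return "USB"
    if audio_source == "webrtc":
        return "Webapp"
    if audio_source.startswith("file:"):
        return "Demo"
    return "Network"


print(derive_audio_mode("device:USB Audio Device"))
print(derive_audio_mode("webrtc"))
print(derive_audio_mode("file:testdata/tone.wav"))
print(derive_audio_mode("network"))
```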
@app.post("/init2")
async def initialize2(conf: auracast_config.AuracastConfigGroup):
    """Initializes the secondary broadcaster (multicaster2). Does NOT persist stream settings."""
    global multicaster2
    try:
        if conf.transport == 'auto':
            serial_devices = glob.glob('/dev/serial/by-id/*')
            log.info('Found serial devices: %s', serial_devices)
            for device in serial_devices:
                if 'usb-ZEPHYR_Zephyr_HCI_UART_sample' in device:
                    log.info('Using: %s', device)
                    conf.transport = f'serial:{device},115200,rtscts'
                    break
            if conf.transport == 'auto':
                raise HTTPException(status_code=500, detail='No suitable transport found.')
        # Patch device names to indices for sounddevice
        for big in conf.bigs:
            if big.audio_source.startswith('device:'):
                device_name = big.audio_source.split(':', 1)[1]
                device_index = get_device_index_by_name(device_name)
                if device_index is not None:
                    big.audio_source = f'device:{device_index}'
                else:
                    log.error(f"Device name '{device_name}' not found in current device list.")
                    raise HTTPException(status_code=400, detail=f"Audio device '{device_name}' not found.")
        log.info('Initializing multicaster2 with config:\n %s', conf.model_dump_json(indent=2))
        multicaster2 = multicast_control.Multicaster(conf, conf.bigs)
        await multicaster2.init_broadcast()
        if any(big.audio_source.startswith("device:") or big.audio_source.startswith("file:") for big in conf.bigs):
            log.info("Auto-starting streaming on multicaster2")
            await multicaster2.start_streaming()
        return {"status": "initialized"}
    except HTTPException:
        raise
    except Exception as e:
        log.error("Exception in /init2: %s", traceback.format_exc())
        raise HTTPException(status_code=500, detail=str(e))

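The `transport == 'auto'` branch in both init endpoints picks the first Zephyr HCI UART adapter found under `/dev/serial/by-id/`. The selection logic can be sketched as a pure function; the candidate paths below are fabricated examples, not real device IDs:

```python
def pick_zephyr_transport(serial_devices):
    """Pick the first Zephyr HCI UART adapter and build the transport string,
    mirroring the 'auto' branch of /init and /init2. Returns None if absent."""
    for device in serial_devices:
        if 'usb-ZEPHYR_Zephyr_HCI_UART_sample' in device:
            return f'serial:{device},115200,rtscts'
    return None


# Fabricated example paths for illustration only.
candidates = [
    '/dev/serial/by-id/usb-FTDI_FT232R_USB_UART-if00-port0',
    '/dev/serial/by-id/usb-ZEPHYR_Zephyr_HCI_UART_sample_XXXX-if00',
]
print(pick_zephyr_transport(candidates))
```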
@app.post("/stream_lc3")
async def send_audio(audio_data: dict[str, str]):
    """Sends a block of pre-encoded LC3 audio."""
    if multicaster1 is None:
        raise HTTPException(status_code=500, detail='Auracast endpoint was never initialized')
    try:
        for big in global_config_group.bigs:
            if big.language not in audio_data:
                raise HTTPException(status_code=500, detail='language list mismatch')
            log.info('Received a send audio request for %s', big.language)
            big.audio_source = audio_data[big.language].encode('latin-1')  # TODO: use base64 encoding

        multicaster1.big_conf = global_config_group.bigs
        await multicaster1.start_streaming()
        return {"status": "audio_sent"}
    except HTTPException:
        raise
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))

@app.post("/stop_audio")
async def stop_audio():
    """Stops streaming on both multicaster1 and multicaster2."""
    try:
        # First close any active WebRTC peer connections so their track loops finish cleanly
        close_tasks = [pc.close() for pc in list(pcs)]
        pcs.clear()
        if close_tasks:
            await asyncio.gather(*close_tasks, return_exceptions=True)

        # Now shut down both multicasters and release audio devices
        running = False
        if multicaster1 is not None:
            await multicaster1.stop_streaming()
            await multicaster1.reset()  # Fully reset controller and advertising
            running = True
        if multicaster2 is not None:
            await multicaster2.stop_streaming()
            await multicaster2.reset()  # Fully reset controller and advertising
            running = True

        return {"status": "stopped", "was_running": running}
    except Exception as e:
        log.error("Exception in /stop_audio: %s", traceback.format_exc())
        raise HTTPException(status_code=500, detail=str(e))

@app.get("/status")
async def get_status():
    """Gets the current status of the multicaster together with persisted stream info."""
    status = multicaster1.get_status() if multicaster1 else {
        'is_initialized': False,
        'is_streaming': False,
    }
    status.update(load_stream_settings())
    return status

async def scan_audio_devices():
    """Scans for available audio devices and updates the cache."""
    global AUDIO_INPUT_DEVICES_CACHE
    log.info("Scanning for audio input devices...")
    try:
        if sys.platform == 'linux':
            log.info("Re-initializing sounddevice to scan for new devices")
            sd._terminate()
            sd._initialize()

        devs = sd.query_devices()
        inputs = [
            dict(d, id=idx)
            for idx, d in enumerate(devs)
            if d.get("max_input_channels", 0) > 0
        ]
        log.info('Found %d audio input devices: %s', len(inputs), inputs)
        AUDIO_INPUT_DEVICES_CACHE = inputs
    except Exception:
        log.error("Exception while scanning audio devices:", exc_info=True)
        # Do not clear the cache on error; keep the last known good list

@app.on_event("startup")
async def startup_event():
    """Pre-scans audio devices on startup."""
    await scan_audio_devices()

@app.get("/audio_inputs")
async def list_audio_inputs():
    """Return available hardware audio input devices from the cache (the frontend selects by name)."""
    return {"inputs": AUDIO_INPUT_DEVICES_CACHE}


@app.post("/refresh_audio_inputs")
async def refresh_audio_inputs():
    """Triggers a re-scan of audio devices."""
    await scan_audio_devices()
    return {"status": "ok", "inputs": AUDIO_INPUT_DEVICES_CACHE}

@app.post("/offer")
async def offer(offer: Offer):
    log.info("/offer endpoint called")

    # If a previous PeerConnection is still alive, close it so we only ever keep one active.
    if pcs:
        log.info("Closing %d existing PeerConnection(s) before creating a new one", len(pcs))
        close_tasks = [p.close() for p in list(pcs)]
        await asyncio.gather(*close_tasks, return_exceptions=True)
        pcs.clear()

    pc = RTCPeerConnection()  # No STUN needed for localhost
    pcs.add(pc)
    id_ = uuid.uuid4().hex[:8]
    log.info(f"{id_}: new PeerConnection")

    # Create a directory for recordings - only for testing
    os.makedirs("./records", exist_ok=True)

    # Do NOT start the streamer yet – we'll start it lazily once we actually
    # receive the first audio frame, ensuring WebRTCAudioInput is ready and
    # avoiding race conditions on restarts.
    @pc.on("track")
    async def on_track(track: MediaStreamTrack):
        log.info(f"{id_}: track {track.kind} received")
        try:
            first = True
            while True:
                frame: av.audio.frame.AudioFrame = await track.recv()  # RTP audio frame (already decrypted)
                if first:
                    log.info(f"{id_}: frame layout={frame.layout}")
                    log.info(f"{id_}: frame format={frame.format}")
                    log.info(
                        f"{id_}: frame sample_rate={frame.sample_rate}, samples_per_channel={frame.samples}, planes={frame.planes}"
                    )
                    # Lazily start the streamer now that we know a track exists.
                    if multicaster1.streamer is None:
                        await multicaster1.start_streaming()
                        # Yield control so the Streamer coroutine has a chance to
                        # create the WebRTCAudioInput before we push samples.
                        await asyncio.sleep(0)
                    first = False

                # In the stereo case this is interleaved data: [L0, R0, L1, R1, ...]
                frame_array = frame.to_ndarray()
                # Flatten in case it's (1, N) or (N,)
                samples = frame_array.reshape(-1)
                if frame.layout.name == 'stereo':
                    mono_array = samples[::2]  # Take the left channel
                else:
                    mono_array = samples

                # Get the current WebRTC audio input (the streamer may have been restarted)
                big0 = list(multicaster1.bigs.values())[0]
                audio_input = big0.get('audio_input')
                # Wait until the streamer has instantiated the WebRTCAudioInput
                if audio_input is None or getattr(audio_input, 'closed', False):
                    continue
                # Feed mono PCM samples to the global WebRTC audio input
                await audio_input.put_samples(mono_array.astype(np.int16))
                # Save to a WAV file - only for testing
                # if not hasattr(pc, 'wav_writer'):
                #     import wave
                #     wav_path = f"./records/auracast_{id_}.wav"
                #     pc.wav_writer = wave.open(wav_path, "wb")
                #     pc.wav_writer.setnchannels(1)  # mono
                #     pc.wav_writer.setsampwidth(2)  # 16-bit PCM
                #     pc.wav_writer.setframerate(frame.sample_rate)
                # pcm_data = mono_array.astype(np.int16).tobytes()
                # pc.wav_writer.writeframes(pcm_data)

        except Exception as e:
            log.error(f"{id_}: Exception in on_track: {e}")
        finally:
            # Always close the wav file when the track ends or on error
            if hasattr(pc, 'wav_writer'):
                try:
                    pc.wav_writer.close()
                except Exception:
                    pass
                del pc.wav_writer

    # --- SDP negotiation ---
    log.info(f"{id_}: setting remote description")
    await pc.setRemoteDescription(RTCSessionDescription(**offer.model_dump()))

    log.info(f"{id_}: creating answer")
    answer = await pc.createAnswer()
    sdp = answer.sdp
    # Insert a=ptime using the global PTIME variable
    ptime_line = f"a=ptime:{PTIME}"
    if "a=sendrecv" in sdp:
        sdp = sdp.replace("a=sendrecv", f"a=sendrecv\n{ptime_line}")
    else:
        sdp += f"\n{ptime_line}"
    new_answer = RTCSessionDescription(sdp=sdp, type=answer.type)
    await pc.setLocalDescription(new_answer)
    log.info(f"{id_}: sending answer with {ptime_line}")
    return {"sdp": pc.localDescription.sdp,
            "type": pc.localDescription.type}
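The answer-side `a=ptime` patching in `/offer` is plain string surgery on the SDP text. Isolated as a helper (the function name is ours), with a toy SDP fragment:

```python
PTIME = 40  # matches the backend's global


def patch_ptime(sdp: str, ptime: int = PTIME) -> str:
    """Insert an a=ptime attribute after a=sendrecv, mirroring the /offer
    answer patching; append it at the end when no a=sendrecv is present."""
    ptime_line = f"a=ptime:{ptime}"
    if "a=sendrecv" in sdp:
        return sdp.replace("a=sendrecv", f"a=sendrecv\n{ptime_line}")
    return sdp + f"\n{ptime_line}"


# Toy SDP fragment for illustration only.
sdp = "v=0\nm=audio 9 UDP/TLS/RTP/SAVPF 111\na=sendrecv"
print(patch_ptime(sdp))
```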

@app.post("/shutdown")
async def shutdown():
    """Stops broadcasting and releases all audio/Bluetooth resources."""
    if multicaster1 is None:
        raise HTTPException(status_code=500, detail='Auracast endpoint was never initialized')
    try:
        await multicaster1.shutdown()
        return {"status": "stopped"}
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))


if __name__ == '__main__':
    os.chdir(os.path.dirname(__file__))
    import uvicorn
    log.basicConfig(  # for debug log level: export LOG_LEVEL=DEBUG
        level=os.environ.get('LOG_LEVEL', log.INFO),
        format='%(module)s.py:%(lineno)d %(levelname)s: %(message)s'
    )
    # Bind to localhost only for security: prevents network access, only the frontend on the same machine can connect
    uvicorn.run(app, host="127.0.0.1", port=5000)
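The `on_track` handler above downmixes interleaved stereo to mono by taking every second sample. A minimal sketch of that de-interleaving step with numpy:

```python
import numpy as np

# Interleaved 16-bit stereo as produced by frame.to_ndarray().reshape(-1):
# [L0, R0, L1, R1, ...]. Taking every second sample keeps the left channel.
samples = np.array([100, -100, 200, -200, 300, -300], dtype=np.int16)
mono = samples[::2]
print(mono.tolist())  # [100, 200, 300]
```

Slicing with a step of 2 is a view, not a copy, so this downmix costs no extra memory until `astype(np.int16)` is applied downstream.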
89
src/auracast/server/provision_domain_hostname.sh
Normal file
@@ -0,0 +1,89 @@
#!/bin/bash
# provision_domain_hostname.sh
# Safely change the system hostname and Avahi mDNS domain name, update /etc/hosts, restart Avahi,
# and generate a per-device certificate signed by the CA.
# Usage: sudo ./provision_domain_hostname.sh <new_hostname> <new_domain> [--force]

set -e

if [ "$EUID" -ne 0 ]; then
    echo "Please run as root."
    exit 1
fi

if [ $# -lt 2 ]; then
    echo "Usage: sudo $0 <new_hostname> <new_domain> [--force]"
    exit 1
fi

NEW_HOSTNAME="$1"
NEW_DOMAIN="$2"
FORCE=0
if [ "$3" == "--force" ]; then
    FORCE=1
fi

# Validate hostname: single label, no dots
if [[ "$NEW_HOSTNAME" == *.* ]]; then
    echo "ERROR: Hostname must not contain dots."
    exit 1
fi

# Set system hostname
hostnamectl set-hostname "$NEW_HOSTNAME"
echo "/etc/hostname set to $NEW_HOSTNAME."

# Update /etc/hosts
if grep -q '^127.0.1.1' /etc/hosts; then
    sed -i "s/^127.0.1.1.*/127.0.1.1 $NEW_HOSTNAME/" /etc/hosts
else
    echo "127.0.1.1 $NEW_HOSTNAME" >> /etc/hosts
fi
echo "/etc/hosts updated."

# Set Avahi domain name
AVAHI_CONF="/etc/avahi/avahi-daemon.conf"
sed -i "/^\[server\]/,/^\s*\[/{s/^\s*domain-name\s*=.*/domain-name=$NEW_DOMAIN/}" "$AVAHI_CONF"
echo "Set Avahi domain name to $NEW_DOMAIN."

# Restart Avahi
echo "Restarting avahi-daemon..."
systemctl restart avahi-daemon

echo "Done. Hostname: $NEW_HOSTNAME, Avahi domain: $NEW_DOMAIN"

# --- Per-device certificate logic ---
CA_DIR="$(dirname "$0")/certs/ca"
PER_DEVICE_DIR="$(dirname "$0")/certs/per_device/$NEW_HOSTNAME.$NEW_DOMAIN"
mkdir -p "$PER_DEVICE_DIR"
CA_CERT="$CA_DIR/ca_cert.pem"
CA_KEY="$CA_DIR/ca_key.pem"
DEVICE_CERT="$PER_DEVICE_DIR/device_cert.pem"
DEVICE_KEY="$PER_DEVICE_DIR/device_key.pem"
DEVICE_CSR="$PER_DEVICE_DIR/device.csr"
SAN_CNF="$PER_DEVICE_DIR/san.cnf"

if [ -f "$DEVICE_CERT" ] && [ $FORCE -eq 0 ]; then
    echo "Per-device certificate already exists at $DEVICE_CERT. Use --force to regenerate."
else
    echo "Generating per-device key/cert for $NEW_HOSTNAME.$NEW_DOMAIN..."
    openssl genrsa -out "$DEVICE_KEY" 4096
    cat > "$SAN_CNF" <<EOF
[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no

[req_distinguished_name]
CN = $NEW_HOSTNAME.$NEW_DOMAIN

[v3_req]
subjectAltName = @alt_names

[alt_names]
DNS.1 = $NEW_HOSTNAME.$NEW_DOMAIN
EOF
    openssl req -new -key "$DEVICE_KEY" -out "$DEVICE_CSR" -config "$SAN_CNF"
    openssl x509 -req -in "$DEVICE_CSR" -CA "$CA_CERT" -CAkey "$CA_KEY" -CAcreateserial -out "$DEVICE_CERT" -days 7300 -extensions v3_req -extfile "$SAN_CNF"
    echo "Per-device certificate generated at $DEVICE_CERT."
fi
2
src/auracast/server/start_frontend_http.sh
Normal file
@@ -0,0 +1,2 @@
# Start Streamlit HTTP server (port 8500)
poetry run streamlit run multicast_frontend.py --server.port 8500 --server.enableCORS false --server.enableXsrfProtection false --server.headless true --browser.gatherUsageStats false
37
src/auracast/server/start_frontend_https.sh
Executable file
@@ -0,0 +1,37 @@
#!/bin/bash
# To bind to port 443, root privileges are required. Start this like:
#   sudo -E PATH="$PATH" bash ./start_frontend_https.sh
# Startup script: selects the per-device certificate and starts the HTTPS Streamlit server

# Dynamically select the per-device cert and key based on hostname and Avahi domain
DEVICE_HOSTNAME=$(hostname)
AVAHI_CONF="/etc/avahi/avahi-daemon.conf"
AVAHI_DOMAIN=$(awk -F= '/^\s*domain-name\s*=/{gsub(/ /, "", $2); print $2}' "$AVAHI_CONF")
if [ -z "$AVAHI_DOMAIN" ]; then
    AVAHI_DOMAIN=local
fi
CERT_DIR="certs/per_device/${DEVICE_HOSTNAME}.${AVAHI_DOMAIN}"
CERT="$CERT_DIR/device_cert.pem"
KEY="$CERT_DIR/device_key.pem"
CA_CERT="certs/ca/ca_cert.pem"

if [ ! -f "$CERT" ] || [ ! -f "$KEY" ]; then
    echo "ERROR: Device certificate or key not found in $CERT_DIR. Run provision_domain_hostname.sh first."
    exit 1
fi

if [ ! -f "$CA_CERT" ]; then
    echo "WARNING: CA certificate not found at $CA_CERT. HTTPS will work, but clients may not be able to import the CA."
fi

echo "CA cert: $CA_CERT"
echo "Device cert: $CERT"
echo "Device key: $KEY"
echo "Using hostname: $DEVICE_HOSTNAME"
echo "Using Avahi domain: $AVAHI_DOMAIN"

# Path to the poetry binary
POETRY_BIN="/home/caster/.local/bin/poetry"

# Start Streamlit HTTPS server (port 443)
$POETRY_BIN run streamlit run multicast_frontend.py --server.port 443 --server.enableCORS false --server.enableXsrfProtection false --server.headless true --server.sslCertFile "$CERT" --server.sslKeyFile "$KEY" --browser.gatherUsageStats false
26
src/auracast/server/start_mdns.sh
Executable file
@@ -0,0 +1,26 @@
#!/bin/bash

# Script to advertise the local device via mDNS for an HTTPS service.
# This allows other clients on the network to discover this device
# using its mDNS hostname (e.g., your-hostname.local) on the specified port.

# Advertise the HTTPS service on port 443 (default)
SERVICE_NAME="Auracast HTTPS Service"  # You can customize this name
SERVICE_TYPE="_https._tcp"             # Standard type for HTTPS services
SERVICE_PORT="443"                     # Port must match your HTTPS server (default 443)

echo "Starting mDNS advertisement..."
echo "Command: avahi-publish-service -v \"$SERVICE_NAME\" \"$SERVICE_TYPE\" \"$SERVICE_PORT\""

avahi-publish-service -v "$SERVICE_NAME" "$SERVICE_TYPE" "$SERVICE_PORT"
EXIT_STATUS=$?

# This part is reached only if avahi-publish-service exits (it normally runs until interrupted).
if [ $EXIT_STATUS -eq 0 ]; then
    echo "mDNS advertisement command finished with status 0."
    echo "This might indicate an issue connecting to the avahi-daemon or a configuration problem."
    echo "Please check for any messages above from avahi-publish-service itself."
else
    echo "mDNS advertisement command exited with status $EXIT_STATUS."
    echo "This might be due to an error, or if you pressed Ctrl+C (which typically results in a non-zero status from signal termination)."
fi
BIN
src/auracast/testdata/wave_particle_5min_de.wav
vendored
Normal file
Binary file not shown.
BIN
src/auracast/testdata/wave_particle_5min_de_16kHz_mono.wav
vendored
Normal file
Binary file not shown.
BIN
src/auracast/testdata/wave_particle_5min_de_24kHz_mono.wav
vendored
Normal file
Binary file not shown.
BIN
src/auracast/testdata/wave_particle_5min_de_48kHz_mono.wav
vendored
Normal file
Binary file not shown.
BIN
src/auracast/testdata/wave_particle_5min_en.wav
vendored
Normal file
Binary file not shown.
BIN
src/auracast/testdata/wave_particle_5min_en_16kHz_mono.wav
vendored
Normal file
Binary file not shown.
BIN
src/auracast/testdata/wave_particle_5min_en_24kHz_mono.wav
vendored
Normal file
Binary file not shown.
BIN
src/auracast/testdata/wave_particle_5min_en_48kHz_mono.wav
vendored
Normal file
Binary file not shown.
BIN
src/auracast/testdata/wave_particle_5min_es.wav
vendored
Normal file
Binary file not shown.
BIN
src/auracast/testdata/wave_particle_5min_es_16kHz_mono.wav
vendored
Normal file
Binary file not shown.
BIN
src/auracast/testdata/wave_particle_5min_es_24kHz_mono.wav
vendored
Normal file
Binary file not shown.
BIN
src/auracast/testdata/wave_particle_5min_es_48kHz_mono.wav
vendored
Normal file
Binary file not shown.
BIN
src/auracast/testdata/wave_particle_5min_fr.wav
vendored
Normal file
Binary file not shown.
BIN
src/auracast/testdata/wave_particle_5min_fr_16kHz_mono.wav
vendored
Normal file
Binary file not shown.
BIN
src/auracast/testdata/wave_particle_5min_fr_24kHz_mono.wav
vendored
Normal file
Binary file not shown.
BIN
src/auracast/testdata/wave_particle_5min_fr_48kHz_mono.wav
vendored
Normal file
Binary file not shown.
BIN
src/auracast/testdata/wave_particle_5min_it.wav
vendored
Normal file
Binary file not shown.
BIN
src/auracast/testdata/wave_particle_5min_it_16kHz_mono.wav
vendored
Normal file
Binary file not shown.
BIN
src/auracast/testdata/wave_particle_5min_it_24kHz_mono.wav
vendored
Normal file
Binary file not shown.
BIN
src/auracast/testdata/wave_particle_5min_it_48kHz_mono.wav
vendored
Normal file
Binary file not shown.
BIN
src/auracast/testdata/wave_particle_5min_pl_16kHz_mono.wav
vendored
Normal file
Binary file not shown.
BIN
src/auracast/testdata/wave_particle_5min_pl_24kHz_mono.wav
vendored
Normal file
Binary file not shown.
BIN
src/auracast/testdata/wave_particle_5min_pl_48kHz_mono.wav
vendored
Normal file
Binary file not shown.
src/auracast/utils/network_audio_receiver.py (new file, 67 lines)
@@ -0,0 +1,67 @@
import asyncio
import socket
import logging
import numpy as np
from typing import AsyncGenerator


class NetworkAudioReceiverUncoded:
    """
    Receives PCM audio over UDP and provides an async generator interface for uncoded PCM frames.
    Combines network receiving and input logic for use with the Auracast streamer.
    """
    def __init__(self, port: int = 50007, samplerate: int = 16000, channels: int = 1, chunk_size: int = 1024):
        self.port = port
        self.samplerate = samplerate
        self.channels = channels
        self.chunk_size = chunk_size
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.sock.bind(('0.0.0.0', self.port))
        self.sock.setblocking(False)
        self._running = False
        # Reduce queue size for lower latency (less buffering). Was 20.
        self._queue = asyncio.Queue(maxsize=2)

    async def receive(self):
        self._running = True
        logging.info(f"NetworkAudioReceiver listening on UDP port {self.port}")
        try:
            while self._running:
                try:
                    # loop.sock_recvfrom requires Python 3.11+
                    data, _ = await asyncio.get_running_loop().sock_recvfrom(self.sock, self.chunk_size * 2)
                    await self._queue.put(data)
                except Exception:
                    await asyncio.sleep(0.01)
        finally:
            self.sock.close()
            logging.info("NetworkAudioReceiver stopped.")

    def stop(self):
        self._running = False

    async def open(self):
        # Dummy PCM format object
        class PCMFormat:
            channels = self.channels
            sample_type = 'int16'
            sample_rate = self.samplerate
        return PCMFormat()

    def rewind(self):
        pass  # Not supported for live network input

    async def frames(self, samples_per_frame: int) -> AsyncGenerator[np.ndarray, None]:
        bytes_per_frame = samples_per_frame * 2 * self.channels  # 2 bytes per int16 sample
        buf = bytearray()
        while True:
            data = await self._queue.get()
            # Optional: log queue size for latency debugging
            # logging.debug(f'NetworkAudioReceiver queue size: {self._queue.qsize()}')
            if data is None:
                break
            buf.extend(data)
            while len(buf) >= bytes_per_frame:
                frame = np.frombuffer(buf[:bytes_per_frame], dtype=np.int16).reshape(-1, self.channels)
                # Optional: log when a frame is yielded
                # logging.debug(f'Yielding frame of shape {frame.shape}')
                yield frame
                buf = buf[bytes_per_frame:]
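The frames() generator above decouples datagram size from frame size: incoming bytes accumulate in a buffer and are cut into fixed-length PCM frames. A stdlib-only sketch of that framing logic (the `reframe` helper and its names are illustrative, not part of the project):

```python
# Minimal sketch of the byte-buffer framing used by frames(): datagrams of
# arbitrary size are appended to a buffer and re-cut into fixed-size frames.
def reframe(datagrams, samples_per_frame, channels=1, bytes_per_sample=2):
    frame_bytes = samples_per_frame * bytes_per_sample * channels
    buf = bytearray()
    frames = []
    for data in datagrams:
        buf.extend(data)
        while len(buf) >= frame_bytes:
            frames.append(bytes(buf[:frame_bytes]))
            buf = buf[frame_bytes:]
    # complete frames, plus the leftover tail awaiting more data
    return frames, bytes(buf)

# Example: two 300-byte datagrams cut into 256-byte frames (128 int16 samples)
frames, tail = reframe([b"\x00" * 300, b"\x00" * 300], samples_per_frame=128)
```

Note that the tail (here 88 bytes) is carried forward, so no samples are dropped across datagram boundaries.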
src/auracast/utils/sounddevice_utils.py (new file, 141 lines)
@@ -0,0 +1,141 @@
import sounddevice as sd
import json
import subprocess


def devices_by_backend(backend_name: str):
    hostapis = sd.query_hostapis()  # list of host APIs
    # find the host API index by (case-insensitive) name match
    try:
        hostapi_idx = next(
            i for i, ha in enumerate(hostapis)
            if backend_name.lower() in ha['name'].lower()
        )
    except StopIteration:
        raise ValueError(f"No host API matching {backend_name!r}. "
                         f"Available: {[ha['name'] for ha in hostapis]}")
    # return (global_index, device_dict) pairs filtered by that host API
    return [(i, d) for i, d in enumerate(sd.query_devices())
            if d['hostapi'] == hostapi_idx]


def _pa_like_hostapi_index():
    for i, ha in enumerate(sd.query_hostapis()):
        if any(k in ha["name"] for k in ("PipeWire", "PulseAudio")):
            return i
    raise RuntimeError("PipeWire/PulseAudio host API not present in PortAudio.")


def _pw_dump():
    return json.loads(subprocess.check_output(["pw-dump"]))


def _sd_refresh():
    """Force PortAudio to re-enumerate devices on next query.

    sounddevice/PortAudio keeps a static device list after initialization.
    Terminating here ensures that subsequent sd.query_* calls re-initialize
    and see newly added devices (e.g., AES67 nodes created after start).
    """
    sd._terminate()  # private API, acceptable for runtime refresh
    sd._initialize()


def _sd_matches_from_names(pa_idx, names):
    names_l = {n.lower() for n in names if n}
    out = []
    for i, d in enumerate(sd.query_devices()):
        if d["hostapi"] != pa_idx or d["max_input_channels"] <= 0:
            continue
        dn = d["name"].lower()
        if any(n in dn for n in names_l):
            out.append((i, d))
    return out


def list_usb_pw_inputs():
    """
    Return [(device_index, device_dict), ...] for PipeWire input nodes
    backed by USB devices (excludes monitor sources).
    """
    # Refresh PortAudio so we see newly added nodes before mapping
    _sd_refresh()
    pa_idx = _pa_like_hostapi_index()
    pw = _pw_dump()

    # Map device.id -> device.bus ("usb"/"pci"/"platform"/"network"/...)
    device_bus = {}
    for obj in pw:
        if obj.get("type") == "PipeWire:Interface:Device":
            props = (obj.get("info") or {}).get("props") or {}
            device_bus[obj["id"]] = (props.get("device.bus") or "").lower()

    # Collect names/descriptions of USB input nodes
    usb_input_names = set()
    for obj in pw:
        if obj.get("type") != "PipeWire:Interface:Node":
            continue
        props = (obj.get("info") or {}).get("props") or {}
        media = (props.get("media.class") or "").lower()
        if "source" not in media and "stream/input" not in media:
            continue
        # skip monitor sources ("Monitor of ..." or *.monitor)
        nname = (props.get("node.name") or "").lower()
        ndesc = (props.get("node.description") or "").lower()
        if ".monitor" in nname or "monitor" in ndesc:
            continue
        bus = (props.get("device.bus") or device_bus.get(props.get("device.id")) or "").lower()
        if bus == "usb":
            usb_input_names.add(props.get("node.description") or props.get("node.name"))

    # Map to sounddevice devices on the PipeWire host API
    return _sd_matches_from_names(pa_idx, usb_input_names)


def list_network_pw_inputs():
    """
    Return [(device_index, device_dict), ...] for PipeWire input nodes that
    look like network/AES67/RTP sources (excludes monitor sources).
    """
    # Refresh PortAudio so we see newly added nodes before mapping
    _sd_refresh()
    pa_idx = _pa_like_hostapi_index()
    pw = _pw_dump()

    network_input_names = set()
    for obj in pw:
        if obj.get("type") != "PipeWire:Interface:Node":
            continue
        props = (obj.get("info") or {}).get("props") or {}
        media = (props.get("media.class") or "").lower()
        if "source" not in media and "stream/input" not in media:
            continue
        nname = (props.get("node.name") or "")
        ndesc = (props.get("node.description") or "")
        # skip monitor sources
        if ".monitor" in nname.lower() or "monitor" in ndesc.lower():
            continue

        # Heuristics for network/AES67/RTP
        text = (nname + " " + ndesc).lower()
        media_name = (props.get("media.name") or "").lower()
        node_group = (props.get("node.group") or "").lower()
        # Presence flags/keys that strongly indicate network RTP/AES67 sources
        node_network_flag = bool(props.get("node.network"))
        has_rtp_keys = any(k in props for k in (
            "rtp.session", "rtp.source.ip", "rtp.source.port", "rtp.fmtp", "rtp.rate"
        ))
        has_sess_keys = any(k in props for k in (
            "sess.name", "sess.media", "sess.latency.msec"
        ))
        is_network = (
            (props.get("device.bus") or "").lower() == "network" or
            node_network_flag or
            "rtp" in media_name or
            any(k in text for k in ("rtp", "sap", "aes67", "network", "raop", "airplay")) or
            has_rtp_keys or
            has_sess_keys or
            ("pipewire.ptp" in node_group)
        )
        if is_network:
            network_input_names.add(ndesc or nname)

    return _sd_matches_from_names(pa_idx, network_input_names)


# Example usage:
# for i, d in list_usb_pw_inputs():
#     print(f"USB IN {i}: {d['name']} in={d['max_input_channels']}")
# for i, d in list_network_pw_inputs():
#     print(f"NET IN {i}: {d['name']} in={d['max_input_channels']}")
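The monitor-source filter shared by both listing functions can be illustrated on hand-written pw-dump-style objects. The sample props below are assumptions modeled on typical PipeWire output, not captured from a real device:

```python
# Toy version of the "is this a real capture node?" check used above:
# must be a Node, must be a source/input, and must not be a monitor.
def is_capture_candidate(obj):
    if obj.get("type") != "PipeWire:Interface:Node":
        return False
    props = (obj.get("info") or {}).get("props") or {}
    media = (props.get("media.class") or "").lower()
    if "source" not in media and "stream/input" not in media:
        return False
    nname = (props.get("node.name") or "").lower()
    ndesc = (props.get("node.description") or "").lower()
    return ".monitor" not in nname and "monitor" not in ndesc

# Hypothetical nodes: a USB mic and the loopback monitor of an HDMI sink
mic = {"type": "PipeWire:Interface:Node",
       "info": {"props": {"media.class": "Audio/Source",
                          "node.name": "alsa_input.usb-mic"}}}
monitor = {"type": "PipeWire:Interface:Node",
           "info": {"props": {"media.class": "Audio/Source",
                              "node.name": "alsa_output.hdmi.monitor"}}}
```

Filtering on both `node.name` and `node.description` matters because some monitors only advertise "Monitor of ..." in the description.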
src/auracast/utils/webrtc_audio_input.py (new file, 42 lines)
@@ -0,0 +1,42 @@
import asyncio
import numpy as np
import logging


class WebRTCAudioInput:
    """
    Buffer PCM samples from WebRTC and provide an async generator interface for chunked frames.
    """
    def __init__(self):
        self.buffer = np.array([], dtype=np.int16)
        self.lock = asyncio.Lock()
        self.data_available = asyncio.Event()
        self.closed = False

    async def frames(self, frame_size: int):
        """
        Async generator yielding exactly frame_size samples as numpy arrays.
        """
        while not self.closed:
            async with self.lock:
                if len(self.buffer) >= frame_size:
                    chunk = self.buffer[:frame_size]
                    self.buffer = self.buffer[frame_size:]
                    logging.debug(f"WebRTCAudioInput: Yielding {frame_size} samples, buffer now has {len(self.buffer)} samples remaining.")
                    yield chunk
                    continue
                self.data_available.clear()
            # Wait outside the lock so put_samples() can acquire it
            await self.data_available.wait()

    async def put_samples(self, samples: np.ndarray):
        """
        Add new PCM samples (1D np.int16 array, mono) to the buffer.
        """
        async with self.lock:
            self.buffer = np.concatenate([self.buffer, samples])
            logging.debug(f"WebRTCAudioInput: Added {len(samples)} samples, buffer now has {len(self.buffer)} samples.")
            self.data_available.set()

    async def close(self):
        """Mark the input closed so frames() stops yielding."""
        self.closed = True
        self.data_available.set()
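The Event/Lock hand-off above can be sketched with plain lists standing in for np.int16 arrays; `SampleBuffer` is an illustrative name, and the wait happens outside the lock so the producer can make progress:

```python
import asyncio

class SampleBuffer:
    """Stdlib-only sketch of the WebRTCAudioInput hand-off."""
    def __init__(self):
        self.buffer = []
        self.lock = asyncio.Lock()
        self.data_available = asyncio.Event()
        self.closed = False

    async def put_samples(self, samples):
        async with self.lock:
            self.buffer.extend(samples)
            self.data_available.set()

    async def frames(self, frame_size):
        while not self.closed:
            async with self.lock:
                if len(self.buffer) >= frame_size:
                    chunk = self.buffer[:frame_size]
                    self.buffer = self.buffer[frame_size:]
                    yield chunk
                    continue
                self.data_available.clear()
            # released the lock before waiting, so the producer can run
            await self.data_available.wait()

async def demo():
    buf = SampleBuffer()
    await buf.put_samples(list(range(10)))
    gen = buf.frames(4)
    first = await gen.__anext__()
    second = await gen.__anext__()
    await gen.aclose()
    return [first, second]

chunks = asyncio.run(demo())
```

With ten buffered samples and frame_size 4, two full frames come out and two samples remain buffered until more arrive.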
src/scripts/list_pw_nodes.py (new file, 24 lines)
@@ -0,0 +1,24 @@
import pprint

import sounddevice as sd

from auracast.utils.sounddevice_utils import devices_by_backend, list_usb_pw_inputs, list_network_pw_inputs


print("PortAudio library:", sd._libname)
print("PortAudio version:", sd.get_portaudio_version())
print("\nHost APIs:")
pprint.pprint(sd.query_hostapis())
print("\nDevices:")
pprint.pprint(sd.query_devices())

# Example: only PulseAudio devices on Linux
print("\nOnly PulseAudio devices:")
for i, d in devices_by_backend("PulseAudio"):
    print(f"{i}: {d['name']} in={d['max_input_channels']} out={d['max_output_channels']}")

print("\nNetwork pw inputs:")
for i, d in list_network_pw_inputs():
    print(f"{i}: {d['name']} in={d['max_input_channels']}")

print("\nUSB pw inputs:")
for i, d in list_usb_pw_inputs():
    print(f"{i}: {d['name']} in={d['max_input_channels']}")
src/service/aes67/90-pipewire-aes67-ptp.rules (new file, 6 lines)
@@ -0,0 +1,6 @@
# This file is installed by the PipeWire project for pipewire-aes67.
#
# It grants read-only access to the PTP hardware clock.
# PipeWire uses this to follow PTP grandmaster time; the clock itself should be synced by another service.
#
KERNEL=="ptp[0-9]*", MODE="0644"
src/service/aes67/pipewire-aes67.conf (new file, 114 lines)
@@ -0,0 +1,114 @@
# AES67 config file for PipeWire version "1.2.7" #
#
# Copy and edit this file in /etc/pipewire for system-wide changes
# or in ~/.config/pipewire for local changes.
#
# It is also possible to place a file with an updated section in
# /etc/pipewire/pipewire-aes67.conf.d/ for system-wide changes or in
# ~/.config/pipewire/pipewire-aes67.conf.d/ for local changes.
#

context.properties = {
    ## Configure properties in the system.
    default.clock.rate          = 48000
    default.clock.allowed-rates = [ 48000 ]
    # Enforce a 3 ms quantum on this AES67 PipeWire instance
    clock.force-quantum         = 144
    default.clock.quantum       = 144
    #mem.warn-mlock  = false
    #mem.allow-mlock = true
    #mem.mlock-all   = false
    #log.level       = 2

    #default.clock.quantum-limit = 8192
}

context.spa-libs = {
    support.* = support/libspa-support
}

context.objects = [
    # An example clock reading from /dev/ptp0. You can also specify the network interface name;
    # pipewire will query the interface for the current active PHC index. Another option is to
    # sync the ptp clock to CLOCK_TAI and then set clock.id = tai; keep in mind that tai may
    # also be synced by an NTP client.
    # The precedence is: device, interface, id
    { factory = spa-node-factory
        args = {
            factory.name    = support.node.driver
            node.name       = PTP0-Driver
            node.group      = pipewire.ptp0
            # This driver should only be used for network nodes marked with this group
            priority.driver = 100000
            clock.name      = "clock.system.ptp0"
            ### Please select the PTP hardware clock here
            # Interface name is the preferred method of specifying the PHC
            clock.interface = "eth0"
            #clock.device   = "/dev/ptp0"
            #clock.id       = tai
            # Lower this in case of periodic out-of-sync
            resync.ms       = 1.5
            object.export   = true
        }
    }
]

context.modules = [
    { name = libpipewire-module-rt
        args = {
            nice.level    = -11
            #rt.prio      = 83
            #rt.time.soft = -1
            #rt.time.hard = -1
        }
        flags = [ ifexists nofail ]
    }
    { name = libpipewire-module-protocol-native }
    { name = libpipewire-module-client-node }
    { name = libpipewire-module-spa-node-factory }
    { name = libpipewire-module-adapter }
    { name = libpipewire-module-rtp-sap
        args = {
            ### Please select the interface here
            local.ifname = eth0
            sap.ip       = 239.255.255.255
            sap.port     = 9875
            net.ttl      = 32
            net.loop     = false
            # If you use another PTPv2 daemon supporting management
            # messages over a UNIX socket, specify its path here
            ptp.management-socket = "/var/run/ptp4lro"

            stream.rules = [
                {
                    matches = [
                        { rtp.session = "~.*" }
                    ]
                    actions = {
                        create-stream = {
                            node.virtual = false
                            media.class  = "Audio/Source"
                            device.api   = aes67
                            # You can adjust the latency buffering here. Use integer values only
                            sess.latency.msec = 6
                            node.latency      = "144/48000"
                            node.group        = pipewire.ptp0
                        }
                    }
                },
                {
                    matches = [
                        { sess.sap.announce = true }
                    ]
                    actions = {
                        announce-stream = {}
                    }
                }
            ]
        }
    },
]
src/service/aes67/ptp_aes67_1.conf (new file, 19 lines)
@@ -0,0 +1,19 @@
[global]
# Lower = more likely to become Grandmaster. Keep the same on both for "either can be master".
priority1               255
priority2               254
# Default domain
domainNumber            0
# AES67 profile: Sync messages every 125 ms
logSyncInterval         -3
# Announce messages every 2 s (AES67 default)
logAnnounceInterval     1
logMinDelayReqInterval  0
# QoS for event messages
dscp_event              46
# QoS for general messages
dscp_general            0
# Fast convergence on time jumps
step_threshold          1

tx_timestamp_timeout    20
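ptp4l's log*Interval settings are base-2 exponents of the message period in seconds, which is where the "125 ms" and "2 s" comments come from:

```python
# logSyncInterval / logAnnounceInterval give the period as 2**value seconds.
def interval_s(log_interval):
    return 2.0 ** log_interval

sync_period = interval_s(-3)     # logSyncInterval -3  -> 0.125 s (125 ms)
announce_period = interval_s(1)  # logAnnounceInterval 1 -> 2.0 s
```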
src/service/auracast-frontend.service (new file, 15 lines)
@@ -0,0 +1,15 @@
[Unit]
Description=Auracast Frontend HTTPS Server
# Ensure the backend is running (as a user service) before starting the frontend
After=auracast-server.service network.target
Wants=auracast-server.service

[Service]
Type=simple
WorkingDirectory=/home/caster/bumble-auracast/src/auracast/server
ExecStart=/home/caster/bumble-auracast/src/auracast/server/start_frontend_https.sh
Restart=on-failure
Environment=LOG_LEVEL=INFO

[Install]
WantedBy=multi-user.target
src/service/auracast-script.service (new file, 14 lines)
@@ -0,0 +1,14 @@
[Unit]
Description=Auracast Multicast Script
After=network.target

[Service]
Type=simple
WorkingDirectory=/home/caster/bumble-auracast
ExecStart=/home/caster/.local/bin/poetry run python src/auracast/multicast_script.py
Restart=on-failure
Environment=PYTHONUNBUFFERED=1
Environment=LOG_LEVEL=INFO

[Install]
WantedBy=default.target
src/service/auracast-server.service (new file, 14 lines)
@@ -0,0 +1,14 @@
[Unit]
Description=Auracast Backend Server
After=network.target

[Service]
Type=simple
WorkingDirectory=/home/caster/bumble-auracast
ExecStart=/home/caster/.local/bin/poetry run python src/auracast/server/multicast_server.py
Restart=on-failure
Environment=PYTHONUNBUFFERED=1
Environment=LOG_LEVEL=INFO

[Install]
WantedBy=default.target
src/service/pipewire-aes67.service (new file, 11 lines)
@@ -0,0 +1,11 @@
[Unit]
Description=PipeWire AES67 Service
After=network.target

[Service]
Type=simple
ExecStart=/usr/bin/pipewire-aes67 -c /home/caster/bumble-auracast/src/service/aes67/pipewire-aes67.conf
Restart=on-failure

[Install]
WantedBy=default.target
src/service/pipewire/99-lowlatency.conf (new file, 13 lines)
@@ -0,0 +1,13 @@
context.properties = {
    default.clock.rate          = 48000
    default.clock.allowed-rates = [ 48000 ]
    default.clock.quantum       = 144   # 144/48000 = 3.0 ms
    default.clock.min-quantum   = 32
    default.clock.max-quantum   = 256
}

stream.properties = {
    # Prefer to let specific nodes (e.g. AES67) or clients set node.latency.
    node.latency     = "144/48000"
    resample.quality = 0
}
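The quantum values in these PipeWire configs map to latency as quantum/rate seconds; a quick check of the numbers used here:

```python
# Latency of a PipeWire quantum at a given sample rate, in milliseconds.
def quantum_ms(quantum, rate=48000):
    return quantum / rate * 1000

three_ms = quantum_ms(144)   # default.clock.quantum = 144 -> 3.0 ms
min_ms = quantum_ms(32)      # min-quantum = 32 -> ~0.67 ms floor
max_ms = quantum_ms(256)     # max-quantum = 256 -> ~5.3 ms ceiling
```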
src/service/ptp_aes67.service (new file, 13 lines)
@@ -0,0 +1,13 @@
[Unit]
Description=PTP AES67 Service
After=network.target

[Service]
Type=simple
ExecStart=/usr/sbin/ptp4l -i eth0 -f /home/caster/bumble-auracast/src/service/aes67/ptp_aes67_1.conf
Restart=on-failure
StandardOutput=journal
StandardError=journal

[Install]
WantedBy=multi-user.target
src/service/status_auracast_services.sh (new executable file, 84 lines)
@@ -0,0 +1,84 @@
#!/bin/bash
set -euo pipefail

# Print status of relevant services and warn if mutually exclusive ones are running.
# Utilities (network audio):
#   - ptp_aes67.service (system)
#   - pipewire-aes67.service (user)
# App services (mutually exclusive groups):
#   - auracast-script.service (user)
#   - auracast-server.service (prefer user; also check system)
#   - auracast-frontend.service (system)

print_status() {
    local scope="$1"   # "system" or "user"
    local unit="$2"
    local act="unknown"
    local ena="unknown"
    if [[ "$scope" == "user" ]]; then
        act=$(systemctl --user is-active "$unit" 2>/dev/null || true)
        ena=$(systemctl --user is-enabled "$unit" 2>/dev/null || true)
    else
        act=$(systemctl is-active "$unit" 2>/dev/null || true)
        ena=$(systemctl is-enabled "$unit" 2>/dev/null || true)
    fi
    act="${act:-unknown}"
    ena="${ena:-unknown}"
    printf "  - %-24s [%s] active=%-10s enabled=%s\n" "$unit" "$scope" "$act" "$ena"
}

is_active() {
    local scope="$1" unit="$2"
    if [[ "$scope" == "user" ]]; then
        systemctl --user is-active "$unit" &>/dev/null && echo yes || echo no
    else
        systemctl is-active "$unit" &>/dev/null && echo yes || echo no
    fi
}

hr() { printf '\n%s\n' "----------------------------------------"; }

printf "AURACAST SERVICE STATUS\n"
hr

printf "Utilities (required for network audio)\n"
print_status system ptp_aes67.service
print_status user pipewire-aes67.service

PTP_ACTIVE=$(is_active system ptp_aes67.service)
PW_ACTIVE=$(is_active user pipewire-aes67.service)

if [[ "$PTP_ACTIVE" == "yes" && "$PW_ACTIVE" == "yes" ]]; then
    echo "  ✓ Utilities ready for AES67/network audio"
else
    echo "  ! Utilities not fully active (AES67/network audio may not work)"
fi

hr
printf "Application services (mutually exclusive)\n"
print_status user auracast-script.service
print_status user auracast-server.service
print_status system auracast-server.service
print_status system auracast-frontend.service

SCRIPT_ACTIVE=$(is_active user auracast-script.service)
SERVER_USER_ACTIVE=$(is_active user auracast-server.service)
SERVER_SYS_ACTIVE=$(is_active system auracast-server.service)
FRONT_ACTIVE=$(is_active system auracast-frontend.service)

# Consider the server active if either the user or system instance is active
if [[ "$SERVER_USER_ACTIVE" == "yes" || "$SERVER_SYS_ACTIVE" == "yes" ]]; then
    SERVER_ACTIVE=yes
else
    SERVER_ACTIVE=no
fi

if [[ "$SCRIPT_ACTIVE" == "yes" && ( "$SERVER_ACTIVE" == "yes" || "$FRONT_ACTIVE" == "yes" ) ]]; then
    echo "  ! WARNING: 'auracast-script' and 'server/frontend' are running together (mutually exclusive)."
fi

hr
printf "Hints\n"
echo "  - Follow logs (user):   journalctl --user -u auracast-script.service -f"
echo "  - Follow logs (server): journalctl --user -u auracast-server.service -f || journalctl -u auracast-server.service -f"
echo "  - Frontend logs:        journalctl -u auracast-frontend.service -f"
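The mutual-exclusion warning at the end of the script reduces to a small predicate over the active flags. A sketch in Python with invented states (the real script derives them from systemctl is-active):

```python
# Mirrors the shell logic: the server counts as active if either the user
# or the system instance runs, and the script conflicts with server/frontend.
def conflict(script_active, server_user_active, server_sys_active, front_active):
    server_active = server_user_active or server_sys_active
    return script_active and (server_active or front_active)

# script + system server running together -> should warn
both = conflict(True, False, True, False)
# only the script running -> no conflict
script_only = conflict(True, False, False, False)
```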
src/service/stop_aes67.sh (new file, 21 lines)
@@ -0,0 +1,21 @@
#!/bin/bash
set -e

# This script stops and disables the AES67 services
# Requires sudo privileges

# Stop services
sudo systemctl stop ptp_aes67.service
systemctl --user stop pipewire-aes67.service

# Disable services from starting on boot
sudo systemctl disable ptp_aes67.service
systemctl --user disable pipewire-aes67.service

echo -e "\n--- ptp_aes67.service status ---"
# `systemctl status` exits non-zero for inactive units; don't abort under set -e
sudo systemctl status ptp_aes67.service --no-pager || true

echo -e "\n--- pipewire-aes67.service status (user) ---"
systemctl --user status pipewire-aes67.service --no-pager || true

echo "AES67 services stopped, disabled, and status printed successfully."
src/service/stop_auracast_script.sh (new file, 15 lines)
@@ -0,0 +1,15 @@
#!/bin/bash
set -e

# This script stops and disables the auracast-script user service

# Stop service
systemctl --user stop auracast-script.service || true

# Disable service from starting on login
systemctl --user disable auracast-script.service || true

echo -e "\n--- auracast-script.service status (user) ---"
systemctl --user status auracast-script.service --no-pager || true

echo "auracast-script service stopped, disabled, and status printed successfully."
src/service/stop_server_and_frontend.sh (new file, 22 lines)
@@ -0,0 +1,22 @@
#!/bin/bash
set -e

# This script stops and disables the auracast-server and auracast-frontend services
# Requires sudo privileges

echo "Stopping auracast-server.service..."
systemctl --user stop auracast-server.service

echo "Disabling auracast-server.service (user)..."
systemctl --user disable auracast-server.service

echo "Stopping auracast-frontend.service ..."
sudo systemctl stop auracast-frontend.service

echo "Disabling auracast-frontend.service ..."
sudo systemctl disable auracast-frontend.service

echo -e "\n--- auracast-server.service status ---"
# `systemctl status` exits non-zero for inactive units; don't abort under set -e
systemctl --user status auracast-server.service --no-pager || true

echo -e "\n--- auracast-frontend.service status ---"
sudo systemctl status auracast-frontend.service --no-pager || true

echo "auracast-server and auracast-frontend services stopped, disabled, and status printed successfully."
src/service/update_and_run_auracast_script.sh (new file, 23 lines)
@@ -0,0 +1,23 @@
#!/bin/bash
set -e

# This script installs, enables, and restarts the auracast-script user service
# No sudo is required as it is a user service

# Copy user service file for auracast-script
mkdir -p /home/caster/.config/systemd/user
cp /home/caster/bumble-auracast/src/service/auracast-script.service /home/caster/.config/systemd/user/auracast-script.service

# Reload systemd to recognize new/updated services
systemctl --user daemon-reload

# Enable service to start on user login
systemctl --user enable auracast-script.service

# Restart service
systemctl --user restart auracast-script.service

echo -e "\n--- auracast-script.service status (user) ---"
systemctl --user status auracast-script.service --no-pager

echo "auracast-script service updated, enabled, restarted, and status printed successfully."
src/service/update_and_run_pw_aes67.sh (new file, 41 lines)
@@ -0,0 +1,41 @@
#!/bin/bash
set -e

# This script installs, enables, and restarts the AES67 services
# Requires sudo privileges

# Copy system service file for ptp_aes67
sudo cp /home/caster/bumble-auracast/src/service/ptp_aes67.service /etc/systemd/system/ptp_aes67.service

# Copy user service file for pipewire-aes67
mkdir -p /home/caster/.config/systemd/user
cp /home/caster/bumble-auracast/src/service/pipewire-aes67.service /home/caster/.config/systemd/user/pipewire-aes67.service

# Install PipeWire user config to persist 3ms@48kHz (default.clock.quantum=144)
mkdir -p /home/caster/.config/pipewire/pipewire.conf.d
cp /home/caster/bumble-auracast/src/service/pipewire/99-lowlatency.conf /home/caster/.config/pipewire/pipewire.conf.d/99-lowlatency.conf

# Reload systemd to recognize new/updated services
sudo systemctl daemon-reload
systemctl --user daemon-reload

# Enable services to start on boot
sudo systemctl enable ptp_aes67.service
systemctl --user enable pipewire-aes67.service

# Restart services
systemctl --user restart pipewire.service pipewire-pulse.service
sudo systemctl restart ptp_aes67.service
systemctl --user restart pipewire-aes67.service

echo -e "\n--- pipewire.service status (user) ---"
systemctl --user status pipewire.service --no-pager

echo -e "\n--- ptp_aes67.service status ---"
sudo systemctl status ptp_aes67.service --no-pager

echo -e "\n--- pipewire-aes67.service status (user) ---"
systemctl --user status pipewire-aes67.service --no-pager

echo "AES67 services updated, enabled, restarted, and status printed successfully."
src/service/update_and_run_server_and_frontend.sh (new file, 33 lines)
@@ -0,0 +1,33 @@
#!/bin/bash
set -e

# This script installs, enables, and restarts the auracast-server and auracast-frontend services
# Requires sudo privileges

# Copy system service file for frontend
sudo cp /home/caster/bumble-auracast/src/service/auracast-frontend.service /etc/systemd/system/auracast-frontend.service

# Copy user service file for backend (now using WantedBy=default.target)
mkdir -p /home/caster/.config/systemd/user
cp /home/caster/bumble-auracast/src/service/auracast-server.service /home/caster/.config/systemd/user/auracast-server.service

# Reload systemd for frontend
sudo systemctl daemon-reload
# Reload user systemd for server
systemctl --user daemon-reload

# Enable frontend to start on boot (system)
sudo systemctl enable auracast-frontend.service
# Enable server to start on boot (user)
systemctl --user enable auracast-server.service

# Restart both
sudo systemctl restart auracast-frontend.service
systemctl --user restart auracast-server.service

echo -e "\n--- auracast-frontend.service status ---"
sudo systemctl status auracast-frontend.service --no-pager

echo -e "\n--- auracast-server.service status ---"
systemctl --user status auracast-server.service --no-pager

echo "auracast-server and auracast-frontend services updated, enabled, restarted, and status printed successfully."
tests/test_audio_device_io.py (new file, 44 lines)
@@ -0,0 +1,44 @@
"""Utility to diagnose Bumble SoundDeviceAudioInput compatibility.

Run inside the project venv:

    python -m tests.test_audio_device_io [rate]

It enumerates all PortAudio input devices and tries to open each with Bumble's
create_audio_input using the URI pattern `device:<index>` with an explicit
input_format of `int16le,<rate>,1`.
"""
from __future__ import annotations

import asyncio
import sys

import sounddevice as sd  # type: ignore
from bumble.audio import io as audio_io  # type: ignore

RATE = int(sys.argv[1]) if len(sys.argv) > 1 else 48000


async def try_device(index: int, rate: int = 48000) -> None:
    input_uri = f"device:{index}"
    try:
        audio_input = await audio_io.create_audio_input(input_uri, f"int16le,{rate},1")
        fmt = await audio_input.open()
        print(f"\033[32m✔︎ {input_uri} -> {fmt.channels}ch @ {fmt.sample_rate}Hz\033[0m")
        if hasattr(audio_input, "aclose"):
            await audio_input.aclose()
    except Exception as exc:  # pylint: disable=broad-except
        print(f"\033[31m✗ {input_uri}: {exc}\033[0m")


async def main() -> None:
    print(f"Trying PortAudio input devices with rate {RATE} Hz\n")
    for idx, dev in enumerate(sd.query_devices()):
        if dev["max_input_channels"] > 0 and "(hw:" in dev["name"].lower():
            name = dev["name"]
            print(f"[{idx}] {name}")
            await try_device(idx, RATE)
            print()


if __name__ == "__main__":
    asyncio.run(main())