Dev-time TX kill-switch¶
DAPPS is pre-1.0 software. While it is, every running node checks a URL controlled by the project author every five minutes; the URL can ask nodes to pause transmissions. This page exists so you know what that means before you put a node on the air.
What is it¶
Every running DAPPS daemon polls the author-controlled kill-switch URL (hardcoded in the source) every five minutes. The response is a small JSON document.
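For illustration, a blocking response might look like the following. The `txAllowed` and `appliesTo` fields are described below; the `reason` field name and the values shown here are assumptions for the sake of the example.

```json
{
  "txAllowed": false,
  "appliesTo": ["*"],
  "reason": "Pausing TX while a beacon regression is fixed"
}
```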
When `txAllowed` is false and the local callsign matches one of the `appliesTo` patterns (or the list is `["*"]`), the daemon's bearer-level TX gate closes. While closed, no DAPPS-originated frame produces an on-air emission: forwards, floods, beacons, probes, polls, and ACKs are all blocked at the AGW frame, RHP open, and UDP send level. Inbound RX is unaffected; AX.25 disconnect and node-control admin frames continue to flow, so the BPQ/XR session stays usable and in-flight sessions tear down cleanly.
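The gate decision above can be sketched as follows. This is illustrative Python, not the project's C# code, and it assumes glob-style wildcard matching for the `appliesTo` patterns; the actual matching rules live in `TxKillSwitchPoller.cs`.

```python
from fnmatch import fnmatch

def remote_tx_allowed(state: dict, my_callsign: str) -> bool:
    # If the remote signal allows TX, the gate stays open regardless of patterns.
    if state.get("txAllowed", True):
        return True
    patterns = state.get("appliesTo", [])
    # ["*"] matches every callsign, so it blocks the whole fleet; otherwise
    # only nodes whose callsign matches a pattern are blocked.
    # (Glob-style semantics are an assumption for this sketch.)
    return not any(fnmatch(my_callsign.upper(), p.upper()) for p in patterns)
```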
The dashboard shows a red banner across every page when the gate is closed, with the reason text from the JSON.
Why it is here¶
DAPPS is early-stage software running on shared amateur radio bandwidth. A bug in a release could, in principle, cause a fleet of nodes to transmit more than they should. An operator may not be at the keyboard to catch a regression, especially overnight or during a working day. The kill-switch lets the author signal every running node to pause within a few minutes of spotting a problem, without coordinating with each operator individually.
This is a software-development safety net, not a regulatory mechanism, not a moderation tool. The aim is to keep the worst case ("I shipped a bug that hammers 144.950") bounded.
What's not configurable¶
- The polling cannot be disabled.
- The URL cannot be changed at runtime.
- The cadence, staleness window, and fail-open behaviour are fixed.
The values are constants in the source (TxKillSwitchPoller.cs) and the published binaries do not expose them as settings. This is deliberate: a configurable kill-switch wouldn't reliably reach every node, which is the whole point.
If you'd rather not run software with this in place, deferring until 1.0 is a reasonable call.
What you can do¶
- See the current state in the dashboard banner and at `GET /TxControl/status`.
- Continue to use the operator master TX-stop button independently. It is a separate signal: closing the local toggle pauses TX even when the remote signal allows it, and reopening the local toggle does not override a remote block.
- Monitor outbound HTTPS traffic to the kill-switch URL if you want to verify what's being polled. Nothing operator-identifying is sent: the request is a plain `GET` with no body and no auth.
- Read the `Services/TxKillSwitchPoller.cs` source. The whole mechanism is around two hundred lines.
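The relationship between the local TX-stop button and the remote signal is a plain AND of two independent inputs. A minimal sketch (illustrative Python, not the project's C# code):

```python
def tx_gate_open(local_tx_enabled: bool, remote_tx_allowed: bool) -> bool:
    # The on-air gate opens only when BOTH signals allow transmission:
    # the operator's master TX-stop button and the remote kill-switch state.
    # Either signal alone can close the gate; neither can force it open.
    return local_tx_enabled and remote_tx_allowed
```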
Failure modes¶
- URL unreachable at startup: the gate stays open. A new install with no internet does not silently refuse to TX.
- URL unreachable after a successful poll: the daemon keeps using the most recent successful state for thirty minutes (the staleness window). After that it falls back to allow.
- Malformed JSON: same as unreachable - the failure is logged at debug level and the previous state is kept.
The staleness window is short enough that a genuinely stuck poller won't keep trusting hours-old state, and long enough to ride out the kind of network blip that's common on a domestic connection. Fail-open is the conservative posture for an amateur radio installation: an operator with a working RF stack and a flaky internet connection is not made worse off by losing transmissions on top.
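The failure-mode rules above reduce to a small state machine. A sketch (illustrative Python, not the project's C# code; the constants are the values stated on this page):

```python
POLL_INTERVAL_S = 5 * 60      # polling cadence described on this page
STALENESS_WINDOW_S = 30 * 60  # staleness window described on this page

def effective_tx_allowed(last_state, last_ok_at, now):
    # Fail open: with no successful poll ever (new install, no internet),
    # the gate stays open and the node still transmits.
    if last_state is None:
        return True
    # Stale state: stop trusting it after thirty minutes and fall back to allow.
    if now - last_ok_at > STALENESS_WINDOW_S:
        return True
    # Within the window, keep using the most recent successful state.
    return bool(last_state.get("txAllowed", True))
```

An unreachable URL or malformed JSON simply means `last_state` and `last_ok_at` are not updated, so the previous state keeps being used until it goes stale.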
When it goes away¶
Before 1.0. Once the software is mature enough to be trusted to operators without the safety net, this whole subsystem is removed - not made configurable per-fleet, just deleted. The hardcoded URL becomes a dead endpoint at that point.
If the project pivots and a configurable per-fleet kill-switch becomes useful (a regional sysop wanting to gate their own nodes during a contest, for example), that's a separate feature with a separate design and a separate set of tradeoffs to argue through. It will not inherit the dev-time URL or the dev-time defaults.
What the network sees¶
A GET request to the URL above, every five minutes, from every running DAPPS node. No body, no headers beyond a User-Agent generated by the .NET HTTP stack, no cookies, no auth. The response is cached only in process memory.
If polling that URL is itself a problem in your environment (an isolated network, a regulatory concern about outbound traffic), waiting for 1.0 before deploying may be the right call. Fail-open keeps TX working when the URL is unreachable, but the request itself still happens on the polling cadence.
Source¶
- Poller: `src/dapps/dapps.core/Services/TxKillSwitchPoller.cs`
- Gate composition: `src/dapps/dapps.core/Services/SystemOptionsBackedTxGate.cs`
- Bearer-level enforcement: `src/dapps/dapps.client/Tx/IDappsTxGate.cs`