Contribute
Reproduce, Author, Extend
This page shows how to (1) reproduce results, (2) author new scenarios in JSON, (3) extend the engine with new ethical modules, and (4) export runs for review. Facts live in scenarios; values live in modules. Keep that boundary sharp.
1. Quickstart
Local
- Clone or fork the repo.
- Open `index.html` in a modern browser. No build step required.
- Open `results.html` (baseline) and `interactive.html` (live controls).
GitHub Pages
- Enable Pages in your repo settings (source: root or `/docs`).
- Visit your Pages URL and test `interactive.html`.
- Confirm that `data/scenarios.json` is publicly readable.
3. Module Extension Guide
Modules live in `js/engine.js`. Each should either (a) return a scalar choiceworthiness score for every action, or (b) gate actions via admissibility constraints.
3.1 Shape
```javascript
// Scalar module
function CW_newTheory(scn) {
  // return Array<[actionId, number]> or Map(actionId -> number)
}

// Constraint module (admissibility)
function gk_newConstraint(scn, actionId) {
  // return { admissible: boolean, reasons: string[] }
}
```
3.2 Example: Rawlsian maximin
```javascript
// Compute minimum expected well-being across persons
function rawlsRawScores(scn) {
  // You define how to derive per-person utility from scn (e.g., survival * years_left)
  // Return pairs: [[actionId, score], ...]
}
```
3.3 Wiring it in
- Compute raw scores per action.
- Normalize to [0,1] within the scenario (min–max).
- Add a credence weight in the aggregator (alongside consequentialism and virtue).
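The three wiring steps can be sketched as follows. The helper names (`minMaxNormalize`, `aggregate`) and the module names in the toy usage are illustrative; the real aggregator in `js/engine.js` may be structured differently.

```javascript
// Step 2: min–max normalize raw [actionId, score] pairs to [0,1]
// within a single scenario.
function minMaxNormalize(pairs) {
  const values = pairs.map(([, v]) => v);
  const lo = Math.min(...values);
  const hi = Math.max(...values);
  const span = hi - lo || 1; // avoid divide-by-zero when all scores are equal
  return new Map(pairs.map(([id, v]) => [id, (v - lo) / span]));
}

// Step 3: credence-weighted aggregation across modules (weights sum to 1).
function aggregate(credences, moduleScores) {
  const total = new Map();
  for (const [name, scores] of Object.entries(moduleScores)) {
    for (const [id, v] of scores) {
      total.set(id, (total.get(id) || 0) + credences[name] * v);
    }
  }
  return [...total.entries()].sort((a, b) => b[1] - a[1]); // best first
}

// Toy usage with two modules and illustrative credences:
const ranked = aggregate(
  { cons: 0.7, rawls: 0.3 },
  {
    cons: minMaxNormalize([["a1", 10], ["a2", 4]]),
    rawls: minMaxNormalize([["a1", 0.1], ["a2", 0.4]])
  }
);
console.log(ranked); // [["a1", 0.7], ["a2", 0.3]]
```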
4. Export & Reproducibility
Use Interactive → “Download current run (JSON)”. Each bundle captures:
- Timestamp
- Ledger version
- Settings (credences, promise rule, virtue weights)
- Scenario IDs
- Per-scenario ranking, admissibility notes, explanation set
```json
{
  "timestamp": "2025-09-05T12:34:56.789Z",
  "ledger_version": "0.1",
  "settings": { "credences": {"p_cons":0.5,"p_virtue":0.5}, "promise": {"enabled":true,"theta_lives":1.0} },
  "scenarios": ["triage-vent-v1", "evac-promise-v1"],
  "results": [
    { "scenarioId": "triage-vent-v1",
      "ranking": [["a1_allocate_A",0.92],["a3_lottery",0.61],...],
      "admissibility": [["a1_allocate_A",{"admissible":true,"reasons":[]}]],
      "explanations": ["E1: ..."] }
  ]
}
```
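Before circulating a run for review, it can help to sanity-check the bundle. This is a minimal sketch: the key names follow the example bundle above, not a formal schema.

```javascript
// Check that an exported bundle carries the fields listed above.
// Returns a list of problems; an empty array means the bundle looks well-formed.
function validateBundle(bundle) {
  const problems = [];
  for (const key of ["timestamp", "ledger_version", "settings", "scenarios", "results"]) {
    if (!(key in bundle)) problems.push(`missing top-level key: ${key}`);
  }
  for (const r of bundle.results || []) {
    if (!r.scenarioId) problems.push("result without scenarioId");
    if (!Array.isArray(r.ranking)) problems.push(`${r.scenarioId}: ranking is not an array`);
  }
  return problems;
}

const problems = validateBundle({
  timestamp: "2025-09-05T12:34:56.789Z",
  ledger_version: "0.1",
  settings: {},
  scenarios: ["triage-vent-v1"],
  results: [{ scenarioId: "triage-vent-v1", ranking: [] }]
});
console.log(problems); // [] -> bundle looks well-formed
```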
5. Axiom Ledger (Ground Rules)
Short, explicit premises that shape the engine’s behavior.
- Persons have equal moral worth.
- Comparable goods are explicitly scaled within each scenario.
- Uncertainty is modeled; guesses are distributions.
- Rights and duties act as side-constraints (lexicographic priority).
- Doing vs allowing matters but is not decisive alone.
- Consent changes scores and constraint triggers.
- Priority to the worse-off via concavity or priority weights.
- Like cases alike; only prognosis-relevant differences justify unequal treatment.
- Minimal explanations identify the few premises that did the work.
- Pluralism is explicit; red lines are not aggregative.
- Culture tunes parameters, not personhood.
- Reversibility check flags unstable rules.
6. FAQ
Why strict JSON and not YAML?
We want zero ambiguity and native browser parsing via `fetch` → `response.json()`.
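In practice, loading the scenario file looks like this; the path matches `data/scenarios.json` from the repo layout, and the error handling is an illustrative addition.

```javascript
// Load the scenario file in the browser — strict JSON parses natively.
async function loadScenarios() {
  const response = await fetch("data/scenarios.json");
  if (!response.ok) {
    throw new Error(`Failed to load scenarios: HTTP ${response.status}`);
  }
  return response.json(); // rejects if the file is not valid JSON
}
```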
Can I add culture-specific parameters?
Yes, under `context.culture`. Modules may read them to tune thresholds (e.g., consent).
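As a sketch, a constraint module might read a culture-tuned consent threshold like this. Only the `context.culture` location comes from this page; the field names (`consent_threshold`, the per-action `consent` map) are invented for illustration.

```javascript
// Hypothetical constraint module reading a culture-tuned consent threshold.
function gk_consent(scn, actionId) {
  const culture = (scn.context && scn.context.culture) || {};
  const threshold = culture.consent_threshold ?? 0.8; // default if untuned
  const consent = (scn.consent && scn.consent[actionId]) ?? 1.0;
  const admissible = consent >= threshold;
  return {
    admissible,
    reasons: admissible ? [] : [`consent ${consent} below threshold ${threshold}`]
  };
}

const verdict = gk_consent(
  { context: { culture: { consent_threshold: 0.9 } }, consent: { a1: 0.85 } },
  "a1"
);
console.log(verdict.admissible); // false — consent 0.85 is below threshold 0.9
```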
Where do I put new modules?
Add functions in `js/engine.js`, mirror the normalization, and update aggregation weights.
What counts as a deontic constraint vs a soft penalty?
If it’s a right/duty that should not be traded off, encode as admissibility (gate). Otherwise, use scalar modules.
© Ethics Testbed · MIT