Opinion • Health IT Policy

Imagine a hospital’s privacy policy so precise that software can enforce it in real time, or a clinical guideline that an app can execute without ambiguity. In a data-driven world, if a policy cannot be computed, should it exist at all? This article argues yes – policies in health systems must be computable by design, or we risk having rules that are merely wishful thinking on paper. We’ll compare traditional policy-making with computable policies (with a handy chart), explore how Amazon’s automated reasoning turns abstract rules into enforceable code, and dive into how standards like FHIR make health policies machine-executable. The goal: show why computable policies aren’t just a tech fantasy, but an urgent necessity for modern healthcare.

Traditional vs. Computable Policies: What’s the Difference?

Policies govern everything in healthcare – from patient data sharing and privacy, to clinical workflows and insurance rules. Traditionally, these policies live in binders and PDFs, written in natural language by committees and lawyers. They’re human-readable guidelines that rely on training and trust for implementation. Computable policies, on the other hand, are defined in a format that software can understand and act on automatically. Let’s contrast the two approaches:

| | Traditional Policy-Making | Computable Policy-Making |
|---|---|---|
| **Format** | Narrative text (legal or procedural documents). | Machine-readable rules or code (e.g. logic statements, JSON/YAML). |
| **Clarity** | Often contains ambiguity or relies on human interpretation ([Enhancing narrative clinical guidance with computer-readable artifacts – PMC](https://pmc.ncbi.nlm.nih.gov/articles/PMC8524423/)). | Unambiguous and explicit – defined by formal syntax, leaving little room for interpretation. |
| **Enforcement** | Relies on people to interpret and follow (or manual audits for compliance). | Automatically enforced by software in real time (systems can compute compliance). |
| **Scalability** | Difficult to scale – humans must read, remember, and apply the rules, which is error-prone (Custom policy checks help democratize automated reasoning – Amazon Science). | Highly scalable – rules can be evaluated by computers across thousands of transactions consistently. |
| **Testing & Updates** | Hard to test outcomes; changes require retraining humans. | Easy to simulate and verify outcomes with software; update the code and all systems adhere immediately. |
| **Consistency** | Implementation may vary by individual or department, leading to inconsistency. | One canonical version of the rule ensures the same decision everywhere, every time. |

In short: Traditional policies are written for people, computable policies are written for people and machines. As an old saying in software goes, “the code is the single source of truth.” For policies, making them code means the intended truth and the applied truth finally match.
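To make the contrast concrete, here is a minimal sketch of what a computable policy looks like in practice: a privacy rule ("only members of a patient's care team may read that patient's record") expressed as code that software can evaluate directly, rather than prose in a binder. The rule, the registry, and all names (`care_team`, `Request`, the user and patient IDs) are hypothetical illustrations, not drawn from any real system.

```python
from dataclasses import dataclass

# Hypothetical registry mapping patient IDs to care-team user IDs.
care_team = {
    "patient-001": {"dr-lee", "nurse-kim"},
}

@dataclass
class Request:
    user_id: str
    patient_id: str
    action: str  # e.g. "read"

def is_permitted(req: Request) -> bool:
    """The policy as one unambiguous, testable rule: only the patient's
    care team may read the record."""
    return req.action == "read" and req.user_id in care_team.get(req.patient_id, set())

print(is_permitted(Request("dr-lee", "patient-001", "read")))   # care-team member
print(is_permitted(Request("dr-shaw", "patient-001", "read")))  # not on the team
```

Notice what writing the rule this way forces: you must decide exactly who counts as the care team, what "access" means, and what happens for unknown patients – all questions a narrative policy can leave unanswered.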

Why Policies Should Be Computable (or Why Have Them at All?)

Let’s address the bold claim: if a policy cannot be computed, it should not exist. This doesn’t mean we abolish every high-level principle, but it does mean any operational policy should be defined precisely enough that a computer could execute it. Why is this so important? Because an ambiguous rule gets applied differently by every person who reads it, compliance with it can’t be verified except by slow manual audit, and – perhaps most importantly – writing a rule for a machine forces its authors to confront the edge cases that narrative text quietly glosses over.

Bottom line: A policy that isn’t precisely defined and executable might as well be a suggestion. In high-stakes environments like healthcare, suggestions aren’t good enough. Either we make the rule clear enough for a computer to apply, or we probably haven’t thought it through well enough to rely on it. As harsh as “should not exist” sounds, it’s a push to say: don’t put policies on paper and assume the job is done – encode them so the intent becomes reality.

Real-World Example: AWS’s Automated Reasoning – Abstracting Rules into Code

This all sounds great in theory, but is anyone actually doing it? Yes, and one pioneer is Amazon. Outside of healthcare, Amazon Web Services (AWS) has heavily invested in “automated reasoning” to turn policies and configurations into mathematical models that machines can analyze. Why? Because their scale and stakes (think security) demand nothing less.

One example is AWS Identity and Access Management (IAM) policies – essentially rules that govern who can do what in a cloud environment. AWS developed an internal tool called Zelkova that takes an IAM policy (written in JSON) and converts it into a precise logical formula (Custom policy checks help democratize automated reasoning - Amazon Science). For instance, imagine a simple storage bucket policy that says “Allow anyone to add objects to Bucket XYZ.” Zelkova will represent that policy as something like:

```
(Action = "s3:PutObject") ∧ (Resource = "arn:aws:s3:::BucketXYZ")
```
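To see why a formula like this is useful, here is a toy sketch (emphatically not Zelkova itself, which reasons over the formula symbolically with an SMT solver) of the same predicate expressed as code and checked against sample requests. The requests and the `policy_allows` helper are illustrative assumptions.

```python
def policy_allows(action: str, resource: str) -> bool:
    """Computable form of the policy's logical formula:
    (Action = "s3:PutObject") AND (Resource = "arn:aws:s3:::BucketXYZ")."""
    return action == "s3:PutObject" and resource == "arn:aws:s3:::BucketXYZ"

# Sample requests: only the first satisfies both conjuncts of the formula.
requests = [
    ("s3:PutObject", "arn:aws:s3:::BucketXYZ"),    # matches the policy
    ("s3:GetObject", "arn:aws:s3:::BucketXYZ"),    # different action
    ("s3:PutObject", "arn:aws:s3:::OtherBucket"),  # different resource
]

for action, resource in requests:
    verdict = "ALLOW" if policy_allows(action, resource) else "DENY"
    print(f"{action} on {resource}: {verdict}")
```

The point of the formal representation is that tools can go beyond checking individual requests like this: given the formula, an automated reasoner can answer questions about *all possible* requests, such as "is there any request from an anonymous principal that this policy allows?"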