Most "AI in healthcare" content lives in the future tense. Conferences, deck slides, vendor demos that end with "and once it's deployed in your environment..." The actual question on the floor of a DME or a small provider office is shorter: what can I build this week, with the tools I already have, that gives my team an hour back tomorrow?
The honest answer is more than you'd think. Below are three automations any provider or DME can stand up today using ChatGPT or Claude. None of them require a data scientist. None of them require a new platform. All of them are running, in some form, inside operations we work with right now.
A word on safety before the recipes. Don't paste real patient data into a personal ChatGPT or Claude account. That's a HIPAA breach, full stop. The version of this work that's defensible runs through a Business Associate Agreement — either Anthropic's API with a signed BAA, or Claude Code on a cloud you already have a BAA with. Until that's in place, build and test these prompts on synthetic or fully de-identified data only. We cover the 30-minute setup that makes this safe in our upcoming webinar on May 21.
With that said:
01 — Denial-letter triage
The problem. A biller opens a payer portal in the morning and sees forty denials. Most of them aren't real denials — they're CO-16s missing a modifier, CO-50s on a Dx that needs a different one, CO-97s where the rental window slipped. The biller knows this. The biller does not have time to look at forty of them before the day starts.
The automation. Feed the denial reason codes (CARC/RARC) and any free-text remarks into the model. Ask for a one-line classification, the most likely fix, and the priority order to work the queue.
You are an experienced DME billing analyst. For each denial below, return:
- Category: { fixable_resubmit | needs_documentation | appeal_required | true_writeoff | review_with_provider }
- Most likely fix (one sentence, specific)
- Priority: 1-5 (1 = work first, highest dollar / fastest fix)
Do not invent payer rules. If you're not sure, return "review_with_provider" and explain why.
Denials:
[paste denial table — claim ID, payer, CARC, RARC, remark, billed amount]
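In practice the denial table gets assembled from the billing system's export and sent through the BAA-covered API. A minimal sketch, assuming the Anthropic Python SDK; the field names (`claim_id`, `payer`, `carc`, `remark`, `billed`) are illustrative, not what your export necessarily calls them:

```python
def build_denial_table(rows: list[dict]) -> str:
    """Flatten denial rows into the pipe-delimited table the prompt expects.
    Field names here are placeholders -- map them to your billing export."""
    lines = ["claim_id | payer | carc | remark | billed"]
    for r in rows:
        lines.append(f"{r['claim_id']} | {r['payer']} | {r['carc']} | "
                     f"{r['remark']} | {r['billed']:.2f}")
    return "\n".join(lines)

rows = [
    {"claim_id": "A100", "payer": "Medicare", "carc": "CO-16",
     "remark": "missing modifier", "billed": 120.00},
    {"claim_id": "A101", "payer": "Aetna", "carc": "CO-50",
     "remark": "dx not covered", "billed": 910.00},
]
prompt = "Denials:\n" + build_denial_table(rows)

# The actual call, left commented because it needs a live key and a signed BAA:
# import anthropic
# client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
# reply = client.messages.create(
#     model="claude-sonnet-4-20250514", max_tokens=1024,
#     system="You are an experienced DME billing analyst...",
#     messages=[{"role": "user", "content": prompt}],
# )
```

The table builder runs on synthetic rows either way, so you can test the prompt shape before anything covered touches it.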
What you get back. A ranked queue. The CO-16 "missing modifier" denials get flagged as fixable_resubmit with the specific modifier suggested. The harder ones land in needs_documentation with a note on what to pull. The biller works the high-value, fast-fix items first.
The watch-out. The model is wrong sometimes — especially on payer-specific edge rules. Treat the output as a triage list, not a final answer. The biller still makes the call. The win is that they're not deciding the order from scratch every morning.
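If you ask the model to answer in JSON instead of prose, a few lines of glue turn the reply into the worked queue. A sketch with hypothetical field names (`claim`, `category`, `priority`, `billed`), using the category values from the prompt above:

```python
import json

def triage_queue(model_output: str) -> list[dict]:
    """Order the work queue from the model's JSON reply:
    priority 1 first, then highest dollars; write-offs dropped."""
    denials = json.loads(model_output)
    workable = [d for d in denials if d["category"] != "true_writeoff"]
    return sorted(workable, key=lambda d: (d["priority"], -d["billed"]))

sample = json.dumps([
    {"claim": "A100", "category": "fixable_resubmit", "priority": 1, "billed": 120.0},
    {"claim": "A101", "category": "needs_documentation", "priority": 3, "billed": 900.0},
    {"claim": "A102", "category": "fixable_resubmit", "priority": 1, "billed": 450.0},
    {"claim": "A103", "category": "true_writeoff", "priority": 5, "billed": 60.0},
])
print([d["claim"] for d in triage_queue(sample)])  # ['A102', 'A100', 'A101']
```

The sort order encodes the rule from the prompt: work first equals priority 1 and highest dollar, and the biller can still reorder anything by hand.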
Time saved. In a DME we work with, this turned a 45-minute morning sort into a 6-minute review. That's roughly 3 hours a week, 150 hours a year, on one person's desk.
02 — Fax intake routing
The problem. Inbound faxes are the single most chaotic source of work in a DME or provider office. Refill requests, new orders, denials, prior-auth approvals, signed CMNs, marketing junk — all in the same queue. Someone has to open each one, figure out what it is, and route it. That "someone" is usually the most experienced person in the office, doing a task a high-schooler could do if it weren't for the judgment call.
The automation. Run each fax (after OCR) through a classifier prompt. Return a structured ticket: type, patient identifier, urgency, what to do next.
Classify this fax. Return JSON only.
Schema:
{
"type": "new_order | refill | cmn | prior_auth_response | denial | medical_records | other",
"patient": { "name_or_id": "...", "dob_visible": true|false },
"urgency": "same_day | this_week | routine",
"next_step": "one short sentence describing the action",
"confidence": 0.0-1.0
}
If confidence < 0.7, set type to "other" and explain in next_step.
Fax content:
[paste OCR'd text]
What you get back. A structured ticket per fax. Drop the high-confidence buckets (refill, denial, CMN) directly into the right work queue. The low-confidence ones go to a human review pile that's now 80% smaller.
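The routing step itself is deterministic once the ticket comes back. A minimal sketch of that glue, with hypothetical queue names; anything malformed or below the confidence threshold falls into the human pile:

```python
import json

# Hypothetical queue names -- use whatever your work-queue system calls them.
QUEUES = {"refill": "refill_queue", "denial": "denials_queue", "cmn": "docs_queue"}

def route_fax(raw: str, threshold: float = 0.7) -> str:
    """Validate the classifier's JSON ticket and pick a destination.
    Malformed or low-confidence tickets fall back to human review."""
    try:
        ticket = json.loads(raw)
    except json.JSONDecodeError:
        return "human_review"
    if ticket.get("confidence", 0.0) < threshold:
        return "human_review"
    return QUEUES.get(ticket.get("type"), "human_review")

print(route_fax('{"type": "refill", "confidence": 0.93}'))     # refill_queue
print(route_fax('{"type": "new_order", "confidence": 0.55}'))  # human_review
```

Note the defensive default: only the types you've explicitly mapped get auto-routed, so new orders and anything unexpected still land in front of a person.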
The watch-out. OCR quality is the floor. If your fax-to-text pipeline is bad, the model can't help you. Spend an afternoon on OCR before you spend a week on the prompt.
Time saved. A classifier like this, sitting on top of an existing fax queue, typically gives back about one FTE-hour per 100 faxes received. At 300 faxes a day, that's the kind of math that pays for the project by day three.
03 — Medical necessity and prior-auth drafting
The problem. The medical necessity narrative is the longest-form artifact in the prior-auth packet. It takes 15-30 minutes per request, it's written by people whose time is worth more than that, and it's 80% the same paragraph every time — assembled around a different set of clinical facts.
The automation. Give the model the bullet-list facts and the payer's policy citation requirements. Ask for a draft narrative — not a final one — that the clinician can edit and sign.
You are drafting a medical necessity narrative for prior authorization.
Clinical facts (bullets):
- 67yo M, BMI 34, OSA confirmed by polysomnography 2026-02-14, AHI 31
- CPAP trial 30 days, compliance log shows 5.2 hr/night avg, residual AHI 6
- Documented pressure intolerance > 12 cm H2O
- Provider recommends BiPAP
Payer policy requirements to cite:
- AHI threshold met per CMS LCD L33800
- CPAP failure documented per Medicare DME MAC criteria
- Compliance period > 4 hr/night, 70% of nights — confirm or flag
Write a 2-paragraph narrative. Cite each clinical fact to a specific policy criterion. Flag any criterion not clearly met. Do NOT invent facts. End with: "Reviewed and signed by [provider] on [date]."
What you get back. A draft the clinician edits in 3 minutes instead of writing in 25. The structure forces citation per criterion — which is the exact pattern most denials hinge on.
The watch-out. Two rules, non-negotiable: a clinician signs every narrative before it goes out, and the model must not invent facts. A flagged criterion ("compliance period not clearly documented") is a feature — it tells the team to go pull the data before submission, not after the denial.
Time saved. Twenty minutes per submission. For a DME averaging 60 PAs a week, that's 20 hours a week, roughly two full FTE-weeks a month, back in the building.
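One piece of the prompt above is worth pre-checking deterministically: the compliance criterion (over 4 hours a night on at least 70% of nights), so the narrative never asserts compliance the usage log can't back up. A sketch with the thresholds as parameters; confirm the exact values against the payer's written policy before relying on it:

```python
def cpap_compliant(nightly_hours: list[float],
                   min_hours: float = 4.0,
                   min_fraction: float = 0.7) -> tuple[bool, float]:
    """Usage >= min_hours on at least min_fraction of nights in the window.
    Thresholds are parameters, not authoritative payer values."""
    if not nightly_hours:
        return False, 0.0
    good = sum(1 for h in nightly_hours if h >= min_hours)
    fraction = good / len(nightly_hours)
    return fraction >= min_fraction, round(fraction, 2)

# 30-night trial log: 24 nights over 4 h, 6 nights under
ok, frac = cpap_compliant([5.2] * 24 + [2.1] * 6)
print(ok, frac)  # True 0.8
```

Run the check before drafting; if it fails, the team goes and pulls the data, which is exactly the before-the-denial behavior the flagging rule is meant to produce.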
How to start safely
Three rules cover most of the risk in a small DME or provider office:
- No real patient data into a personal account. Synthetic data while you're prototyping. A BAA-covered setup before anything live.
- A clinician or biller signs every output. The model drafts and triages. The human still owns the decision.
- One workflow at a time. Don't try to automate three things at once. Pick the queue that wakes someone up at night, ship it, measure it, then move to the next.
None of these three automations require AI talent on staff. They require an afternoon, a careful prompt, and a sign-off pattern your team already has.
If you want the 30-minute walkthrough on running Claude Code under a BAA — and the printable checklist your compliance contact will recognize — we're doing it live on Thursday, May 21. Free, recorded, and the slides go out either way.