2025-10-26 / 8 min / Research + Medical AI + LLMs
OCT Copilot Lab and medical AI caution
What an OCT scan LLM experiment taught me about boundaries, uncertainty, and interface language.
The OCT Copilot Lab was an experiment in using language models around retinal scan workflows. The technical idea was interesting, but the product framing mattered even more because medical-adjacent AI has a much lower tolerance for vague claims.
I had to think carefully about language. A model can summarize, compare, and assist with structured reasoning, but the interface must not imply diagnosis when the system has not been validated for that role. This is where product copy becomes a safety mechanism, not decoration.
The experiment made uncertainty feel concrete. In many projects, uncertainty is a UX problem. In medical contexts, uncertainty is also an ethical and operational problem. The right response is not to make the model sound more confident; it is to show what the system used, what it did not know, and where a human expert belongs.
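That pattern can be made concrete in a response structure. The sketch below is purely illustrative, not code from the OCT Copilot Lab; every name (`AssistantOutput`, `inputs_used`, `unknowns`, `needs_expert_review`) is an assumption I'm introducing to show the shape of the idea, the separation of what the system used, what it did not assess, and where the human expert belongs.

```python
from dataclasses import dataclass, field

@dataclass
class AssistantOutput:
    """A structured response that keeps uncertainty visible.

    All field names are illustrative, not from any real OCT system.
    """
    summary: str                       # what the model can say
    inputs_used: list[str]             # what the system actually used
    unknowns: list[str] = field(default_factory=list)  # what it did not assess
    needs_expert_review: bool = True   # where the human expert belongs

    def render(self) -> str:
        lines = [self.summary]
        lines.append("Based on: " + ", ".join(self.inputs_used))
        if self.unknowns:
            lines.append("Not assessed: " + ", ".join(self.unknowns))
        if self.needs_expert_review:
            lines.append("For review by a qualified clinician; not a diagnosis.")
        return "\n".join(lines)

out = AssistantOutput(
    summary="Retinal layer thickness appears consistent with the prior scan.",
    inputs_used=["OCT B-scan metadata", "prior scan comparison"],
    unknowns=["image quality score", "patient history"],
)
print(out.render())
```

The point of the structure is that conservative language is not bolted on afterward: the schema forces the interface to state its inputs and gaps, and the human-review line is on by default rather than opt-in.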
That lesson carries into my broader AI work. Strong AI products are not the ones that remove all boundaries. They are the ones that make boundaries legible enough for users to trust the system appropriately.
takeaways.
- Medical-adjacent interfaces need conservative language.
- Uncertainty should be visible and structured.
- AI assistance is strongest when the human role is explicit.