147: What I Got Wrong (And Right!) About AI in VetMed: A Software Engineer Sets Me Straight. With Rohan Relan

In episode 141, I shared my take on where AI fits into veterinary medicine - and where it might take us next.
But I’m no expert.
So I brought one in.
In this episode, I sit down with Rohan Relan, Silicon Valley software engineer and founder of ScribbleVet, an AI tool built for vets. We unpack the promise and pitfalls of AI in clinical practice and try to figure out how you can use it without losing sleep (or your license).
What we cover:
- Why AI is not just a “smarter Google” - and why that matters
- The risky shortcuts vets are already taking (and how to avoid them)
- How to increase trust and accuracy when using AI
- Where innovation collides with privacy, data, and consent
- DIY AI vs purpose-built tools
If you're curious about AI but not sure what’s hype and what’s helpful, this one’s for you.
Come find out what W(asabi)TF Wednesday is all about at our Japan Snow Conference
Check out our podcast for business owners, and Dr Sam’s ‘How To Make A Hell Of a Profit And Still Go To Heaven’ workshop.
Join our community of Vet Vault Nerds to lift your clinical game and get your groove back with our up-to-date, easy-to-consume clinical episodes at vvn.supercast.com.
Get help with your tricky cases in our Specialist Support Space.
Visit thevetvault.com for show notes and resources related to this episode.
Subscribe to our weekly newsletter here for Hubert's favourite clinical and non-clinical learnings from the week.
Topics and Timestamps
04:39 What I got wrong about AI
08:19 How models “know” things - next-word prediction and reasoning
15:00 Grounding AI responses using external tools / search
19:19 ScribbleVet features
38:12 Privacy, ownership, and AI liability
51:00 Rohan’s podcast and AI tool recommendations
54:00 Key misconceptions and caveats
Strategies to Enhance AI Reliability in Practice
- Provide trusted context (grounding): rather than depending on the model’s encyclopaedic “general knowledge”, supply it with domain-specific, vetted documents (textbook chapters, peer-reviewed articles, clinical notes). In retrieval-augmented systems, the model refers only to that subset of material when constructing its response. This tends to reduce hallucinations and increase answer reliability, though it does not eliminate the risk of error entirely.
- Verify the sources: if the AI cites web materials, always check their legitimacy (journal, authorship, date). Be aware that LLMs sometimes invent citations (so-called “hallucinated references”), so every reference needs checking before you rely on it. Also, the algorithmic ranking of search engines may prioritise visibility over scientific rigour.
- Follow the “Verification Rule”: a good rule of thumb is to use AI for tasks where doing the work yourself is time-consuming, but verifying it is easy. This lets you leverage the AI’s speed without compromising patient safety.
- Give the AI an “out”: give the model permission (or a requirement) to say “I don’t know”; many models are biased toward producing an answer even when it is ungrounded. You can build a fallback into the prompt or system logic, such as “If you cannot answer based on the provided documents, or you aren’t confident, explicitly say ‘I don’t know’.” More advanced systems may also use internal probability thresholds or confidence scoring to detect uncertain outputs and refuse to answer. A minimal sketch combining grounding with this fallback appears after this list.
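To make the grounding-plus-fallback pattern concrete, here is a minimal sketch using the OpenAI Python client as one example of a chat API. The model name, the clinical question, and the contents of retrieved_chunks are illustrative placeholders (not recommendations from the episode); any chat-style LLM API works the same way.

```python
# Minimal sketch: ground the model in vetted documents and give it an explicit
# "out". All document text, the question, and the model name are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

retrieved_chunks = [
    "Excerpt from a vetted dermatology textbook chapter...",
    "Excerpt from a peer-reviewed article on canine pyoderma...",
]

system_prompt = (
    "You are a veterinary clinical assistant. Answer ONLY from the provided "
    "documents. If the documents do not contain the answer, or you are not "
    "confident, reply exactly: \"I don't know based on the provided sources.\""
)

user_prompt = (
    "Question: What is a first-line approach to superficial pyoderma?\n\n"
    "Documents:\n" + "\n---\n".join(retrieved_chunks)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ],
    temperature=0,  # favour deterministic, conservative output
)
print(response.choices[0].message.content)
```

The design choice that matters is the system prompt: it restricts the model to the supplied documents and gives it a fixed, checkable refusal phrase, so ungrounded answers are easy to detect downstream.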
Reliable Use Cases in Veterinary Practice
- Medical note-taking (scribing) is often cited as a “sweet spot” use case. The vet dictates or records the consultation, and the AI generates a draft clinical note (e.g. a SOAP note). Because it is relatively easy for the vet to review and correct the draft immediately afterwards, this reduces the documentation burden without sacrificing clinical oversight, and the vet can focus more fully on the client and patient during the appointment. The AI should still be treated as a drafting assistant: review, correction, and oversight remain essential. Real-world constraints include audio quality, multiple speakers, domain-specific terminology, and integration with practice management systems (PIMS / EPR).
- Summarising patient histories is another strong use case. The AI can ingest extensive records (text notes, lab reports, imaging summaries, prior consults) and generate a concise, clinically usable summary. This is especially valuable in complex or chronic cases (e.g. dermatology, internal medicine) where reading the entire history is time-prohibitive. The veterinarian must still verify and cross-check, but the summary acts as a “map” to the relevant areas. Better systems include hyperlinks (or reference pointers) to specific sections of the source record so the vet can quickly jump back and validate claims; a sketch of this pointer pattern follows this list.
- Improving client communication: AI can generate materials that augment the vet’s work, such as turning discharge instructions into a one-page infographic with medication checklists and treatment timelines. Producing such materials was not previously feasible in day-to-day practice, and they help clients better adhere to treatment plans.
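To illustrate the “summary with pointers” idea from the history-summarising bullet above, here is a toy, runnable sketch. The summarise_chunk function is a stand-in for an LLM call (here it just truncates text), and the record entries are invented placeholders; the part being demonstrated is the pointer mechanism that ties each summary line back to the entry it came from.

```python
# Toy "map of the record": each summary line carries a pointer to its source
# entry so the vet can jump back and verify. summarise_chunk() stands in for
# an LLM call; the record contents are illustrative placeholders.

def summarise_chunk(text: str, max_len: int = 60) -> str:
    return text if len(text) <= max_len else text[: max_len - 3] + "..."

def summarise_history(record: list[dict]) -> str:
    lines = []
    for i, entry in enumerate(record):
        summary = summarise_chunk(entry["text"])
        # The [entry N] tag is the reference pointer back to the full note.
        lines.append(f"[entry {i + 1}, {entry['date']}] {summary}")
    return "\n".join(lines)

record = [
    {"date": "2023-04-02", "text": "Pruritus, dorsal distribution. Plan discussed with owner..."},
    {"date": "2023-06-15", "text": "Recheck: partial response to treatment. Cytology repeated..."},
]
print(summarise_history(record))
```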
The Difference Between AI and Google Search
Google Search: A Retrieval Tool
- A search engine’s primary job is to retrieve documents that contain your search terms or are otherwise relevant to your query.
- How it works: If you search for “UTI” in a veterinary textbook database, the engine returns a list of pages or documents where “UTI” appears. You must then open, read, and piece together the relevant insights yourself.
- Core limitation: A standard search does not itself synthesise information from multiple sources into a coherent answer - it points you to where the information exists, but does not generate new combinations or summaries.
AI (Large Language Models): A Synthesis and Generation Tool
- An LLM like ChatGPT is designed to synthesise and generate content, drawing implicitly on patterns learned from its training data rather than simply retrieving documents.
- How it works: The model is trained on vast text corpora (books, articles, websites) so that it internalises patterns of language and the relationships between concepts. When asked a question, it predicts the most plausible continuation, word by word, based on its internal representations (see the sketch after this list).
- Core capability: Because it draws on patterns across its training data, it can integrate multiple perspectives or facts into entirely new text — e.g. summarising different authors’ views, generating a differential list, or proposing a plan that references multiple sources.
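Here is a minimal, runnable illustration of that word-by-word (strictly, token-by-token) prediction loop, using the small open GPT-2 model via the Hugging Face transformers library. The model and the prompt are my choices for illustration; the episode does not name a specific model.

```python
# Next-token prediction made concrete: greedily extend a prompt one token at a
# time. GPT-2 is used purely as a small, open example model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The most common cause of urinary tract infections in dogs is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

for _ in range(10):
    with torch.no_grad():
        logits = model(input_ids).logits     # a score for every vocabulary token
    next_id = logits[0, -1].argmax()         # greedy: pick the single most likely token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Note what the loop does and does not do: it extends the prompt with the statistically most plausible continuation, with no lookup into any verified knowledge base. That is exactly why grounding and verification matter.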
How AI and Search Can Work Together
- Modern AI systems often embed search or retrieval modules to combine the best of both worlds.
- AI uses search as a tool: When equipped with web access or a document store, the AI will first retrieve relevant passages (from the “outside world”) and then synthesise them in context.
- AI synthesises for you: The result is akin to a custom, one-page answer built from multiple sources, ideally with transparent citations or links back to the original material. A toy version of this retrieve-then-synthesise loop is sketched below.
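Here is a toy, self-contained version of that two-step loop. The document store, the keyword-overlap scoring, and the source labels are deliberately simplistic placeholders (a real system would use a vector database for retrieval and an LLM call for synthesis); the structure, retrieve first and then hand only the retrieved, numbered passages to the model, is the point.

```python
# Toy retrieve-then-synthesise pipeline. DOCS, the scoring, and the source
# labels are illustrative placeholders, not real references.

DOCS = [
    {"source": "textbook excerpt (placeholder)",
     "text": "Escherichia coli is the most frequently isolated pathogen in canine urinary tract infections."},
    {"source": "review article (placeholder)",
     "text": "Urine culture and sensitivity testing is recommended for recurrent infections."},
    {"source": "practice notes (placeholder)",
     "text": "Clear client handouts improve adherence to treatment plans."},
]

def retrieve(query: str, top_k: int = 2) -> list[dict]:
    """Step 1 (retrieval): rank documents by naive keyword overlap with the query."""
    words = set(query.lower().split())
    ranked = sorted(DOCS, key=lambda d: -len(words & set(d["text"].lower().split())))
    return ranked[:top_k]

def build_synthesis_prompt(query: str) -> str:
    """Step 2 (synthesis): hand only the retrieved passages to the model,
    numbered so the generated answer can cite them and the reader can trace
    every claim back to its source."""
    context = "\n".join(
        f"[{i + 1}] ({d['source']}) {d['text']}" for i, d in enumerate(retrieve(query))
    )
    return (
        "Answer using only the numbered passages below, citing them as [n].\n\n"
        f"{context}\n\nQuestion: {query}"
    )

print(build_synthesis_prompt("recurrent urinary tract infections in dogs"))
```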
A Critical Caveat: Source Reliability and Internal Transparency
- Dependence on search ranking / retrieval quality: Because AI retrieval modules (or search APIs) surface top-ranked documents, the quality of the final output is constrained by how well those retrieval steps perform. If the “best” documents are not actually the most credible, the synthesis may inherit bias or errors.
- Opaque internal logic: The AI may not reliably explain why it prioritised certain sources. If you ask “why did you cite source X over Y?”, it may generate a plausible-sounding narrative (based on patterns it has seen) rather than an accurate account of its internal ranking mechanisms.
- Need for verification: Because the AI’s synthesis is only as good as the inputs (and its internal heuristics), it is essential for the user to review and verify the output. A high rank doesn’t guarantee scientific validity or clinical correctness.