Sept. 28, 2025

147: What I Got Wrong (And Right!) About AI in VetMed: A Software Engineer Sets Me Straight. With Rohan Relan


In episode 141, I shared my take on where AI fits into veterinary medicine - and where it might take us next. 

But I’m no expert.

So I brought one in.

 

In this episode, I sit down with Rohan Relan, a Silicon Valley software engineer and founder of ScribbleVet, an AI tool built for vets. We unpack the promise and pitfalls of AI in clinical practice and try to figure out how you can use it without losing sleep (or your licence).

What we cover:

  • Why AI is not just a “smarter Google”, and why that matters
  • The risky shortcuts vets are already taking (and how to avoid them)
  • How to increase trust and accuracy when using AI
  • Where innovation collides with privacy, data, and consent
  • DIY AI vs purpose-built tools 

 

If you're curious about AI but not sure what’s hype and what’s helpful, this one’s for you.

 

Come find out what W(asabi)TF Wednesday is all about at our Japan Snow Conference.

Check out our podcast for business owners, and Dr Sam’s ‘How To Make A Hell Of a Profit And Still Go To Heaven’ workshop.

Join our community of Vet Vault Nerds to lift your clinical game and get your groove back with our up-to-date, easy-to-consume clinical episodes at vvn.supercast.com.

Get help with your tricky cases in our Specialist Support Space.

Visit thevetvault.com for show notes and resources related to this episode.

Subscribe to our weekly newsletter here for Hubert's favourite clinical and non-clinical learnings from the week.

 

Topics and Timestamps

04:39 What I got wrong about AI

08:19 How models “know” things: next-word prediction & reasoning

15:00 Grounding AI responses using external tools / search

19:19 ScribbleVet features

38:12 Privacy, ownership, and AI liability

51:00 Rohan’s podcast and AI tool recommendations

54:00 Key misconceptions & caveats

 

Strategies to Enhance AI Reliability in Practice

You can significantly increase the reliability of LLMs by using them in a structured, critical way:
  • Provide trusted context (grounding): rather than depending on the model’s encyclopaedic “general knowledge”, supply it with domain-specific, vetted documents (textbook chapters, peer-reviewed articles, clinical notes). In retrieval-augmented systems, the model refers only to that subset of material when constructing its response. This tends to reduce hallucinations and increase reliability, though it does not eliminate all risk of error.
  • Verify the sources: if the AI cites web materials, always check their legitimacy (journal, authorship, date). Be aware that LLMs sometimes invent citations (so-called “hallucinated references”), so checking every cited source is essential. Also, the algorithmic ranking of search engines may prioritise visibility over scientific rigour.
  • Follow the “Verification Rule”: a good rule of thumb is to use AI for tasks where doing the work yourself is time-consuming but verifying it is easy. This lets you leverage the AI’s speed without compromising patient safety.
  • Give the AI an “out”: give it permission (or a requirement) to say “I don’t know”; many models are biased toward producing an answer even when ungrounded. You can build into the prompt or system logic a fallback such as “If you cannot answer based on the provided documents, or aren’t confident, explicitly say ‘I don’t know’.” More advanced systems may also use internal probability thresholds or confidence scoring to detect uncertain outputs and refuse to answer. A minimal sketch of the grounding and fallback patterns follows this list.
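Here is that sketch, assuming the OpenAI Python client (pip install openai). The model name, excerpt text, and prompt wording are illustrative assumptions, not a vetted clinical setup; any output still needs the human verification described above.

```python
# A minimal sketch of grounding plus an explicit "I don't know" fallback.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

trusted_excerpt = """(Paste a vetted textbook passage or clinic protocol here,
e.g. a chapter section on canine urinary tract infections.)"""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    temperature=0,   # keep randomness low for factual tasks
    messages=[
        {
            "role": "system",
            "content": (
                "You are assisting a veterinarian. Answer ONLY from the "
                "provided excerpt, and cite the passage you used. If the "
                "excerpt does not contain the answer, or you are not "
                "confident, reply exactly: I don't know."
            ),
        },
        {
            "role": "user",
            "content": f"Excerpt:\n{trusted_excerpt}\n\n"
                       "Question: What is first-line therapy for an "
                       "uncomplicated canine UTI?",
        },
    ],
)
print(response.choices[0].message.content)
```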

Reliable Use Cases in Veterinary Practice

Based on the principles above, the sources identify several highly reliable applications for AI in a clinical setting:
  • Medical note-taking (scribing) is often cited as one of the “sweet spot” use cases. The vet dictates or records the consultation, and the AI generates a draft clinical note (e.g. SOAP note). Because it is relatively easy for the vet to review and correct the draft immediately afterwards, this reduces documentation burden without sacrificing clinical oversight. The vet can then focus more fully on the client and patient during the appointment. However, the AI should be treated as a drafting assistant: review, correction, and oversight remain essential. Audio quality, multiple speakers, domain terms, and system integration (PIMS / EPR) are real-world constraints.
  • Summarising patient histories is another strong use case. The AI can ingest extensive records (text notes, lab reports, imaging summaries, prior consults) and generate a concise, clinically usable summary. This is especially valuable in complex or chronic cases (e.g. dermatology, internal medicine) where reading the full history is time-prohibitive. The veterinarian must still verify and cross-check, but the summary acts as a “map” to the relevant areas. Better systems include hyperlinks (or reference pointers) to specific sections of the source record so the vet can quickly jump back and validate claims; a sketch of this workflow follows at the end of this section.
  • Improving Client Communication: AI can be used to generate materials that augment the vet's work, such as creating a one-page infographic with medication checklists and treatment timelines from discharge instructions. This was not previously feasible and helps clients better adhere to treatment plans.
Finally, it is important to remember that the field of AI is evolving at a "breakneck" pace. A limitation that makes a tool unreliable today may be solved in a few months, so it is worth periodically re-evaluating what AI can do for your practice.
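As promised above, here is a hedged sketch of the history-summarisation workflow, assuming the OpenAI Python client. The chunking approach, model name, and “[chunk N]” pointer format are illustrative assumptions rather than any specific product’s implementation; the output is a draft map of the record that the vet still verifies.

```python
# Summarise an exported history with pointers back to the source record.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def summarise_history(record_chunks: list[str]) -> str:
    # Number each chunk so the model can point back to its sources,
    # mimicking the reference pointers described above.
    numbered = "\n\n".join(
        f"[chunk {i}]\n{chunk}" for i, chunk in enumerate(record_chunks)
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        temperature=0,
        messages=[
            {"role": "system", "content": (
                "Summarise this patient's dermatology history for a "
                "veterinarian. After every claim, cite the [chunk N] it "
                "came from so it can be checked against the record."
            )},
            {"role": "user", "content": numbered},
        ],
    )
    return response.choices[0].message.content

# Usage: split an exported history into chunks, then request the summary.
chunks = ["2021-03: pruritic, both ears affected ...", "2022-07: Cytopoint trial ..."]
print(summarise_history(chunks))
```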

The Difference Between AI and Google Search

The fundamental difference between an AI like ChatGPT and a traditional Google search is retrieval versus synthesis. While modern tools often combine the two, their core roles remain distinct.

Google Search: A Retrieval Tool

  • A search engine’s primary job is to retrieve documents that contain your search terms or are otherwise relevant to your query.
  • How it works: If you search for “UTIs” in a veterinary textbook database, the engine returns a list of pages or documents where “UTI” appears. You must then open, read, and piece together the relevant insights yourself.
  • Core limitation: A standard search does not itself synthesize information from multiple sources into a coherent answer — it points you to where the information exists, but does not generate new combinations or summaries.
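A toy sketch makes the “retrieval, not synthesis” point concrete. The pages and their contents below are invented for illustration; a real search engine is vastly more sophisticated, but the contract is the same: it returns locations, not answers.

```python
# Toy retrieval: a keyword query returns *where* the term appears.
pages = {
    "Textbook A, p. 15": "Bacterial UTIs are common in older female dogs ...",
    "Textbook B, p. 30": "Urinalysis and culture are indicated for recurrent UTIs ...",
    "Textbook C, p. 300": "Feline lower urinary tract disease can mimic a UTI ...",
}

def keyword_search(term: str) -> list[str]:
    """Return the locations where the term appears; the reader still has
    to open each hit and piece the answer together themselves."""
    return [loc for loc, text in pages.items() if term.lower() in text.lower()]

print(keyword_search("UTI"))  # all three locations, but no synthesis
```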

AI (Large Language Models): A Synthesis and Generation Tool

  • An LLM like ChatGPT is designed to synthesise and generate content, drawing implicitly on patterns retrieved from its training data.
  • How it works: The model is trained on vast text corpora (books, articles, websites) so it internalises patterns of language and relationships between concepts. When asked a question, it predicts the most plausible continuation (word by word) based on its internal representations.
  • Core capability: Because it draws on patterns across its training data, it can integrate multiple perspectives or facts into entirely new text — e.g. summarising different authors’ views, generating a differential list, or proposing a plan that references multiple sources.
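A toy illustration of that next-word prediction step, with invented probabilities (a real model scores tens of thousands of candidate tokens at every position):

```python
# Toy next-word prediction: the model assigns a probability to every
# candidate continuation and the most plausible one wins. The numbers
# are made up; the encoded knowledge "capital of France = Paris" is what
# makes the right continuation the most probable one.
next_word_probs = {
    "Paris": 0.92,      # common pattern in the training data
    "Lyon": 0.05,
    "Budapest": 0.02,
    "elephant": 0.001,  # never quite zero: the source of occasional nonsense
}

prompt = "The capital of France is"
prediction = max(next_word_probs, key=next_word_probs.get)
print(f"{prompt} {prediction}")  # -> The capital of France is Paris
```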

How AI and Search Can Work Together

  • Modern AI systems often embed search or retrieval modules to combine the best of both worlds.
  • AI uses search as a tool: When equipped with web access or a document store, the AI will first retrieve relevant passages (from the “outside world”) and then synthesise them in context.
  • AI synthesises for you: The result is akin to a custom, one-page answer based on multiple sources — ideally with transparent citations or linkbacks to the original material.
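Putting the two together, here is a minimal retrieve-then-synthesise sketch, again assuming the OpenAI Python client; the toy pages, the keyword matching, the model name, and the prompts are all illustrative assumptions, not a production design.

```python
# Step 1 retrieves passages (what a search engine does);
# step 2 synthesises an answer with citations (what the LLM adds).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

pages = {
    "Textbook A, p. 15": "Bacterial UTIs are common in older female dogs ...",
    "Textbook B, p. 30": "Urinalysis and culture are indicated for recurrent UTIs ...",
}

def retrieve(term: str) -> str:
    hits = {loc: text for loc, text in pages.items() if term.lower() in text.lower()}
    return "\n\n".join(f"[{loc}]\n{text}" for loc, text in hits.items())

def answer(question: str, term: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": (
                "Answer from the excerpts only, and cite each source "
                "location in square brackets."
            )},
            {"role": "user", "content": f"{retrieve(term)}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer("What workup is indicated for recurrent UTIs in dogs?", "UTI"))
```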

A Critical Caveat: Source Reliability and Internal Transparency

  • Dependence on search ranking / retrieval quality: Because AI retrieval modules (or search APIs) surface top-ranked documents, the quality of the final output is constrained by how well those retrieval steps perform. If the “best” documents are not actually the most credible, the synthesis may inherit bias or errors.
  • Opaque internal logic: The AI may not reliably explain why it prioritises certain sources. If you ask “why did you cite source X over Y,” it may generate a plausible narrative (based on patterns it has seen), but not an accurate truth about its internal ranking mechanisms.
  • Need for verification: Because the AI’s synthesis is only as good as the inputs (and its internal heuristics), it is essential for the user to review and verify the output. A high rank doesn’t guarantee scientific validity or clinical correctness.

 

Let me just start this one by clarifying something. No, The Vet Vault is not going to become a podcast about new tech and AI in vet science. I do realise that I've made quite a few episodes on this topic now, but this is, after all, the podcast where we talk about the things that can make you a better vet and make your vet career better.
And at this moment in time, the thing that has massive potential to help you do just that is AI. As you can tell, I'm a little bit obsessed. Now, I've learned that having a podcast is an awesome tool if you want the opportunity to speak to smart people about the stuff they know about and that you want to learn about.
So I'm leveraging The Vet Vault so that we can learn together about AI and what it can mean for us in the vet profession, and I'm loving learning about it. I hope you do too. I'm Hubert Hiemstra, and you're listening to The Vet Vault, where we try to get past the hype and into the facts to help us make the most of this wonderful career of ours.
And in this episode, I'm speaking to Rohan Relan, a software engineer with his roots in Silicon Valley and a background in computer science from UC Berkeley. Rohan spent over a decade at the bleeding edge of tech, including building a startup that was acquired by Google, where he was first immersed in the world of AI.
But it was his dog Potato, a Mexican street rescue with a knack for racking up veterinary specialties, who led him into the vet world and to the creation of ScribbleVet, an AI-powered veterinary medical note-taking and documentation platform. I like to say that she was kind of put here to speed-run me through the specialties.
So we've done everything from primary care to surgery; she's gone through oncology, she's gone through rehab, internal medicine. She was going to the vet basically once a month for five years. Now, if you listened to episode 141, you would have heard some of my own thoughts on the topic and where I see us heading with AI in vet med.
But in the making of that episode, I realised that there were some fundamental gaps in my knowledge of AI. So I wanted some intelligent feedback on what I had said in the episode, and someone who could help us with some of my own unanswered questions. Enter Rohan. In this conversation, I asked him to tell me what I got wrong, what I misunderstood, and what I missed.
Rohan also helps me get clarity on my biggest AI-related question as it relates to my job as a vet: how much can we trust it? Like, how does it actually know stuff? And how can I use it to make me better at my job without putting my patients at risk? Big questions right now.
Just a quick note before we jump in for our Vets On Tour update: I'm still kind of buzzing from our Vets On Tour conference in August, which was the most fun that I've had in a long time, and definitely the most fun that I've ever had at a learning event. We are now all systems go for our next one, which is in Japan in February 2026.
Rooms are filling fast, and we've just published our academic program, which I have to brag about just a little bit, because I think we've built a really cool learning experience. We have three speakers, a surgeon, a radiologist and a criticalist, and instead of each of them doing their own little silo talks, we are getting them to combine their unique expertise and complement each other on shared topics.
If you're one of our clinical subscribers, you'll already know Doctor Bronwyn Fullega, our surgeon, and Doctor Rob Webster, a criticalist with a real gift for making even the most complicated topic really digestible. And I'm excited to meet and interview Doctor Rachel Pollard, our imaging magician.
Now, I was having fun building this program, and I may have gotten a bit carried away with the concept of theme days combined with Japanese food. You be the judge, but here's what we've got: Torsion and Tempura Tuesday. W(asabi)TF Wednesday, which is all about challenging and confusing cases.
F-ups and Fish Friday, so mistakes and complications. Seriously Injured Soba Saturday, all about those patients that make your adrenaline levels spike. And Secrets and Sake Sunday, where our guests share pro tips and secret insights from their own experience.
And then we're wrapping up with a quiz night, with sake, based purely on the content that we've learned during the week. That sounds like fun, and I promise it will be. You should come and join us in Nozawa Onsen in the snow from the 23rd of February to the 2nd of March 2026.
Details and tickets are at vetsontour.com. OK, let's get back into AI with Rohan Relan. So I'm a fairly heavy experimenter with all things AI, both in my clinical work and outside, as I said.
Did you listen to my monologue? I did. I did pick that up from your podcast, yeah. So I want to ask: what did I get wrong or misunderstand? Did you listen to me and go, oh no, no, no? No, there was maybe one thing, which I think you did qualify, but I wanted to emphasise one area where I personally sort of balk at using AI, just from having a sense of what the failure modes are.
So I think in that podcast you mentioned using it to check your work for dosages and conversions. So whether that's weight conversions or unit conversions or, yeah, just dosage conversions, things like that. And I think intuitively one might expect that, oh, it's a computer.
You know, computers are obviously great at calculating, so this thing should effectively be a great calculator. And it's nice to have a natural language calculator. But it turns out, and you can try this experiment if you go to ChatGPT and use a few different models, let's say GPT-4o, which is a little bit more of a flagship model, and then GPT-4.1 mini.
And you ask it a question like: what's larger, 9.9 or 9.11? It'll get it wrong half the time. You would think, you know, it's literally a greater-than sign, how can it mess that up? And it's because of the way these things are trained: they're not calculators. They can actually have the same sets of biases that a human might have.
So, you know, when I say 9.11 or 9.9, which is bigger? If you think version numbers, 9.11 actually is bigger than 9.9 as a version number. So maybe that's where that bias comes from. Or, you know, 11 is bigger than 9. But yeah, that is a failure mode to be careful of.
So I tend not to use it as a calculator, because it might work 99% of the time, and then there's something that confuses it, like maybe the decimal version numbers in that particular example. I think it can work really well, but it has these sorts of calculation failure modes.
I'm laughing because that explains something, a stuff-up I made recently. We're hosting a conference, and I was working out pricing for the tickets, and we wanted to do discounted tickets. I was doing all the planning in ChatGPT, and I said, OK, so what would a 30% discount mean?
And to my shame, I didn't check it. I just assumed, because it's such a simple ask, and I put it out there. Afterwards somebody said, that was a big discount! I was like, oh shit; now, now I understand. OK. So what I meant in my analysis of what I like it for is, as you said, unit conversions.
So let's say some things in the States will be listed as millimoles per litre, and then it will come through as grams per decilitre, some stupid conversion that was always a pain. So even those: if I say, OK, can you convert 10 of that unit to this unit,
is it going to stuff that up? So I think what I would be OK with is, like, if you say how many centimetres are there in an inch, right, I believe the answer is 2.54. That 2.54 will always be, I think, you know, 99.999% of the time it'll be right.
If you ask for a precise thing, like how many centimetres are there in 3.785 inches, I wouldn't necessarily trust it for that. Now, there's a good chance it'll get it right, I mean, at least to a certain approximation. And that's because, if you look at how these things are trained, they're not trained to have a circuit within themselves, necessarily, or at least not one that we can easily identify,
that is literally a calculator circuit. They're trained to predict what the next word is. You basically scrape all of the data off the internet and you just start predicting: starting from, you know, word one, what is the next word? And you get really good at predicting that.
And somehow, magically, that tends to encode a lot of knowledge into the system, including the knowledge that, you know, there are 2.54 centimetres in an inch. I hope I'm getting that conversion right, because I'm repeating it from memory. But you can sort of see, like, I'm trying to remember, and you're sort of trying to extract from its memory, potentially, what that conversion is.
And then maybe it gets trained to learn a circuit that does multiplication, but you don't really know how good that circuit is. What are the bounds of that circuit? Does it fail at a certain decimal? Does it fail at a certain size of number? And then again, does it have these biases built in that are a little confusing, like why can't it learn that
9.11 is less than 9.9? So yeah, I would say when you're trying to have it do math, it's, funnily enough, a computer that's really not great at math. The one exception to that is that a lot of these models now have this capability called tool use, where they can use external tools.
And effectively, you know, you can make one of those tools a programming language, right? You can say: use a programming environment to calculate this answer. And then it'll actually go and write some code that says, OK, is 9.9 greater than 9.11, true or false, and return the answer that way. Or: what is 3.75 * 2.24?
And you're basically letting it use a calculator the same way that a person would. And in that case, that answer can be reliable, right? There you can inspect the inputs, you can inspect exactly what calculation happened, and you can say, yep, that's the math that I wanted to do. And I can trust that the output is correct, because it literally went and used, effectively, a calculator.
So when you enable tool use, you get this capability of getting a more guaranteed result for the calculation. These models already understand how to use kind of a generic tool. So you can say, hey, here's a tool, it's called a calculator. I mean, it probably already knows what a calculator is, but we have a tool we're naming 'calculator'.
And as inputs it takes numbers and an operation, and it'll give you the output, and it can actually understand how to use that tool correctly. And so I think with ChatGPT, there's an option for tools at the bottom, and you can enable certain tools, like search as a tool.
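(For readers who want to see what this looks like in practice: below is a minimal sketch of the calculator tool being described, using the OpenAI Python client's function-calling interface. The tool schema and model name are illustrative assumptions, not ScribbleVet's implementation.)

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Describe a "calculator" tool the model is allowed to call.
tools = [{
    "type": "function",
    "function": {
        "name": "calculator",
        "description": "Evaluate a basic arithmetic expression or comparison.",
        "parameters": {
            "type": "object",
            "properties": {"expression": {"type": "string"}},
            "required": ["expression"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": "Which is larger, 9.9 or 9.11?"}],
    tools=tools,
)

# Instead of guessing from next-word statistics, the model can request a
# tool call with arguments we can inspect, e.g. {"expression": "9.9 > 9.11"};
# our own code then does the arithmetic a language model is bad at.
message = response.choices[0].message
if message.tool_calls:
    args = json.loads(message.tool_calls[0].function.arguments)
    print(eval(args["expression"], {"__builtins__": {}}))  # demo only: never eval untrusted input
```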
So if you want it to search for you, it knows how to use essentially a search API to go and get information, read that information, and then return it to you. Something I struggled with in making that episode, and something that I think a lot of us inherently misunderstand or fail to fully grasp, is: how does it know stuff?
How does ChatGPT or Claude or whoever know? Because you said that they just, and I have heard this before, they basically just predict what the next logical word is. And then you said that, magically, that makes them know stuff. Because for me, and specifically for listeners here:
well, I'm interested in using it as a clinical decision-making tool or a knowledge bank of some sort. I can't keep up with all the stuff that I want to be able to. I want to ask it about a case or a condition, and I want reliable answers. How does it work? How does it know stuff?
Yeah. So there is some magic there, in that I don't know that anyone has a good answer to this question yet. And there's an entire field of research which is all about trying to understand: how does a language model like ChatGPT know something, how does it learn something? But if we just allow for some magic in some of these places, we can get some hints as to how it happens.
So let's start with how they're trained, which really kind of answers the question of how they know, and then what the varying levels of knowledge are that a language model like ChatGPT has. So at the basic level, they have training where you take this network, you basically grab essentially all the data off the internet, and then, as you said, you just start to try to predict: OK, starting from word one, what's word two; starting from words one and two, what's word three; and so on.
So you're just constantly trying to predict the next word. Just imagine sticking all the words on the internet together as documents and then trying to predict the next word, one after another. So it predicts because, in all of that data it's been trained on, it goes: most commonly, the next word in this sentence would be this.
The next thought is this. The specific objective it's trained for is to predict a probability: take all the possible words it could be, and it's going to say, like, 1% probability that it's 'elephant', 5% probability that it's this.
And the objective it's trained for is really trying to put the highest probability on the word that actually did show up next. Now, let's say it gets really good at that, and this is kind of the magic part. How does it get really good at that? It's just trained to do that, and somehow it converges to being really good at it.
So if we just accept the magic of that moment: when I say 'the capital of France is', to be able to correctly predict the next word there, you have to have, somewhere, encoded the knowledge that the capital of France is Paris. So in order to get the probability of 'Paris' to be high enough, you really have to kind of know that the capital of France is Paris, right?
Or if I say, 'I went to France, and then I went to the capital, and I found...', blank, and you're going to, you know, fill in the name of the beautiful city, you really have to know that the capital is Paris. Otherwise you're going to do really badly at predicting the next word correctly. So that's kind of knowledge.
There's a similar step you could make for reasoning, which is: imagine you have a mystery novel, one of those Sherlock Holmes novels. And, you know, all of the information about who did it is really there, somewhere in the novel. And at the end, the person says, you know, 'and the perpetrator was...'.
And if you're going to predict that next word correctly, you kind of have to be able to look at all of the logical steps that a person might make to get to that answer, make those logical leaps yourself, and then get that answer right. So in some sense, this magic of predicting the next word gives you some semblance of knowledge and some semblance of reasoning and logic built into it. So that's where that knowledge comes from.
And a couple of things in there should have made you think, like, maybe that's not super reliable. It's a probability distribution, right? I'm predicting, let's say, an 80% or 90% chance that it's Paris, but there was a 0.1% chance it's 'elephant', right? How did that happen?
But on the flip side, isn't that kind of how human thinking works? It's also that I know that the capital of France is Paris because I've heard it so many times that my mind goes, yeah, I'm 99% sure; every time I've heard it, the two were associated.
The word is Paris, but I could randomly go, it's Budapest, because I just have a brain fart. Or: are you sure it's not Lyon? Like, you know. Yeah, exactly. So it kind of makes sense, in a sense.
That description of how it knows stuff doesn't particularly scare me. It just means it's not an encyclopedia or a reference thing. It is like a human; it just has access to vastly more information that it can draw on. It's read the whole internet, so it has more knowledge than a person would.
And I think, for me, that's a good mental model. That's why I talk about maybe human biases getting encoded in there as well, right? That 9.9 versus 9.11 is a similar sort of mistake that a human might make. Maybe it's seen that mistake on the internet a lot, so it does a similar thing.
We don't really know why it makes those kinds of mistakes when it does; we don't have a good way to interrogate it. But that is kind of a partial answer to the question, because that's the raw trained model, in some sense. You can then layer things on top of that to try and increase its factuality.
So there's a step you might do, and we can think about this as what you might do as a human, which is: let's say I ask you, OK, what's the capital of France? And, by the way, here is a world map, and it lists out all of the cities, all of the capitals, everything, somewhere in here.
And you're welcome to browse it. So now what I've done is given you the context: here's a world map, feel free to look at it, and then answer the question. And now your likelihood of answering the question goes way up, right? Because you have this in front of you; you can go and look at it. So you can do a similar thing with ChatGPT, right?
You could imagine saying, OK, I'm going to ask for differentials on this case. But before asking the question, I'm going to go and grab, let's say, all of the textbooks in the world. I'm going to go and grab all of the conference proceedings, all the peer-reviewed papers. I'm going to just dump them in and say: here's all this information; now answer this question.
You know, what are potential differentials for this problem? And it can probably do a much better job if it's able to reference all of that information before answering the question. And then you have this added benefit where you could say: don't just reference that information, please give me a citation. Like, please give me the paragraph that you think this is coming from.
Which book did this come from? And because it has it in the context, it might not just make it up, as, I think you mentioned in that podcast, you've seen it do, just making up a citation. It might actually go and look at it. And so you can build systems around this core construct of predicting the next word that help increase factuality.
You're still in a probabilistic environment, so you don't get 100% guarantees, but you can get closer and closer to better results. Quick break here to tell you about what's happening in the Vet Vault universe. So you know that I make clinical podcasts, and you know that we have a Specialist Support Space for when you need an actual human specialist to help you with a tricky case.
If you didn't know this, go to thevetvault.com; it's all on there. But you may not know yet that I'm also producing a podcast specifically for vet practice owners with Doctor Sam Bowden from the Accelerate Practice Academy. So if you're an owner, and you're sick of being a slave to your business, and you'd love to use your practice to, as Sam puts it, have fun, have an impact and make money, look for the Veterinary Business Accelerator in your podcast player of choice.
I promise you, you won't regret it. I'm actually really proud of what we're making there, and we are getting really good feedback from other clinic owners. Sam also told me this week about a workshop that he is hosting. It's a combination hybrid slash in-person event in New Zealand on the 9th of October, and it's called How To Make A Hell Of A Profit And Still Go To Heaven.
If that title makes you squirm a little bit, then you are probably exactly the person who should attend. I've put a link to the event in the show description wherever you're listening to this. OK, back to Rohan explaining how AI is different from a Google search. This was something else that I struggled with conceptually, and I slip into thinking that this is what it is: just a very smart Google search.
What's the difference? Because a Google search goes and searches the internet for me. But an LLM doesn't just go and search; it has built-in knowledge. Am I right? Like, what's the differentiation between a Google search and me going and asking ChatGPT something?
Yeah, so Google searching. Let's go back to that version of this and scope it down to something that's a little easier to conceptualise, because when I think of the whole internet, it becomes kind of tricky to conceptualise what we're talking about. So you have all the veterinary textbooks in front of you, and the Google search is saying, OK, I'm looking for UTIs.
Well, here are all the places where the word UTI appears. And, you know, it's on page 15 in this book, and page 30 in this book, and 300 in this book. And then you can go and read those pages and extract the information that you want out of them. Until recently, until Google started integrating Gemini into their Google searches, what you couldn't ask Google to do was say: hey, synthesise that information and give me a summary of all of the authors' thoughts on, you know, potential causes of UTIs, right?
So I think they're fundamentally different things. You actually can use search to inform the LLM. So, going back to that example of let's just put all of the information in there: why don't we do that for everything? Part of the problem is that right now ChatGPT just has a limit on how much text it can take in.
And if you've been using it for a while, which I think you have, you may remember that back in the day you used to really not be able to put much in there at all before it kind of forgot everything that you had said earlier. And they've extended that more and more, but it's still not big enough to put all of the internet in there. It's still not big enough to put even all of the veterinary textbooks in there.
So what you can do is say: let me go do a search, let me find all of the references to UTI, let me just grab those paragraphs. And then what a language model can do that a search can't do is read, browse, and then synthesise that information into something new, like a summary, or synthesise that information into a differential, or say, hey, you know, this one probably doesn't make sense, because we're not seeing these other causes either, based on the information that I have.
So search can often feed into a language model, but search is ultimately just retrieval; it's not synthesis. What language models can do is actually synthesis and generation. Which is interesting. OK, so when I use Perplexity, or the internet search button on ChatGPT or something, that's what it's doing.
Then it's going: OK, I'll take what you've asked me, I'll go do, let's call it a Google search, then go and read those things, and then say, OK, here's what I've got. Exactly. And then that becomes super powerful, right? Because it's both grounded in some source that you can potentially go and read to make sure that the synthesis actually made sense, and it's able to synthesise it into something that's more workable.
I sort of think of it almost as getting the ideal search result. It's like if Google always returned you one page, but it happened to be exactly the page that you were looking for, right? Whenever you asked a question: I have the following case, and these are the signs; what are potential differentials?
And it happens to be that there's a page on the internet that answers exactly that question. It's almost like that's what's happening, but that page is being synthesised, and then it's just referencing the other pages that it pulled in to answer that question. But exactly right: you're doing the search, and then you're synthesising based on that. So, I asked this question, I was trying to get it answered by somebody smart, and I talked about it on that podcast: when I do a search like that,
my concern is, how does it decide where to search? And I asked ChatGPT to tell me, and it gave me the perfect answer. It says, well, I prioritise universities, and so on; that's exactly what I want to hear. But again, I don't know: is it just telling me what I want to hear? Because
if I say, give me the protocol to treat diabetes, I don't want it to go and search some naturopath website or something like that. This was actually the other thing I wanted to point out from that conversation: these models are not actually good at interrogating themselves.
It's sort of like asking yourself: how do you drive? Or, when you catch a ball, how do you catch the ball? Are you calculating the derivative while the ball is in the air? And, you know, you don't know. I mean, you might be, but you don't really know. And so models actually don't really know what's going on inside their own heads.
Now, what they might be able to tell you is this: they may have trained on what people wrote about the models before. And so they may be reading, you know, the ChatGPT paper and then saying, OK, this is actually what happened. Or, because you can also prompt them:
ChatGPT itself has a prompt that you can't necessarily see, that OpenAI wrote. And it can regurgitate that prompt and tell you, OK, this is what I was told to do in that prompt. So in that sense it might be accurate, but it can't really interrogate itself. If you ask it how many parameters it has, it may not even know, which is kind of interesting.
But again, similar to humans: in terms of how it prioritises, at least part of the answer is going to be that it's going to let Google sort of do that prioritisation, right? Because what it, or whichever search engine it's using to back it, can't do is pull in, you know, all 6 billion pages that might correspond to your search result and read them all.
So the ranking is fundamentally going to be done by the search engine, to some degree. And then, it is interesting: there have been some experiments that people have done where they've shown that language models can, to some degree, identify good content versus bad content. That might look like content that is linked to more often and therefore has higher authority, or content that aligns more closely with other factual content that has been rated highly before.
So it starts to say, yeah, Wikipedia seems like a good source, but this other one just seems like spam. But fundamentally, it is a real problem that you don't know that it's actually reading good sources. And it's probably very much worth checking the sources that it lists, which is nice when it does the search, that you can see those, and asking, yeah, do I actually trust this?
Because it may be that a source just ranks highly on Google search but is really not a great source. OK, so that takes us to the other thing I discussed, which is making your own tools. So the examples I gave there: on the ChatGPT website, there are the custom GPTs, where I can give it a set of data and say, hey, for these sorts of questions, use this, so I can make my own.
I mentioned diabetes, and my own diabetes bot or assistant of some sort, where I can say: here are all my notes, and a couple of textbook chapters that I trust, and stuff I learned at a conference, and chuck it in there. Same thing with Google's NotebookLM; this is what you use for your data.
Are those reliable at doing just that? Is that what you were describing earlier: every time you ask it, it's going to skim through that, pull out the relevant bits, and give you an answer? Yeah. I will say you're probably more of a power user of those tools than I am, based on what I heard in that podcast. But it certainly increases the reliability, because it is going to reference that and have it in context. And you can sort of push the boundaries.
At the end of the day, currently, these systems are still probabilistic, right? If you ask it a question that is outside the scope of the documents you gave it, it might still answer it. It might not go and say, I can't find that information in there, so I'm not going to answer.
It might just go and answer the question, and then you're kind of off; you don't know if that's reliable or not, or where it came from, necessarily. So I would say it certainly increases the reliability. This is a very active area of research and development for these companies, so they're trying harder and harder to help with grounding and make sure that the answers are grounded in the information you give it.
But today there's no guarantee. And given the higher-risk environment that veterinarians work in, it's definitely worth cross-checking. There's also the way that they're trained; I tend to anthropomorphise these a little bit, but it makes them sort of eager to please. So one useful thing is to do the thing that you did, which is to give it an out.
Make sure that you tell it: if you don't see it, say 'I don't know'. Because otherwise it might just give you the answer anyway; it's so eager to please and answer the question you've given it that, if you haven't given it an out, it won't take one. But I think you touched on a really key point there in terms of how we think about how to use AI.
And I think a good general rule, as it stands today: the place it can be most helpful is where doing the work yourself is time-consuming, but verifying is easy, right? These places where it would take you a long time to go and look through all of your episodes and find the episode that had this answer, but if it came back and said, hey, here's the answer and this is the episode, it actually doesn't take you very long to jump into that episode and verify it.
Actually, yeah. So that combination is really good. Very cool. So, Rohan, when you guys reached out after listening to my episode, you mentioned you guys are building something, or developing something, or have something, in addition to just the medical record-keeping.
What are you working on, and what have you got? Yeah. So we've now built a few additional tools. For example, in addition to scribing, and incidentally, scribing is one of those great use cases where it takes a long time to write the note, but it's easy to verify, right? You can walk out of the room, you can look at the note, it's all fresh in your mind.
And you can be like, yeah, is that right or not? Right. So it hit the sweet spot of use cases, which is why I think it took off so quickly. We also now do document summaries. So, for example, you can upload hundreds of pages of PDFs and get a summary of all the medical records out.
So if you're getting a referral, you can just upload those and get a really nice summary out. And then again, you have to think about: how does that get verified? For us, the verification of that is with the client; maybe not always 100% the most reliable verifier, but certainly, you know, the best one that we can get if you don't want to spend hours and hours trawling through the records.
Quick interruption here. I touched base with Rohan this week, and he told me about a cool new feature that has come online since we recorded this episode. The history summariser now includes a reference link to where it got the information that it used in the summary. In other words, if it mentions a fact from the history, you can now click on a link that will take you to the relevant section of that history so you can double-check it.
Really nifty. The other type of thing that we're really interested in is helping improve client communication, so helping people go home with these artifacts. We automatically generate an email that they can take home. But the one that I thought you would find interesting is that we also generate an entire infographic.
So it's sort of like having a graphic designer in the clinic who looks at the discharge instructions and says: all right, how can I simplify this down into a one-page infographic that has checklists of daily medications and treatment timelines and a few different bullet points? And this is the kind of thing that I get excited about, because it just wasn't possible before.
I mean, you know, there was never a graphic designer in every clinic; this isn't taking someone's job or something like that. It's literally creating something that just wasn't possible. And now everyone can go home with discharge instructions that they can put on their fridge and check off the medication planner every day, things like that.
So, yeah, those are the types of things that we're really interested in doing, a lot of ideas like this. Again, exactly as you said, it's not about replacement; it's about augmenting and making things better. Because that example that you used, it's so interesting that you guys have worked in the functionality of going through and summarising patient records.
I recently stepped back into general practice from emergency practice. And I don't know if one of Potato's problems is a skin problem, but skin problems are a big thing in general practice. I'm suddenly back in general practice seeing all these skin cases, and most of them have a lifetime of skin history, and I'm brand new in the clinic, and I'm like, I need to catch up.
Potato's coming in and she's itchy again, but she's had five years of skin problems. So I'm sitting there for half an hour before my visit trying to scroll through and find the relevant history. So I've started exporting the full history, dropping it in as an attachment, and saying: can you look for the relevant skin history and summarise what we have done and where we're up to?
And then I'm going to ask questions and say: has anybody discussed immunotherapy? Has she had any adverse reactions to drug X, or something like that? And it's phenomenal. It's so cool. So is that what you're building into it? Yeah, so that's built into Scribble. And then, you know, for us, one of the guiding principles we have is that we try not to build things like the example you gave: hey, if you can just use ChatGPT to do it, we'll just let people use ChatGPT to do it.
We don't need to reinvent the wheel. And, funnily enough, I think that solution is eventually going to become part of, you know, Adobe Acrobat and your PDF reader and whatever else, right? AI is going to be everywhere. You're going to be looking at a PDF, and you can just type in, summarise this for me, and boom, it's going to pop out.
So we only build these things, at least as one of the guiding principles, when we think that ChatGPT is not specialised enough to do what we specifically want to do. I'll give you an example of that. The way the summariser works in Scribble is: you create the summary, and let's say you did it, you know, the night before your appointment.
The next day, when you start up Scribble and you start an appointment for that patient, you're going to see a summary of that summary, so that you can refresh yourself on all of the key points and, you know, be up to speed again. And then, when you record that appointment and scribe it, that's going to get added to those histories, so that the next time you see Potato, you know what you talked about last time, and you can say: hey, so we did the Cytopoint injection last time; did that work out?
Did she stop scratching? You know, things like that. So for us, it's about how you create that kind of integrated experience so that it's ambient; it's not active, it's in the background. That's the integration, because that's what I'm finding. And again, I'm one of those early adopters; I want this to work, so I will have four windows open on my computer and copy from there and drop it in there.
But I want it all just to be in one place, so that it all happens automatically and I don't have to leave the site. Are you guys integrated with some software, or not? Yeah, we have Chrome extensions that basically feel pretty close to native integrations with a lot of the cloud-based PIMS.
And then we are building out API-level integrations with some of the other ones. But the Chrome extensions are very popular. And, you know, it's early adopters that ultimately give us the start that we need to make sure that we actually have a product, and give us good feedback. And then we did see that once we released our mobile apps, once we released our website, our growth rate, you know, had a very distinct kink in it when those things came out. Because, yeah, for mass adoption of these tools...
You've got to remove the friction? Yeah. It can't be, yeah, just seven different tools kind of stitched together. It has to feel like an integrated experience. And I think the PIMS integration is really interesting, because it saves you the time of copy-pasting that you would otherwise have to do.
But one of the key things that we found was that when people first talk to us, they're really interested in the PIMS integration, because they're like: OK, I know I'm going to do the note, then I know I'm going to copy-paste; how do I eliminate the copy-paste step? But what ends up happening is, once you see what AI can do for you, then we start to get the flood of other things: hey, you saved me an hour and a half on my notes.
Can you also save me, in the document summary case, the 30 minutes that I'm going to spend reading these documents? Like, forget the seconds doing the copy-paste; save me the 30 minutes by generating the summary. So for us, it's been really exciting to see, once people get exposed to AI, how they start to imagine the possibilities of how it can solve problems for them.
And then we're one of the folks, at least, that they tell about that, which is kind of interesting. You've probably had a similar experience with your podcast, where people were, you know, testing out what you were talking about and then going: oh, what about this, and what about this, and what about this? Right. Yeah, exactly. I'm interested in, and this might be the DIY approach versus out-of-the-box solutions:
I'm a DIYer, I'm a cheapskate. I try and save money when I can. And there are things to be wary of, because in this space you're building tools using, I don't know, do you guys use OpenAI, or multiple LLMs, or whatever, versus me going: well, I'm going to make my own little tool within [choose your large language model of preference].
Is there a risk in that? Like, could I stuff it up? And again, I'm obviously not going to make anything that's going to make life-saving decisions or critical dosages or things like that, but administrative things that make my life easier. Yeah. I would say it's hard to answer that question generally, because, you know, it depends. So, you mentioned temperature, right?
What that immediately tells me is that you kind of know what you're doing when it comes to how much creativity to allow. But if temperature wasn't meaningful to you, and you were using the tool to, let's say, summarise a document, and your temperature was high, which is allowing creativity into the process,
and you don't realise that that's what it's doing, you know, you're injecting randomness into a process that really shouldn't have it. So it's hard to answer that question generally, because it does depend on your level of comfort with what all of these things are and what they do. And if you're pretty comfortable, then I don't see a problem.
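(A minimal illustration of the temperature setting being discussed, assuming the OpenAI Python client; the model name and prompt are placeholders.)

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
response = client.chat.completions.create(
    model="gpt-4o",   # placeholder model name
    temperature=0,    # factual/summarisation work: keep randomness low
    # a high temperature suits brainstorming, but quietly adds randomness to a summary
    messages=[{"role": "user", "content": "Summarise this record: ..."}],
)
print(response.choices[0].message.content)
```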
I think people should be doing that, because it gives them a really good intuitive sense of what works well and what doesn't, and where you can trust it and where you shouldn't. And then our job, of course, is to make something that is so convenient and easy to use that you would rather use that than patch things together yourself. But that said, there's no intrinsic benefit to ours, because we are fundamentally using the same foundational models.
The one place where that's not true is the transcription side, where we actually train our own models, because we found that the ones that are available publicly just are not good enough for specifically veterinary terms, for some reason. I think there's a lot of human medicine data out there for people to train on, but not as much veterinary data.
So we've seen that a lot of these models just don't do as well on veterinary terms as we would expect them to. So that's what we've focused in on as not working great in vet med. I can back that up, purely because the editing software that I use for the podcast, I think they use Whisper, which is OpenAI's transcription model.
And when I do my clinical podcasts, the transcripts are often quite comedic. It's quite funny reading how it interprets and transcribes some of the medical terms. Maybe we should give you... because our thing is actually a fine-tuned version of Whisper. So we have trained Whisper specifically on veterinary conversation.
So maybe we should try it: if you want to send us a recording, we can try transcribing it and see how it compares. That would be kind of an interesting experiment. I will. I'll definitely send you some audio files you can play with. A big concern I found, when I spoke to the group that I work for about integrating some of this stuff, specifically, again, note-taking stuff:
I was looking at different options, and one of the big concerns from management is always privacy. What about privacy? What about patient privacy? Answer that, and not just for the note-taking; you're also talking about summarising. Are we giving that patient record to the cloud?
You know, is that information going out there for all of AI to train on? How do we address privacy issues? So I can't necessarily speak generally to all the tools; I can speak to what we're doing, and then to what I know of what OpenAI and those folks do. So let me just start on the OpenAI front.
Typically, when you use their API, their API policy is: we do not train on the data that you send via the API. That would be true for OpenAI and for Anthropic. That's not true across the board, though; you may not get the same guarantee if you use the DeepSeek API or something like that. But at least for OpenAI and Anthropic, that's the case.
Anthropic is Claude, right? Yeah, exactly. And same with Google: they won't train on that data. They will, you know, store it for 30 days to check whether you're abusing it or doing something bad with it, but then they'll delete it. And if you're a larger customer like we are, then you can even get them to stop storing it for that 30 days.
When you use ChatGPT, you don't have that guarantee. I don't think they've changed this policy, I may be wrong, but I believe they will train on the data, or they reserve the right to train on that data, unless you specifically turn that control off. So that's just one thing to be aware of: you may want to go into your settings and turn that control off if you don't want that data to be used for training.
So that's kind of it for the public tools. With Scribble, we have these policies around what we will and won't do. At a high level, the transcription model is trained from user data, but we have very strict controls around how we do that. We say no more than a 30-second clip from any individual recording will be used to train the transcription model.
And if it contains any PII of any sort, we won't use it, right? So that's one way we limit what gets put in there. And it's a transcription model, so it doesn't really have the ability to regurgitate that data in the way a large language model might.
Then, of course, we are using the APIs. We have zero data retention, which is that higher level of constraint where they don't even store it for abuse purposes; we have that turned on as well. So they can't train on that data. And where it is sitting in the cloud, we delete the recordings after 90 days.
So we try to make sure that these privacy policies are respected, that GDPR and things like that are also respected, and to kind of minimise the use of it. I think a big part of the way to look at it is: what's the business model? We charge vets; veterinarians give us money, and that's it. That rounds out the business model.
So we don't have a whole lot of other stuff that we really want to do with the data, other than try to provide a good service. Yeah, I must admit I always struggle to understand the concern. I totally get it in human med, but I'm like: so what if the internet knows that a dog called Potato has had knee surgery?
Because usually, when I use the medical transcription, I'm not putting in the client's phone number and email address and name and surname. I suppose in the conversation they might say something private, but I don't know. This is exactly why we treat the recording as something that we want to delete after 90 days, but not necessarily the note, which we let stay in your account indefinitely.
You can ask us to delete it. It's because, in the recording, someone might talk about their personal health problems, and, you know, they often do. Or they might say their credit card number, right? Someone in the background might be saying a credit card number at the front desk. So that's why we treat the recording as extremely sensitive.
We'll delete that after 90 days, but allow the notes to stay there, because, as you said, patient health information can be of less sensitivity. But there are still cases that we worry about: people may just not want someone to know that they had to do an economic euthanasia, right? That they didn't have the funds; that might be sensitive.
So yeah.Fair.Enough.We still try to be pretty careful around that.OK.Any other pitfalls or things to watch out for?And I'm talking about general AI use in in veterinary practice because everybody's experimenting.We all, it's a wild race.We can try all sorts of things.
Anything else you come across that people misunderstand?Or.Should not do or be careful of at least.I think this one's more abstract, but at a general level, it's sort of, and this is really hard because, you know, I work in the industry, I spend all my time on it and it's still hard to keep up with the pace.
And I think in the abstract, I think one thing is to not assume that the problems of yesterday are still problems today.And to and to like hallucinations were a huge problem two years ago or a context length, how much you could put in there.You know, it was 4000 tokens would roughly correspond to 3000 words.
Now with Gemini, it's like a million tokens, 750,000 words.You can put a lot more.So I hesitate to tell people to like, try to keep up because it's moving so fast that it it really is.But, you know, dip your toes in every now and then and see if the problems still exist or, or just reevaluate what you think you might know about the space because everything I said today on this podcast could be completely out of date in in three months.
And I think that's the trickiest thing. People will say, oh, I didn't use it for this because it hallucinates search results on its own. And it's like, OK, yeah, but now you can actually ground it in the search results. You can use deep research and have it look at 300 search results and bring that information back to you. These tools have changed and evolved, and given how powerful they are, it's worth dipping your toes in on a regular basis to check whether they've actually solved the specific problem you had.
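Editor's aside: "grounding" a model in search results can be as simple as pasting the retrieved material into the prompt and instructing the model to answer only from it. A minimal sketch, with the retrieval step itself omitted:

```python
def grounded_prompt(question: str, snippets: list[str]) -> str:
    """Build a prompt that restricts the model to the supplied search
    results instead of its built-in "general knowledge".

    `snippets` would come from a real search or retrieval step (a web
    search API, a document index, etc.); that plumbing is omitted here.
    """
    sources = "\n\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    return (
        "Answer the question using only the numbered sources below, "
        "and cite the source numbers you relied on.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )
```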
OK. So that's probably the biggest misconception: that there's a staticness to it, when currently it's very, very dynamic. Yeah. For myself, to some extent it gives me FOMO to the point of panic sometimes, knowing there's so much I probably could be doing. But I try, on at least a monthly basis, ideally weekly, to go look at the OpenAI and Anthropic websites and so on, because they post their updates all the time.
You're not alone. Yeah, this is true. I was literally having a conversation two days ago where we came up with a plan, and then the next day I said, oh well, Google just released something, let's throw out the plan, we have to try this new thing and see if it just solved the problem.
So yeah, you're not alone. The pace is just breakneck; it's very, very hard to keep up to date. For us, it's a full-time job. I mean, this is why we're not trying to build a PIMS; there's just too much going on in the space for us to be focused on anything else, basically. That's a very good point you made:
keep checking back in on that problem you tried to solve six months ago, when AI couldn't do it, and try again. Take that example of my show notes assistant. As soon as AI became a thing, I wanted to build that; I was like, yeah, I want to do that for the Vet Vault.
And I tried, and I banged my head against it; I could not get it done. I'd seen a couple of other podcasts doing it and thought, how do they do that? I tried reaching out to developers, and it just seemed, not impossible, but beyond the scope of what I could do. So I did nothing for six months. Six months later it was a, well, not very easy, but an easy process, because suddenly something had been released.
I was like, that solves that problem, done, now I can do it. And you actually reminded me of another misconception, which is that because the tools are powerful, they must be hard to use. Actually, almost the inverse is true. The tools are really powerful, but a lot of the time it's just a chat interface, and that can get you really far. Or NotebookLM, which you talked a lot about: it's actually a relatively simple tool to use, and it can generate a podcast for you.
I mean, it's crazy what it can do. Don't talk about that, I'll lose my job! Don't tell people about that. But it's not a science experiment, right? It's not something you have to be an engineer, or know how to code, to use, even though it is very powerful. So I think that's the other big misconception I see: wow, this seems really powerful,
therefore it must be difficult. And I think it's actually the opposite. It's so powerful that it's wrapped back around to being easy, because it can understand natural language and it can understand your intent, even if you're not necessarily the best at describing it. So that's another thing that's really exciting about AI as a tool.
OK, all right. Outside of Scribble and vet life, what are your favourite AI tools for everyday life? Have you got any that you're loving at the moment? I really do like NotebookLM, and the podcast feature that must not be named, quite a bit, because I can use it while walking Potato to learn about something I'm interested in.
Like, you know, there's a new research paper out, and I can just dump it in there and get a very enthusiastic conversation on the topic directly in my ear. I don't know if you've noticed this, but it's very, very enthusiastic when the LLM talks. One area I'm excited about, though I don't think it's quite there yet, is conversational AIs that really sound conversational, and the possibilities there.
About a month ago, a model called Sesame came out, and it just spoke like a person. It's really good at intonation; it sounds like you're talking to a person. And I had a 30-minute conversation with it.
I was just riffing on quantum mechanics, having it explain to me how quantum mechanics works. And it was just more interesting to listen to it talk than to the slightly robotic voices you still get out of ChatGPT.
So I'm excited about these things as learning tools, and I tend to use them a lot that way, because I'm just very curious. You know, a video comes out on YouTube that sparks some interest, and now I want to have a conversation to understand it more. It's a little bit like having a really smart scientist at my disposal to ask these questions of.
So I get excited by that. Let me think if there are other ones. Automated transcription I tend to use quite a bit, surprisingly. Things like watching a YouTube video where it was hard to understand exactly what was said in that moment: I can turn on the transcript and have the automatic subtitling give me the answer.
The built-in one within YouTube? Yeah, exactly. At least it gets me close. It's not 100%, but it gets me a little bit closer to what the person was saying. And then I've used image generation to produce stickers; that's been another thing. It's become really trivial now with ChatGPT.
I just really like producing fun imagery, and producing fun videos out of it, that can bring a little bit of delight into someone's day. You know, you can send someone a picture of their dog eating a pizza or something like that, and it's more fun. Or make little icons for a Slack emoji that are a little more fun than what we might normally have.
All right, let's wrap up. Is there anything I'm missing, anything you feel we haven't covered? Well, before I forget to ask: anywhere you want people to check out your stuff? Where do you want to send people? Yeah, we're at scribblevet.com, and if you just search for Scribble Vet, we should come up. And I definitely recommend it if you're spending, you know, more than 30-40 minutes on your notes every day; we'll cut that out.
And even if you are pretty fast, hyper-efficient, I think one of the really neat things is that it's not just about saving time. It's also about being able to direct your attention. Instead of directing your attention back and forth between your clipboard and the person, or the patient, or your computer and the pet, you can be focused entirely in the room and rest assured that the rest is happening in the background.
I'm going to be agnostic about which one I use, but I can say this: if you're not using something to do your notes for you, you really are missing out. It is a game changer; I would struggle without it now. If nothing else, because what happens in a busy clinic?
When I was still doing notes, I'd have all the best intentions for my consult, because I don't type while I do my consult. For that reason, I wanted to look at the client and look at the dog; I don't want to stand behind my computer and type while they speak. So I would try, at the end of the consult, to at least make a very brief summary of the key findings in the history, but it didn't always happen.
So then when I sit down for my lunch break, I'm like, let me do my notes. OK, so Potato came in with an ear problem. Oh shit, was it a left or a right ear? I can't remember. Versus now, if nothing else, at least I have all my information saved for me without me having to go and do anything.
Yeah. And did you remember that you suggested adding fish oil to the diet, and putting that in the discharge instructions, and things like that? One thing I should say before we wrap up is that everything I've been talking about has been one form of AI, which is the large language model, ChatGPT, generative AI world.
There's a whole other set of AI tools that operate differently. So this discussion is really scoped to the large language model set, but there are things around radiology that have a very different mode of operation, more classification-based or segmentation-based, pulling information out.
And that has a different set of constraints, a different set of issues, and a different set of things it doesn't have a problem with. So just keep in mind that AI is a broad term, and what we've been talking about is specifically large language models, ChatGPT-style AI. Thank you for clarifying.
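Editor's aside: to make the distinction concrete, a classification-style model scores an input against a fixed label set rather than generating free text, so its failure modes look different (a miscalibrated score rather than an invented sentence). A sketch using a stock ImageNet classifier purely as a stand-in; real veterinary radiology models are trained on domain-specific images and labels.

```python
import torch
from torchvision import models
from PIL import Image

# Load a generic pretrained classifier (a stand-in, NOT a radiology model).
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()  # the preprocessing these weights expect

def classify(path: str) -> tuple[str, float]:
    """Return the top label and its probability for one image.

    Unlike an LLM, the output is constrained to the model's fixed label
    set; it cannot produce an answer outside those categories.
    """
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)[0]
    conf, idx = probs.max(dim=0)
    return weights.meta["categories"][int(idx)], float(conf)
```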
I have somebody lined up to talk about AI in radiology and laboratory science, because there are some misconceptions there too. So we'll dive back into that in a future episode. All right, wrap-up questions. You're not going to get away without those just because you're not a vet podcast listener.
Are you a listener, or are you making all of your podcasts with AI now? No, I listen to a bunch too. I walk Potato and listen to podcasts while I'm walking, so I get through a lot of content that way. I like the Uncharted veterinary podcast. Funnily enough, that's the one where, when I was getting into the space, I listened to 10, 15, 20 episodes just to understand the space of problems that were there.
There was one about note-taking, so that's how this started in my head as well. And then there's an AI podcast that I really like called the Dwarkesh Patel podcast. It's really good for learning about what's coming and how people are thinking.
It's very forward-thinking. You know, a lot of the folks there think AGI is coming in the next three to five years. But it's interesting to get that perspective, and he gets a lot of people from the cutting-edge labs to come and talk about their work and where they see the problems.
So I find that one really interesting. Conversations with Tyler is my way of making sure I'm not just solely listening to AI and vet med stuff. I like that one for the well-roundedness of all the various topics and the insightful questions he tends to ask.
Yeah, that's Tyler Cowen. Yeah. And then the Acquired podcast. I'm a business history nerd, so I like hearing all the business history of these various companies and all the turns they went through to get where they are. Sometimes it can provide some comfort as we're, you know, navigating the ups and downs.
And then recently I listened to an audiobook, which is the first time I've done a full audiobook just listening in my earbuds. That was The Perfectionists, I think it's called, and it's all about precision engineering. The biggest takeaway is that humans are really, really smart, and there's a lot of human ingenuity that has compounded to create this world we're in.
I found it fascinating to hear about all these things we take for granted, and how it was one person fighting a very uphill battle to make that thing true in the world. So I really loved that book. Your second audiobook should be When We Cease to Understand the World.
I think that's what it's called, something along those lines. You just made me think of it because they look at historic moments of scientific breakthrough, but there's a slightly dark twist to it: these genius people develop incredible things that then have horrendous impacts on the world.
But cool storytelling, cool writing, and good to listen to. So there you go. Pass-along question: I've actually already asked this one, but the one I have for my next guest is very, very specific, so it's not appropriate here. I'm going to ask you this one because I think it'll be interesting: what is the number one misconception in your space, the mistake or error people make, that you would love to go away?
A good one to correct. Yeah, I'll define my space quickly. I'll define my space as being an engineer for a second, because I still think of myself as an engineer. Can we clarify that? I was going to do it at the beginning: when you say engineer, I think most vets think of building bridges and roads. Oh, no.
Computer science. Yeah, computer science. I was trained in electrical engineering and computer science, but I spend all my time as a programmer, or used to. So I'll answer as a programmer, thinking of that space, because there's one that frustrates me here. There's actually this funny misconception that's kind of the opposite of everything we've been talking about.
If you go on the forums that a lot of programmers frequent, it's that AI is pretty useless, that it's all a hype bubble, that it doesn't really help you, it's not good at writing code, it makes too many mistakes. And at the same time, there are these companies that have created huge valuations by building developer tools.
So clearly people are getting a lot of value out of it. I like to dispel this misconception because I have this image of a junior engineer showing up on these forums, reading "oh, it's all hype, I don't need to spend my time on this," and really doing themselves a major disservice, because it is a really powerful tool.
And yes, there is a lot of hype, but it is a really powerful tool. It can really move the needle for a person as an engineer, and I think in pretty much every profession it probably has some impact. I'm sure there are some where it doesn't, but mostly you'll probably find some way to use this tool.
So funnily enough, I think the biggest misconception is that it's all hype, there's nothing to see here, and it'll all go the way of crypto and fade out of consciousness over time, which is maybe what's happened a little bit recently. I don't think that's true, and people really should start to get familiar sooner rather than later, because it's going to be very, very impactful. OK, question for my next guest from you. Yeah. To keep on the AI theme: what part of your work day do you hope AI automates, and what part of your work day are you hoping it never touches?
I like the second part of the question; I've never thought about that. I think about the first part all the time, but the latter I haven't considered. Interesting. Cool. Rohan, thank you so much for that; that was really insightful. I don't know, do you find that vets are technophobes in general?
I find that when I'm excited about this stuff, people think I must be super techie. I'm not at all. I'm not a Luddite, but I'm in that weird generation that grew up without computers; they only started coming into my life when I was about 16, and I've never been into them.
But now I am, and this stuff appeals to me because it's a problem-solving thing. I don't do it because I enjoy playing with AI; I want to solve problems, and it does that for me consistently. So do you find there's resistance in your client base to taking up these things? Yeah, there's always resistance.
I think some of that just comes from the perception that it's hard to use, rather than it actually being hard to use. But the interesting thing is that it's actually the folks who like tech the least, to some degree, who get the most value out of it, at least for us. Because, you know, you can spend 45 minutes clicking around and typing, and if you're a technophobe, you may not be a fast typist.
So you can either spend a lot more time in front of your computer, or you can use AI and spend a lot less time in front of it, because it's automating a lot of that work for you. We actually find the opposite of what you'd expect: it's the folks who want to spend less time in front of the computer who really love Scribble, for the fact that it helps them do that.
And that's probably some of what you're seeing. You could have spent a long time building out the tools you were building, and probably hated a good portion of it, because you'd be deep in the arcana of how does this programming language work, how do I set up this environment. But now you've been given the capability to solve your problems without spending three months figuring out stuff you're really not that interested in.
You can just go directly to the problem-solving part. You make a very good point; I think that's it. It's using screens to use less screens. Yeah, and you can solve the problem and then be done. You know, I'm interested in programming languages, I like learning about them and things like that, but that's not for everybody.
This enables you to not have to know any of the programming languages and still build a tool; you can even write code without using the programming language at all and get a custom-made tool for yourself. So I think a lot of these things are interesting because they're going to help us disconnect from technology a little more than before and just get straight to solving problems.
Thank you so much for your time, for the work you're doing, and for helping us get out of work sooner. Yeah, I appreciate you taking the time, and going over as well. Thanks so much. Before you disappear, I wanted to tell you about my weekly newsletter.
I speak to so many interesting people and learn so many new things while making the clinical podcast, so I thought I'd create a little summary each week of the stuff that stood out for me. We call it the Vet Vault 3-2-1, and it consists of three clinical pearls: three things I've taken away from making the clinical podcast episodes, my light-bulb moments.
Two other things: quotes, links, movies, books, a podcast highlight, maybe even from my own podcast; anything I've come across outside of clinical vetting that I think you might find interesting. And one thing to think about, which is usually something I'm pondering this week and that I'd like you to ponder with me.
If you'd like to get these in your inbox each week, follow the newsletter link in the show description wherever you're listening. It's free, and I'd like to think it's useful. OK, we'll see you next time.