Scott Chetham is the CEO of Faro Health, a company revolutionizing the design of clinical trials through intelligent, data-driven collaboration. A former clinical scientist from Australia turned CEO, Scott has spent decades inside the system he’s now helping to transform. Under his leadership, Faro Health is harnessing agentic AI to automate the most complex, time-consuming elements of drug development—shortening timelines, reducing costs, and ultimately making life-saving therapies accessible sooner.
>> Craig Gould: Scott Chetham, thank you so much for joining me today on the podcast. Scott, you're the CEO of Faro Health. Faro Health is reimagining the design of clinical trials through a data-driven platform that replaces static documents with intelligent, collaborative workflows, helping life science teams run studies that are faster, more efficient, and more patient-centered. I'm really looking forward to our conversation because it sounds like this is a pain that really needs to be solved. But Scott, I love to start these conversations with one common question: Scott, what are your memories of your first job?
>> Scott Chetham: Oh, wow, my first job. Keep in mind, I'm from Australia, from a smaller town called the Gold Coast, which is about an hour and a half south of Brisbane, the capital of the state of Queensland. Very hot, just put it that way. I worked at their version of SeaWorld and I fried chips, what I would call fries here. So some of my first memories are sweating into the vats of fat in the middle of summer when it's 110 degrees. I almost never eat fries now, by the way; it turned me off fries because it was so hot. I basically worked in a production kitchen at 15, and that was my first job.
>> Craig Gould: You learned an appreciation for your line of work now versus standing over a deep fryer a little bit.
>> Scott Chetham: I mean, it's been an interesting journey to CEO, because it was never my aim in life. My wife laughs at me all the time because at parties, when people ask what I do, I usually describe myself as a clinical scientist, someone who designs clinical trials. I'm a scientist at heart and always will be. She goes, no, you're not. You're the CEO of a startup company. But we're not that much of a startup anymore. You're the CEO of a company. So yeah, it's definitely been an unplanned and slightly unpredictable journey.
>> Craig Gould: So the way you answer that question, is that because of how you see yourself, or because you don't want to launch into a whole conversation about being a CEO? I have another podcast where I talk to artists, and I ask them, hey, how do you answer that question when somebody asks what you do for a living? You say you're an artist, and they ask, hey, tell me about your work. There's always this part where they try to assess how much this person really knows about art: should I just be really vague, or should I really go into the weeds and try to explain it to them? Because you're in a highly technical field. Are you inclined to just keep it at a higher level when somebody asks you that question?
>> Scott Chetham: It's probably two parts. There's who I identify as, or my characteristics, and there's what I do with it. Some of it's just the legacy holdover from being that for over 20 years. I would say I bring a deep curiosity, and what I call the discipline of scientific rigor, to the CEO role. It works very well when you're the CEO of a company, but you just have to be very careful of blind spots, because obviously there's commercial and other areas you have less experience in, so you have to rely on trust a lot more and create what I call a support network to fill in where your experience just isn't. Whereas a lot of people who gravitate to the CEO role, and I'll be very clear, this is not a judgment call, this is just how it is, come from more of a business background. They bring a different lens, and maybe a different style, to how they operate. I don't think either is right or wrong. You just have to be very careful of your blind spots.
>> Craig Gould: You found yourself in this place because you have been up to your eyeballs in clinical trials for a very long time. For the listener that may not have a lot of knowledge about this space, can you kind of set the landscape for where clinical trials have been: how arduous they are in terms of requirements and compliance, how abusive some of the process is in terms of the amount of paperwork and the inefficiencies? I don't mean to make it a leading question, but can you talk about some of the pains that have been reality up to this point, that maybe have led you to try to solve them?
>> Scott Chetham: Yeah. Can I frame it back at you slightly differently?
>> Craig Gould: Yeah, absolutely.
>> Scott Chetham: I'll explain why I think everyone should really care about this problem. In my opinion, rightly or wrongly, pharmaceutical and biotech get beaten up for expensive drugs. But the question is, well, why are they so expensive? I'll explain exactly why. It takes about 12 to 14 years to get through the clinical trial pathway to prove a drug is both safe and effective. It goes through multiple phases. You have to deal with multiple regulatory bodies, multiple reviewers; the FDA is the common one in the United States. The fact that this part alone takes 12 to 14 years and $2 billion means that if you want to invest in it as an investor, you need a good capital return. Additionally, the failure rate is close to 90%, and unfortunately it's actually getting worse, not better, every year. We can unpack in a conversation why that is, because it's actually well quantified. So now you've got this competing thing: if you're going to bet on drug development, when you have a 90% chance of failure and it's a 12-to-14-year process that costs $2 billion, your investors expect a solid return. It's straight from finance. This is why drugs are so expensive. So how do you fix that? The lever we're focused on is shrinking the amount of time and cost it takes to run clinical trials and compressing it, because we can control a lot of the way we work. And we saw it happen in the pandemic. The COVID vaccines were brought to market in one to two years, and how that was done was sheer brute force labor. No one cut any corners. But the FDA didn't wait on its reviews. Often things come in and it takes the FDA six to nine months to get to something; in this case, it was obviously a priority, so some of the wait times you would normally suffer through, waiting for responses, just disappeared.
But basically it showed that if you throw enough labor at something, you can compress the timelines and the development costs, because in reality, that cost is a function of the time. Why do things take so long now when they never used to? Because the science got more complicated. I'm lucky, and I'm going to pivot for a second: I have a good knowledge of what's coming out of a lot of the pharma and biotech companies, because it's my life. And there are some really exciting breakthroughs that everyone's going to benefit from that are in trials right now. It's never been a better time to be in this field, except for the fact that our failure rate is increasing. And why? Well, it's because we collect more information than we ever did before, and when that collection is manual, it has blown out the timelines and the cost, and it's actually increased the failure rate. The more data you collect, the more mistakes you're going to make in transcription and the more mistakes you're going to make in planning, because everything has to be documented. If you can't document it, the FDA, or the EMA in Europe, will tell you it didn't happen. And there's a very explicit way it has to be documented: it has to be attributable, a person has to sign or e-sign it. Every single element, not just pages, sometimes items on a page, has to be attributable. So the burden of proof and the planning is enormous, and the amount of work and documents, because we collect more, just got bigger and bigger, and I would say the manual processes haven't kept up. Hence why Faro got created: with AI and agentic AI, you can actually automate many of these manual processes. To be clear, AI is not taking away the higher-level thinking humans are doing.
What it does exceptionally well is a lot of the brute force paperwork that takes weeks and weeks. For example, take a protocol review today. A protocol is what has to happen in a clinical trial step by step: every single thing that has to happen to every patient, how you're going to analyze the data, how you're going to report it, how it should be thought about. To review that document with four to five experts takes about three weeks. We can do a probably better review in about eight minutes and give you deep analytics. That's an example of how you compress timelines, and that's what's coming. The exciting piece is that as we pull the timelines down, the cost of these things will go down, and that will reduce the cost of pharmaceuticals on the market. It's the only lever we can pull here, because that's what's really driving the price: you have to get a return, because every successful drug has to pay for the nine that failed, as an industry. And we're not getting any better, in my opinion, at predicting biology. This is why we run clinical trials. There's a lot of work in that field to select better targets to try and push this rate up, but in the end, we don't really understand biology, so we have to run trials. You're stuck in this gauntlet you have to run through. So compress it, and we'll all benefit. But it's a journey. I would say all the pharma companies, everyone, is getting on board with this now, because they've realized this could transform this industry and make it much more cost effective.
>> Craig Gould: I feel like there's a subtext in there somewhere that we're sort of overwhelmed by the amount of data we can get our hands on, and that for some of these people running trials, if they get their hands on data, they're inclined to throw it into the records, and some of these are getting rather bloated because of the amount of data they have on their hands. Instead of further justifying the cause, it's just making the process longer and longer. Am I understanding that right?
>> Scott Chetham: Yeah, that is a highly contributing factor. In fact, Tufts, the big center there, just published a paper about it; well, it's in preprint, so it's a few weeks old. They did a systematic analysis of, I think it was 18 of the top 20 pharma companies, and they identified that between 20 to 30% of the data collected in trials is not supporting the safety or efficacy or the commercial pathway of the drug. So we are over-collecting a lot of data that burdens the patients, our participants. It burdens the hospital system and the science of how we do it and how we collect it. There's a big push now to really, what I would say, right-size these: are we asking the right question at the right time? Because I can tell you there's a lot of fear of "I can't go back and collect this again," so there's a tendency of people in my field to over-collect, but there's a big price associated with that. Now that we know what it is and it's quantifiable, helping right-size that is one of the things Faro does.
>> Craig Gould: You offer help to pharma, in doing this process, what, what sort of friction do you encounter? I mean is it the, the particular formatting of one pharma company versus another? Is there a universal submission process or is everyone doing things their own way? What are some of the friction points that you have to overcome in terms of helping these people become more efficient?
>> Scott Chetham: It's people. This sounds funny. We had a customer yesterday; they made the joke at me that I usually make to everyone else: in reality, there are no special snowflakes in this field. We're highly regulated, so we all have to do the same thing and everything is checked. So yes, the way you do it can be a little bit different, but the biggest obstacle is people and change management. We've been doing things the same way for 20 years; not much has changed. Until us, most people developed protocols in Microsoft Word. These massive programs, tens of millions of dollars, sometimes even up to $100 million, were often put together in a table in Microsoft Word. It's hard to change that embedded behavior. So the biggest part is change management and bringing everyone in the company along on the journey. This is a field of highly specialized individuals; we do narrow things exceptionally well, but the handoff is not always great, and decisions made in one area can cascade through others. There are a lot of unintended consequences, and bringing everyone along on that journey is work and effort. That's the real friction. The technology works, and you can build it. It's never a technology problem. It's a people problem.
>> Craig Gould: So tell me about how your solution integrates into those existing templates or existing methodologies. I mean, if they're hard-coded to "I've always made a thousand-page Word document, that's the way I'm going to continue to do it," how does Faro come alongside and help make that a better, more efficient, easier process?
>> Scott Chetham: Yeah, that's why I think this is really an exciting time, because this is where agentic AI shines. I don't know if this is the right way to think about it, but this is how I do: previously, if you were in software as a service, traditional SaaS, you had to basically build business processes into workflows, clickable screens. You had to put business rules into screens, buttons to click on. And so you had to take people out of the way they were doing things and pull them into your platform, and that's a lot of friction. Agentic AI is this nice thing that, when you get it right, runs alongside people using a lot of the existing tools they use. For example, our authoring system works inside Microsoft Word, so you don't stop using what you already love. It's smart enough to read your template; you don't even have to do anything. It can parse the templates you're using, work out what content needs to go where, and start writing for you. It's the same for complex budget creation, which can go straight into Microsoft Excel. Rather than having to build an entire software platform that does everything, you can build a lot of things in the background that work alongside people. I think that's where this first part of the AI revolution really is: it's accelerating people by working alongside them, particularly in these areas where you have to have a human in the loop. This is high-stakes stuff, but the work is often laborious and time consuming. The thing we focus on as a company is how to automate these repetitive tasks, where you need to know what you're doing, so that automating that part of the task frees you up to do the higher-order thinking. And we do it in a way that tries not to disturb what you're doing.
So while our technology obviously has screens and interfaces that read, understand, and map protocols, from there we try to use the existing infrastructure: we plug our systems and agents into what you've got. And we found that's a lot less friction than adopting a whole new thing.
>> Craig Gould: So does that mean the agents kind of pop up at the side making recommendations, making observations? Because I feel like I've heard you speak before about your technology being capable of assessing the expected pass rate of a clinical trial: given where we're headed, this is likely not going to work within a timeframe, or isn't going to pass, versus we're looking more likely, maybe we need to change this or that. How does that integrate? Is it popping up, or is it more embedded into places in their flow?
>> Scott Chetham: It depends on the type of work. If you are authoring and writing something, the suggestions, review, and comments will appear inside Microsoft Word like a person's would, but it's a synthetic person. But if you're in our design environment, it comes up as an entire report with analytical screens. So it depends, as I said, on the nature of the work. We tried more than anything to fit within the tool set you're already using, but there wasn't really what we'd call a study designer before. So for study design, that is our unique SaaS experience, because we're not replacing anything; there just wasn't anything other than basically inserting a table in Microsoft Word. We look a little like that, honestly, but then there are a lot of extra screens for analytics, review functions, and version control, and some smarts under the hood about how that works. That core concept, or source of truth, then maps into hundreds of downstream systems, and that's when I would say the agents tend to take over and we work alongside people. But the very start, creating a sole source of truth, is a more traditional SaaS experience, also because nothing quite existed before us in that space.
>> Craig Gould: Terms of product market fit. I, mean, because it seems like you’ve got a really clear ROI here. Time is money. But you know, in this case, it’s not just time is money. Time is also, it’s also lives. Can you talk about how you quantify the savings or the impact on profitability when, when you’re talking to, pharma companies?
>> Scott Chetham: That's actually a key part of how we sell. In fact, before this, we had a portfolio review call with the sales team, and we've been working on an automated ROI tool, which is exciting because we'll be able to tell customers in advance, even looking at the historical data, how much money they could have saved. So we're pretty scientifically rigorous about it. The nice thing we can do is this: you basically start working with us when you have some idea of what your trial will look like, so we have that case saved. Then, as you make decisions and we report opportunities to improve the design, we record all of that and frame it back at customers. For many studies, the order of magnitude can be anywhere from $1.5 million up to tens of millions of dollars. So it's a very compelling and very clear optimization ROI story. When you then go to these downstream process automations, it gets fuzzier, because then you're looking at time saved per employee, and that math is always interesting. In reality, for time saved on programs, the way to calculate it is you take your peak sales day, and every day you push that peak sales day back is lost revenue. That's what a day costs you in pharmaland. So we tend to use those metrics for acceleration of labor. As part of our growth story now, we work with all our customers to calculate the ROI from day one, and we make it very clear to them what that is.
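The "peak sales day" arithmetic Scott describes can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions (peak-year sales spread evenly across the year); the figures are hypothetical and are not Faro's actual ROI tool:

```python
def delay_cost(peak_annual_sales_usd: float, days_delayed: int) -> float:
    """Estimate revenue lost by pushing the peak-sales date back.

    Assumes peak-year sales are spread evenly across 365 days and that
    every day of delay forfeits one day of peak-period revenue.
    """
    daily_peak_revenue = peak_annual_sales_usd / 365
    return daily_peak_revenue * days_delayed

# Hypothetical blockbuster: $1B peak-year sales, a 30-day program delay
print(f"${delay_cost(1_000_000_000, 30):,.0f}")  # about $82 million
```

On those assumptions, even a month of avoidable delay on a billion-dollar drug translates to tens of millions in deferred revenue, which is why shaving weeks off document review shows up so clearly in an ROI calculation.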
>> Craig Gould: How different is the process with the FDA in the US versus getting the same drug approved in Europe? I'm sure these pharma guys are used to submitting both approvals at the same time, but are they having to create two different decks, or is it easy enough that they're able to present the same boilerplate information to the other organization?
>> Scott Chetham: This gets very complicated very quickly, so I'm going to zoom way out, starting at the protocol level. We try to make one international version, but usually you'll have to tweak it per country. When I say tweak, it's because some tests may not be available there, the local laws for age of consent for children are different, there are subtle things. Particularly in oncology, the standard of care is actually different, so you can't run a comparator arm that looks like, let's say, the United States' in China. This is also why it's difficult to submit data from China into the United States. It's not just political; the standard of care is different, so how do you compare? You're not comparing apples to apples. That's the real problem. In Europe, the standard of care is different again. We try to harmonize this as much as we can so that we can run a global protocol. From then on, the submission depends on the country, and thankfully a lot of the countries are harmonized in the way we have to report the data and the way we can electronically submit it. There's a CDISC group that does the coding structure. Not every country is a member, so it's not 100%, but a lot of the major countries are, and that makes life easier. But again, this is where it can get very complicated very quickly. Take Europe versus the United States: the United States has a central regulatory authority but a dispersed payment system, multiple payers, private insurers. Europe is the inverse of us: distributed regulatory agencies that can assess things differently, but single-payer countries. There's a hidden catch.
The type of evidence that a payer, let's say in Germany, would want to see versus what the FDA wants to see from a scientific review are not going to agree up front. Germany may want to see certain things around improvement of the life of the patient, measured with, let's call them, instruments surveyed in a way that represents true life in Germany, versus the United States. This is where it gets really thorny really quickly, because you have to submit all of this data to everybody. This is where the nuance and the pain in this field really hits, and it's part of why we collect so much data: sometimes it's not just for safety and efficacy. We also have to prove that this thing is worth paying for, and worth this amount of money, which adds a layer on top. And personally, I find that weirdly more complicated than the regulatory authorities. Yes, they're a barrier, but to me they've always been better understood than the commercial barrier to selling. My commercial colleagues may say the opposite, by the way, so maybe it just depends on the seat you sit in, but that's always been my view.
>> Craig Gould: But it seems like a more highly subjective measure than what you could otherwise capture scientifically, right?
>> Scott Chetham: We try to make it scientific, but sometimes it's just not easy. Payment reimbursement can just get thorny. And this is where representation matters. How will I say this, given the current politics: you need to study your molecule in all the people that will be taking it, and that means you will have to run certain types of programs to make sure you get adequate representational diversity in your trial. It's just good science, because there's a lot of genetic variation. You can call it ethnicity or, if you want to be scientific, maybe genetic variation. You have to make sure you are covering the people that will take your drug, to make sure it's safe and effective for everybody. Even the regulators know that and do that, and payers care about it because they represent everybody. This is why it gets complicated very quickly: there are many nuances and layers to these designs that have to take in all of these things. And we do work very hard as an industry now to make sure we have adequate representation. Not just because it's the right thing to do, but because it's also the scientifically correct thing to do.
>> Craig Gould: Can we talk about how you're utilizing LLMs and how you're customizing the different models to fit different functions within this process, to guarantee the highest accuracy and efficacy of the data? Because I think everybody has fears that if we rely too much on AI, it will start hallucinating, and maybe things we thought were 100% accurate, we didn't realize the data was getting massaged. Can you talk about some of the guardrails you've implemented there?
>> Scott Chetham: Yeah, it's a great question, and I get asked this by our customers endlessly, for good reason. It comes down to the model having to be fit for purpose. There are many reasons you can get a hallucination, so I'm just going to give some abstract ones that matter in this field and how you can control for them. One is training on the right data. You can control that to an extent: you can take an existing commercial model or a smaller model, have your own curated data set, and train it, so you can fix that problem. That's one way. The other thing is that often, when you see hallucinations, you're asking a question that requires complex reasoning. I'm sitting in San Diego right now; if I ask, how do I get to New York, there are a number of ways you could go about getting there. Then you could add more constraints: oh, I don't want to take a flight between these times, I want to fly out of here. You are forcing this thing to do more and more complex reasoning, and the more constraints you pile on, the more likely it is to hallucinate, the more likely it is it didn't have data for the question you asked. The way you control for that is you don't let it do complex reasoning on its own. You force it into what I would call structured reasoning, which stops that drift a lot. Instead of asking it to get from San Diego to New York tomorrow in one go, you force it into a series of steps, or ask questions that force a logical pattern, and it will stop drifting. For example: you must take a flight; you must first take a car to get to the airport. You remove the drift by forcing it into constrained steps.
Instead of complex, high-order reasoning, forcing it into smaller, controlled steps means, one, you can validate it: because they're small steps, you can actually build automated tests, like, is this thing performing correctly on this small thing? But you can still build an incredibly complex solution to a problem this way. And you can build guardrails on these tiny steps really well, because you could say, okay, I've got to get to the airport; we can't go by boat from my house, but you can go via car, I could call an Uber, I could call a Lyft. So you have to build constraints and guardrails, and you have to really control the complex reasoning, because that's where it really goes wrong. When it's forced into smaller, more constrained steps, you get very reproducible results, to the point that it's incredibly reproducible. That's some of the trick with agents. There are some other tricks as well that I'm not going to give away, but a lot of it is around controlling the reasoning process itself. When you work in a highly regulated area like ours, where you have very complex work but the work has to happen in a certain way, it lends itself very well to this. In more creative avenues this would be hard to implement, and it would be impossible on a consumer-grade thing where you have so many degrees of freedom in the solution. We want narrow, tightly controlled work product. So in some ways we're quite lucky that this field lends itself to what we do really well, just because of the nature of it.
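The structured-reasoning pattern Scott outlines, decomposing one open-ended request into small steps that each carry their own automated guardrail, can be sketched roughly as below. The step names and checks are hypothetical illustrations borrowing his San Diego-to-New York example, not Faro's actual agent pipeline:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    """One small, constrained reasoning step with its own guardrail."""
    name: str
    run: Callable[[dict], dict]    # produces new facts from the state
    check: Callable[[dict], bool]  # automated test on that step's output

def run_pipeline(steps: list[Step], state: dict) -> dict:
    """Execute steps in order; halt if any guardrail fails.

    Forcing the work through fixed, individually testable steps
    (instead of one open-ended prompt) is what keeps the result
    reproducible.
    """
    for step in steps:
        state = step.run(state)
        if not step.check(state):
            raise ValueError(f"guardrail failed at step: {step.name}")
    return state

# Hypothetical trip-planning decomposition from the conversation
steps = [
    Step("ground transport",
         run=lambda s: {**s, "to_airport": "rideshare"},
         check=lambda s: s["to_airport"] in {"rideshare", "car", "taxi"}),
    Step("flight",
         run=lambda s: {**s, "flight": (s["origin"], s["destination"])},
         check=lambda s: s["flight"][0] == s["origin"]),
]

result = run_pipeline(steps, {"origin": "SAN", "destination": "JFK"})
print(result["flight"])  # ('SAN', 'JFK')
```

Each step's `check` is the kind of tiny automated test Scott mentions: it can only pass or fail on one narrow question, so drift in any step stops the pipeline rather than propagating into the final answer.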
>> Craig Gould: You know, it sounds like the system's ability to reason and measure the reasonableness of what you're planning for the trial has been a real benefit. I've heard you talk before about this example where a pharma company was looking at doing a pediatric cancer trial, and the system was able to kind of raise its hand and say, there's no way you're going to be able to get a child to do what's being asked in this study.
>> Scott Chetham: Yeah, that's correct. It's really interesting; I've got many examples. We had a recent customer, a fascinating case, where the trial was in an adolescent CNS indication. I can't really give more detail than that. There was a contentious part of this; there were actually many items they ended up removing, but they admitted it after, because it was a pilot of the system. There was an item around Tanner staging, which measures how far through puberty you are, and it was asked many times. The system correctly pointed out, one, this actually won't affect the outcome in any way: where they are in Tanner staging doesn't matter. Two, you're collecting it multiple times, and it's not going to change through a study this short. And three, your five competitors never measured it, and one got regulatory approval. What people don't realize is that the ability to do that analysis takes an incredible amount of controlled reasoning. There are about 34 agents sitting under the hood, something like that, to be able to do something like that. That company admitted afterwards that everything it pointed out, they had argued about again and again internally and couldn't reach agreement on. When they actually saw that none of the competitors did it, they finally agreed: oh yeah, we're getting rid of that. But we've got a lot of examples where you can do incredibly complex tasks and reasoning through very tightly controlled steps and data acquisition, and have it reason across them. We found that that's the trick. These things off the shelf, or a general model, if you ask a broad question, it's too broad. You have to force the reasoning a lot, in the way a human would reason. I know, because I talk to one all the time.
I have a ChatGPT instance that’s been open forever, and we just have conversations. I actually have conversations with it about my leadership style and get commentary all the time. It’s very interesting; I think it’s got more insight into me than I do at times. But it’s amazing how you can do very complicated things with tiny little reasoning steps. They build on each other, they build up a data landscape, and then you can reason across it very effectively. But you have to be able to think that way, and this is the trick in this space: you have to hire teams who can think this way. If you want to know what’s hard in a company like this, it’s having people with AI expertise have a meeting of the minds with people with domain expertise and truly collaborate. Not just, oh, we’ll talk, or we’ll do this, but a true meeting of the minds, working through, step by step, how to solve very complicated problems no one’s ever solved before. That’s what’s really hard. And it’s a cultural thing, and it’s constant reinforcement, and it’s how we’re structured. But that’s the real trick, I think, in this space.
>> Craig Gould: Before we started recording, you were talking about the demands on your time given the global nature of the business and just how early it requires you to wake up. And you were just talking about creating teams that are capable of going back and forth across their skill sets. How do you help your team navigate such a highly regulated, highly innovative environment without burning out?
>> Scott Chetham: That is a great question, and it’s the risk. I can tell you our board considers it the biggest risk we identify internally, because we run pretty fast, and that’s my nature. This might be a little bit of a segue, and I’ll come back. I think the leadership style you bring is often a reflection of your hard wiring, and I’m one of these people who is wired for the journey. I enjoy the journey; I very rarely celebrate a destination. If I win something, I tick a box and it’s, okay, next thing, and I just keep going and going. It’s fundamental to who I am. But the flip side is that very few people are wired exactly that way. So I have to purposely make myself pause and celebrate wins, and transmit that to the team so they know it’s okay to celebrate a win. We just did it yesterday. Sorry, Wednesday, it’s Friday. And that was a tiny achievement for me, because I’m finally getting into this pattern of celebrating the team when we achieve certain milestones. Because otherwise I think you will burn out. Part of it is letting some of that natural ebb and flow move through the team, because, like I said, I have a few blind spots. I don’t have that rhythm personally; I just keep going. It’s good and bad, depending on the situation.
>> Craig Gould: How do you find good people? What do you look for, beyond skill sets? How do you create a team?
>> Scott Chetham: Yeah, another good question. Maybe it’ll be an unsatisfying answer. I’m a first-time CEO, and what I’ve personally found through this journey at Faro is that I like deeply curious people, because one of the things I keep learning is that we’re solving net-new problems, and you have to love the problem. So we have to recruit people, at least in leadership functions and team leads, who are deeply curious, who ask a lot of questions, and who challenge people’s assumptions in a professional way, not a combative way. Apart from straight-up intellectual curiosity, I’ve started to value, depending on the role, sheer intellectual capacity. The other thing is the ability to communicate, and clarity of thought. I’m almost at the point, though I haven’t done this, of having people write something just to show that they can organize their thoughts in a clear process. Because ultimately, success in these complex processes depends on your ability to communicate and bring people along on that journey. That’s where I see people struggle: people who cannot clearly communicate their thinking and their reasoning will struggle, I think, in this new world with AI, frankly. It’s the thing I’m seeing that separates people who are adapting very quickly from those who aren’t. And the last one is you’ve got to be adaptable. You can’t have a fixed mindset. If there’s new data, you need to unanchor from what you were previously thinking.
>> Craig Gould: Well, it seems like the adaptability would go hand in hand with being highly curious. But how can you determine whether somebody is really curious, other than them just telling you, hey, I’m a curious person? How are you capable of measuring that?
>> Scott Chetham: I’m giving away my tricks; if anyone interviews with me, they’ll now know. I fish around. I start asking, what was the last book you read? I don’t really care what you read. What I’m really picking at is what got you interested recently, what you were reading about and trying to learn. When did you learn something new? Depending on the person, the question will change, but what I’m really asking is: when’s the last time you went down a rabbit hole? Even superficially, it could be about anything; I don’t think the topic really matters. I have a buddy who’s going down a rabbit hole right now on how to make alcohol in Portugal. If I were interviewing him, that’s someone who is deeply curious, who is researching something out of intellectual curiosity, and who is going to solve the problem because it has to be solved. In a company that solves problems for its customers, that’s really important, and it matters even for our customer-facing teams. A lot of them come from consulting backgrounds, because I think you can teach some of this; many of the consulting firms teach that clarity of thought, how to articulate problem statements, and how to clearly frame problems back for people. So yes, I indirectly ask about it.
>> Craig Gould: What’s on the horizon? How do you see the landscape for pharma trials, and how do you foresee Faro as a part of that, five, ten years down the line?
>> Scott Chetham: I think we’re on that track now. Previously, I would say the sole source of truth for everything was the written word. All the knowledge is locked in Microsoft Word documents, put inside vaults, and locked up. What we’re enabling instead is the move to turn that into actionable data, and from that you can automate the production of more documents, or even program entire systems automatically. The journey we’re on is to become that operating system for pharma, so that the clinical development pathway, instead of being a very manual, burdensome, document-driven pathway, becomes an entirely data-based field. I sincerely hope for that, and it’s happening right now, with a lot of momentum behind it. We produce artifacts and documents from data, and people move to what I would call higher-order thinking, versus pushing documents around, reading them, and writing another specification. And that’s happening. We’re partnered with Veeva, who builds one of the big electronic data capture systems. An EDC system is essentially web forms for collecting data from patients in trials, and these systems traditionally take six to eight weeks to program. That’s now substantially automated, and there are videos on their website of that process being automated. That’s where we’re heading. What I would love to see happen for the industry is that our timelines get compressed, so we see a reduction in cost, and then that cascades through to a reduction in pricing.
>> Craig Gould: Is that wishful thinking? With big American businesses, if there are cost savings, that doesn’t necessarily result in lower prices; it sometimes results in bigger profits. So I guess we’re hoping there would be a pass-through, correct?
>> Scott Chetham: Yes. And there is already, but right now it’s more indirect. A company will reinvest a certain percentage of profit in R&D, for more shots on goal. Let’s say that percentage is 30, and we’re around 30, to be honest. If I have more money or profit, then I’m going to invest more and more in R&D, which I think is the win for everyone. Yes, shareholders get a cut, but if that 30% gets bigger, then we all win anyway, because it’s just more shots on goal. And in some ways it feeds back: more shots on goal for the same amount of money means your likelihood of success goes up a little more, so it grows a bit more. That’s how I like to think about it. I know pharma tends to be an easy target for these types of criticisms, but having worked in this industry for a very long time, most people like me went into this field to help people. I know these are big corporations with large amounts of money underneath them, but in the end, if you unpack the culture and talk to the actual people at work, they’re there because they want to make a difference. That’s the reason this industry really exists. I know we’re an easy target because we do charge a lot of money, but we don’t really have a choice, I’ll be honest. And also, if you look at the profit margins, they’re not always that great.
>> Craig Gould: Well, Scott, one last question, and that is, can you give me your best surfing analogy for what you do?
>> Scott Chetham: Oh, man. That is not a question I would have predicted. I’m going to answer it a little bit differently; I’ve got that tendency, if you haven’t noticed. The thing that took me a long time to learn in surfing, and I actually think it’s true in business, is that if you want to get better, you need to go for more waves. The more waves you go for and try, even if you fail, the better you get. So I spent a long time focusing on wave count, measured per hour, because, who knows, I’m a scientist, I like to quantify things. How many waves can I try to catch in an hour, and am I getting better at it? And lo and behold, you get better. I think it’s exactly the same in business. You can make all these elaborate plans, you can sit around and wait for the perfect wave, and you may or may not get it. There are going to be other people paddling for that wave, and through no fault of your own, someone might be better positioned than you. They’ll have right of way, and they’ll get it. The only way to get better at this is to just keep getting your wave count up per hour, and then you’ve got more shots on goal. And I think it’s the same in business: you have to iterate quickly and get more shots on goal. Which is more like. Now I’ve changed analogies.
>> Craig Gould: Okay.
>> Scott Chetham: More waves per hour.
>> Craig Gould: Yeah. Scott, I really appreciate your time. You’re in the middle of something that is revolutionizing this space, and it’s one more example of how we’re utilizing AI in a way that really is making a difference and really creating value. Because that’s one of the questions a lot of people have had lately: people are throwing AI onto company names that really aren’t even AI, but where is it actually making an impact on the bottom line? It’s exciting to talk to someone like you, where the ROI is just so obvious in how AI is enabling that. So I’ve really had a pleasure speaking with you for the last hour.
>> Scott Chetham: Oh, well, thank you for having me as a guest.
>> Craig Gould: Absolutely.