[Episode artwork: stylized blue monochrome portrait of Stu Solomon with his name in bold block letters behind him and the Master Move logo in the corner]

STU SOLOMON

Stu Solomon is the CEO of Human Security, a cybersecurity leader verifying more than 20 trillion digital interactions each week to defend against bots, fraud, and abuse. A former Air Force officer and attorney, Stu has built a career around mission and curiosity, applying both to protect the digital ecosystem. In this conversation, Stu discusses the future of agentic AI, the importance of trust in cybersecurity, and the leadership principles that guide him at the C-suite level.

Episode transcript

>> Craig Gould: Stu Solomon, thank you for joining me today on the podcast. Stu, you’re the CEO of Human Security, a cybersecurity company that protects organizations by disrupting bot attacks, digital fraud, and abuse, and doing it at scale, verifying the humanity of more than 20 trillion digital interactions per week. That’s a huge undertaking, and I want to talk all about the unique work that you’re doing. But I’d love to start these conversations with one common question, which is: what are your memories of your first job?

>> Stu Solomon: I was actually warned, Craig, that you would ask this. And as I thought about it, it’s not one, it’s multiple. And that’s indicative of my entire career and, frankly, my personality. I can’t just focus on one thing at one time. So, I started out mowing lawns like anybody else would, but my real first job was being a newspaper boy, delivering Newsday across the streets of Long Island so that everybody could consume the news. And my least favorite part of that job by far, I remember, because it was just awful, was going out every Friday and collecting money for those papers. I would get that paper out there every day, but I couldn’t stand knocking on the door and saying, please give me $3 for the week.

>> Craig Gould: It’s surprising, or maybe it’s not surprising, how many CEOs say that their first job was as a paperboy. I was just watching a video yesterday where Apple’s Tim Cook was talking about his first job being a paperboy, getting up at 3 o’clock in the morning and throwing papers and then going back and taking a nap before school. But collections, yeah. I had one guest whose collections were easier because he was delivering to the people in his giant apartment complex. He would see these people every day, like on the elevator, and there was this social anxiety for them if they chose not to pay the kid in the building that was asking them for a buck and a half or whatever. Right. But what do you take from that experience? What lessons do you still carry with you?

>> Stu Solomon: Yes. So, I mean, honestly, the number one lesson I took is that although it was completely my responsibility, I was incapable of doing that job alone. My parents probably delivered more newspapers than I did over the course of the year that I did this. My sisters were stuffing the Sunday edition with coupons every Sunday morning. It was a family affair, which taught me a lot: you might be the front person, but there’s a team around you that actually gets the job done.

>> Craig Gould: Did you pay your subcontractors?

>> Stu Solomon: Oh, certainly not. I did in fact grace them with my presence, but I don’t think that they found as much value in that as what I derived from them.

>> Craig Gould: Your career trajectory I find really interesting. And I don’t always have real chronological conversations with folks, because we kind of dive back and forth, but you go to the Air Force Academy, you spend time flying in the Air Force, you wind up getting a J.D. at Rutgers, and then you make some interesting industry switches. Your background’s pretty diverse. Can you talk about the touch points that convinced you to go from one trajectory to another, to the point that you have obviously found the place that utilizes all of your skill sets? How did you get there?

>> Stu Solomon: Yeah, Craig, that’s such a cool question. I really appreciate it. If there’s a common theme throughout all of it, it all boils down to mission. Focusing on a mission to accomplish. Whether you’re wearing a uniform, whether you’re serving society, whether you’re building a business, whether you’re taking it to the bad guys, at the end of the day it’s all about a singular mission that literally drives all of your decisions and all of your activities day to day. So in every single one of my career stops, that was a common theme. The other common theme was natural intellectual curiosity. Just absorbing and learning as much as possible. I’ll give you an example. I made my way to law school. I don’t know that I ever really wanted to be a lawyer. I tried; it wasn’t really my cup of tea. But what it did is it challenged me intellectually. Sitting there and learning and just absorbing all these different perspectives and points of view allowed me to look at things through a different critical lens than the traditional science and engineering mindset that I had grown up in during my undergraduate career. So it became an intellectual complement to the way that I attacked problems and problem solving. And if I look at my career, those are the two things that continuously drive me. It’s one, mission, and two, this constant pursuit of looking at things through a lens that’s generally non-traditional.

>> Craig Gould: When I look at that trajectory, I think some of us would expect you to have a really strong engineering background. But it’s obvious that you understand the mission, and you’re able to understand, maybe it’s an appreciation of pattern recognition, or holistic views of systems and how those things fit together. It’s obvious that even without the hardcore coding background, you understand conceptually what these threats are and how to counter them.

>> Stu Solomon: I think that’s absolutely correct. And it’s funny, I was actually speaking to one of my colleagues on the Human team earlier today, and he was asking a similar question. I equate it to a sports analogy. I’m also a sports guy. I was fortunate enough to play lacrosse in college and get to enjoy that as well. But there’s a moment in time where the field slows down, where you can anticipate where the ball is going, where people are moving. It’s more muscle memory than it is actually thinking. It’s the combination of all of my experiences over the last 30 years that allows me to feel that way in a business setting. Now, I think your analogy of an engineering system, a complex system coming together, is very, very apt at this stage of my career, in that I can kind of picture and see how the inner workings are going to happen and how they’re going to interact. And therefore it allows me to start to make different decisions, pull different levers, involve different individuals, to create the optimal outcome. But it’s very much about having enough collective experiences across a broad and diversified set of activities to be able to find those patterns in the first place.

>> Craig Gould: Okay, so I find the way you answer that question so interesting, because we could substitute the name of your company for Stu, and I feel like you would answer “what does Human do?” in much the same way. For the person who’s sitting next to you at a dinner party and has no idea what Human does, how do you start to explain how those same concepts apply to what Human does?

>> Stu Solomon: Yeah, it boils down to answering very basic questions about when you interface with your digital environment, and you interface with that digital environment ultimately to produce some kind of commerce activity, or to set up an account and log in and create an affinity relationship with a brand. Anytime those kinds of scenarios are happening, Human is probably working to keep the fraudsters out of that critical path, at a scale and a speed that’s unknown to most. We’re probably one of the largest, most important companies that you’ve never heard of but rely on every single day. Now, let me explain a little bit why.

We work across the digital ad space. If you place a digital ad in a connected TV environment, on a website, in a podcast, in a gaming environment, any number of different scenarios, there is a finite amount of space to match with a finite amount of interest in placing an ad in front of the demographic that advertisers are trying to reach. As those two interests come together between advertisers and publishers of digital ad content of any sort, lots of people want that same space. And so an artificial economy is formed to assign the right amount of value to that space at a point in time. And it’s competitive. Well, not only is it competitive, but it’s also a wonderful playground for fraudsters. Fraudsters, just like any criminal, try to get the most with the least amount of effort. And the best way to do that, to scale their attack patterns and activities, is to use automation. And automation takes the form of bot-based activity.

So what we at Human do is sit in the middle of that behavior, before and after digital ad content is bid on and placed, to ensure that we’re identifying, as rapidly as possible, billions and billions of times a day, in tenths of a millisecond each time, whether there is a human or a bot interfacing with the digital ad content. And if it’s a human, is it the right human or the wrong human? A good human or a bad human? If it’s a bot, is it the right bot or the wrong bot? A good bot or a bad bot? Really important in that discussion is that the human and bot determination isn’t a binary decision of good and bad; it’s a complex matrix of technical indicators and behavioral indicators that gives you a point-in-time view. We’ll talk more about that later, I suspect, as we get into agentic AI. But that’s a super important component: it’s no longer a binary decision of good and bad, because you have to live with both humans and bots interfacing with your digital world. That’s the first part of the equation.

The next piece: let’s say that a digital ad is placed correctly and it attracts me as the demographic. I am compelled to do the thing the ad wants me to do, which is visit their website, visit their store. I go in and I set up an account. Human is there to provide a number of different friction points, as well as to look for identifying characteristics that tell you if you can or should be setting up an account, and whether that account is set up in a way that’s indicative of a good transaction or a bad transaction. And then once you’ve authenticated, you’re able to go do the thing you came to do. What’s the whole purpose of the ad in the first place, at the very beginning of this whole chain? It’s to get you to buy something or to sign up for something. So now a transaction occurs, and Human is there to help protect that transaction against fraud as well.
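
To make that non-binary verdict concrete, here is a minimal Python sketch. Everything in it, the Verdict shape, the indicator names, and the thresholds, is invented for illustration; it is not Human Security’s actual model or API.

```python
from dataclasses import dataclass

# Hypothetical sketch only: the names, indicators, and thresholds below are
# invented for illustration and are not Human Security's actual model or API.

@dataclass
class Verdict:
    actor: str   # "human" or "bot"
    intent: str  # "benign" or "malicious"
    risk: float  # 0.0 (low) to 1.0 (high): a score, never a bare good/bad flag

def classify_interaction(technical: dict, behavioral: dict) -> Verdict:
    """Combine technical and behavioral indicators into a point-in-time,
    non-binary verdict on a single interaction."""
    # Technical indicator, e.g. a headless-browser fingerprint
    bot_score = 0.9 if technical.get("headless_browser") else 0.1
    # Behavioral indicator, e.g. click rates no human could sustain
    abuse_score = 0.8 if behavioral.get("clicks_per_sec", 0) > 20 else 0.2
    actor = "bot" if bot_score > 0.5 else "human"
    intent = "malicious" if abuse_score > 0.5 else "benign"
    return Verdict(actor, intent, risk=(bot_score + abuse_score) / 2)

print(classify_interaction({"headless_browser": True}, {"clicks_per_sec": 40}))
# Verdict(actor='bot', intent='malicious', risk=0.85)
```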

>> Craig Gould: I’m trying to understand more about cybersecurity, but it seems like the bad guys, maybe they’ve purchased a list of usernames and passwords off the dark web, and they create a bot that starts trying them. Maybe they don’t know what site a login belongs to, but they figure that maybe you’re using the same username and password over and over again. So they’ll set up a bot that will go from site to site to site, trying to use this login. I think part of what you’re able to do is understand, kind of in real time, when that sort of thing is going on. And it’s not just one signal; there’s a whole myriad of those types of things that you’re monitoring in real time, across a whole variety of data points with your customers. Right? Can you talk about the collective aspect, the ability, by having so many data points simultaneously, to see these things happening in real time?

>> Stu Solomon: Absolutely. And I absolutely love the way the question’s framed, because that’s frankly the magic behind all of this. We spend so much time equating AI, with a recency bias, to generative AI, and maybe a little bit more to agentic behavior now as it’s starting to come into the public consciousness. But the reality is that what you just described is really more of a data science and machine learning kind of scenario, which are applications of artificial intelligence. How do you take vast sums of seemingly disparate data, normalize them, munge them together, and create some kind of risk scoring that helps you understand a deviation from a pattern of normalcy, or a deviation from an expected risk level, that makes something more or less interesting to analyze further? That’s the big picture. This is just hardcore, day-in-and-day-out data science that needs to happen.

But you go a little bit further into that, and you used the word signal in your question. And signal is exactly the right phraseology for this. Every single data element that’s out there, both at a point in time and across a period of time, aligned with both observable and implied behaviors and technical components, leads you to a singular purpose: triangulation. There’s a triangulation between the identity stuff, as you said, compromised passwords, devices, things that are routinely made available to, let’s call it, malicious actors. Then you’ve got the infrastructure that’s being used to create a connection, to transmit data, to bring two parties together in some way. Then you have the behaviors associated with it, both at the time and over time. When you actually triangulate across identity, infrastructure, and behavior, you start to get a three-dimensional picture of what something is. The decision, as I’ve said before, is very rarely binary, good or bad. What it really is, is a higher or lower level of risk. Is that level of risk acceptable, or does it fall outside of your risk appetite? And by the way, there’s probably a series of mitigating controls that you put in place, further reducing risk to a residual level that’s most meaningful to you. We’re helping to pull all that together, using these more advanced analytical and data management perspectives, to come up with a triangulation of those seemingly disparate data points.
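
As a rough illustration of that triangulation, here is a toy scoring function. The signal names, weights, and the risk-appetite threshold are all assumptions made up for this sketch, not Human’s real data science.

```python
# Illustrative only: the signal names, weights, and threshold are assumptions
# made up for this sketch, not Human's real scoring model.

def triangulate(identity: dict, infrastructure: dict, behavior: dict) -> float:
    """Blend the three signal families into one risk score in [0, 1]."""
    id_risk = 1.0 if identity.get("password_known_compromised") else 0.1
    infra_risk = 1.0 if infrastructure.get("ip_on_botnet_list") else 0.2
    # Deviation from this account's own historical pattern of normalcy
    behavior_risk = min(1.0, behavior.get("deviation_from_baseline", 0.0))
    # Weighted blend; a production system would learn weights from data
    return 0.4 * id_risk + 0.3 * infra_risk + 0.3 * behavior_risk

RISK_APPETITE = 0.5  # above this, apply a mitigating control

score = triangulate(
    {"password_known_compromised": True},  # identity signal
    {"ip_on_botnet_list": False},          # infrastructure signal
    {"deviation_from_baseline": 0.7},      # behavioral signal
)
action = "step-up challenge" if score > RISK_APPETITE else "allow"
print(f"risk={score:.2f} -> {action}")  # risk=0.67 -> step-up challenge
```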

>> Craig Gould: You mentioned earlier that we would probably come back and touch on agentic AI real soon, and your comment really makes me want to go to that place now. Because, you know, if we are gatekeeping and identifying who’s human and who’s not, I guess up to this point we’ve kind of assumed that bots trying to do human things are probably bad actors. But now we’re in an environment where we’re asking AI to work on our behalf to do things. So how do we go about identifying good agents and bad agents?

>> Stu Solomon: You’re exactly right. Let me take a slight step back and then get into that. We have to live with the fact that we’re at this unique crossroads in history, and it’s a crossroads that frankly I don’t think anybody really fully saw coming until very, very recently. The reality is that more traffic than not will be bot-based activity on the Internet on a go-forward basis. And of that, there’s going to be a significantly higher percentage, upwards of 70 to 80 percent according to some studies, of agentic traffic versus human traffic on the Internet on an ongoing basis. Which leads us to a place exactly where your question goes: this is no longer a discussion of bot or human, with the presumption that human means good and bot means bad. The reality is you’re going to have to live with the bot-based activity, and you’re going to have to be able to ascertain, quickly and with a persistent level of trust, what is an agent that you want working on your behalf, what is an agent that is not working on your behalf, and what activities do you trust, authorize, and allow that bot to accomplish over time. That’s different than good and bad, because, by the way, you could have a certain level of trust, and it still represents risk. You could have a certain level of risk and therefore not want to provide trust. It’s going to be a multidimensional decision that has to be made over time, around how much your tolerance is and how much authority and autonomy you’re granting an agentic activity to accomplish on your behalf.

It boils down to a singular point, the point I very quickly glossed over a moment ago: this notion of trust. How do you establish initial trust that a bot will be the right bot, doing the right things, the right ways, the way you want it to? How do you have persistent trust? In other words, my point-in-time view may change over time as the agent takes more actions and adds more autonomous activity to its scenarios. And then, do I have trust in the outcome? Was it the outcome I actually wanted? Did it accomplish the task the way I expected it to? Did it deviate because it anticipated rather than learned? All of these scenarios mean that at the end of the day, this is no longer a decision of good and bad, but a decision of trust or not.
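
One way to picture those three trust questions, initial trust, persistent trust, and trust in the outcome, is as a small state machine. This is a minimal sketch under that assumption; the class, the states, and the five-task threshold are all hypothetical.

```python
from enum import Enum

# Hypothetical sketch of the three trust questions as a state machine.
# The states and the five-task threshold are invented for illustration.

class TrustState(Enum):
    UNTRUSTED = "untrusted"
    PROVISIONAL = "provisional"  # initial trust granted, narrow authority
    TRUSTED = "trusted"          # persistent trust earned over repeated tasks
    REVOKED = "revoked"          # deviation observed, autonomy withdrawn

class AgentTrust:
    def __init__(self) -> None:
        self.state = TrustState.UNTRUSTED
        self.successful_outcomes = 0

    def grant_initial(self, identity_verified: bool) -> None:
        """Initial trust: is this the right bot, set up the right way?"""
        if identity_verified and self.state is TrustState.UNTRUSTED:
            self.state = TrustState.PROVISIONAL

    def record_outcome(self, matched_expectation: bool) -> None:
        """Persistent trust: every outcome re-scores the agent over time."""
        if not matched_expectation:
            self.state = TrustState.REVOKED  # it anticipated rather than learned
            return
        self.successful_outcomes += 1
        if self.state is TrustState.PROVISIONAL and self.successful_outcomes >= 5:
            self.state = TrustState.TRUSTED  # widen authority gradually

agent = AgentTrust()
agent.grant_initial(identity_verified=True)
for _ in range(5):
    agent.record_outcome(matched_expectation=True)
print(agent.state)  # TrustState.TRUSTED
```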

>> Craig Gould: Okay, so that brings to mind another question, and hopefully I can get that question out of my mind in a way that’s understandable. I’ve heard you talk before about a vision where, instead of trying to identify bad bots and bad actors, we move to creating a so-called envelope of trust around users, by identifying users and understanding their behavior, understanding who you are as a user. I think of it almost as a behavioral biometric: these are the things this user would do; this is an anomaly for this user. And that’s something you can carry from entry point to entry point. I guess my question is, I think I understand that in the human sense, but if I am spending half of my time as a human making these decisions, going to these environments, maybe I have an agent using my logins, or I’m asking my agent to pose as me to do these things. How do we reconcile that into one personality profile for these users going forward? Does that make sense?

>> Stu Solomon: It does make sense. And immediately, as soon as you start talking about a persistent digital identity, anybody who spends any time in this field starts thinking privacy concern, privacy concern, privacy concern. And rightly so. This isn’t about building a profile for Craig. It’s about building a profile around user one, who is empowered by Craig to go and accomplish things on his behalf, with autonomy and authority to act. It’s more about that triangulation again, of infrastructure, data, and behavior, to complete a transaction of some sort consistent with the outcome that you want. By the way, it has a history and a pattern of behavioral outcomes. Did it use infrastructure that was known bad to transmit bad things? Did it use credentials that were known to be compromised? Did it use behavioral patterns and traits that were consistent with organized criminal activity? You start to see scenarios where it triangulates around a cluster of behaviors. Those behaviors are indicative of a grouping. And then how much permission do you, the human being, assign to that cluster or that grouping to act on your behalf? That’s the way I would generally think about it. It starts to create a layer of obfuscation between the actual human being, who you want to protect, and the thing, the machine, that has been given the authority to transact on your behalf.
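
A minimal sketch of that “user one” idea might look like the following: a delegated profile that carries scoped permissions and a behavioral history, but no direct profile of the human. The class name, fields, and limits here are hypothetical.

```python
import secrets

# Sketch of the "user one" idea: a profile empowered by the human, not a
# profile of the human. All names, fields, and limits here are hypothetical.

class DelegatedAgentProfile:
    def __init__(self, allowed_actions: set, spend_limit: float) -> None:
        # Opaque identifier: the obfuscation layer between the person
        # and the machine acting on their behalf. No PII lives here.
        self.agent_id = secrets.token_hex(8)
        self.allowed_actions = allowed_actions
        self.spend_limit = spend_limit
        self.history = []  # behavioral pattern accumulated over time

    def authorize(self, action: str, amount: float = 0.0) -> bool:
        """Permission check scoped by the human who empowered this agent."""
        ok = action in self.allowed_actions and amount <= self.spend_limit
        self.history.append((action, "ok" if ok else "denied"))
        return ok

# The human scopes what the cluster of behaviors may do on their behalf.
shopper = DelegatedAgentProfile({"search", "add_to_cart", "checkout"}, 100.0)
print(shopper.authorize("checkout", amount=250.0))  # False: exceeds the limit
```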

>> Craig Gould: I thought it was interesting, I’ve heard you say before something along the lines that these bad actors also have limited resources, right? And so, by the nature of them having limited resources, we know that they want to get the biggest bang for their dollar, which allows us to understand what type of transactions or accounts they would want to target, because these are the ones that would have the highest return. Right? Can you talk about how you go about trying to play this game of cat and mouse with the black hats?

>> Stu Solomon: Yeah, absolutely. First, the core premise of the discussion really does focus on how you impose cost, or slow down, or add friction to adversarial behavior. And I absolutely agree: in most cases, logic tells you that a criminal is going to go after the biggest payday with the least amount of effort and the least amount of risk. But what’s really interesting is that over time we’ve found, particularly with automation and now with the advent of AI, and particularly generative AI, that barriers to entry are significantly lower than they’ve ever been for individuals with nascent skill to create an outsized impact or outcome. Which means you’re not always going to see people going after just the most important targets; they’re going to go after what they can. The scaling ability associated with all of the generative tools and as-a-service tools that are available, frankly, that you can get your hands on if you just look in the right places, creates the ability for people who would never have dreamed of this before to give things a shot and just try it out. So there’s an element of that.

And as you’re playing that cat-and-mouse game, the notion that an attacker and a defender are on an equal playing field is simply not true. Attackers inherently have an advantage right away, because they have first-mover advantage. They make the first move, they develop the first tool, they find the first vulnerability, they look for the weakest link in a chain. As a defender, you’re constantly trying to, one, gain visibility into their thinking and their capabilities, and two, anticipate their next step. So the real opportunity is: how do you shorten, shrink, the time between the dynamic nature of an adversarial activity and a defender’s ability to deploy either signatures or observables to identify that action? It’s a game of cat and mouse, but it’s also a game of thinking and acting and understanding what impacts could be created, by whom, and how, and then building your detective and defensive controls to avoid that impact, rather than just to identify that new tool or that new technique.

>> Craig Gould: I’m sure your college education included a good deal of military history. And it just so happened last night I was reading about the Maginot Line, and how the French spent, what, 15 years basically building a series of walls and forts, many, many fortresses, all along the border with Germany after World War I, because they didn’t trust that this wasn’t going to happen again. They figured that they could create this line all across their border, and that they wouldn’t necessarily have to worry about the north, because it was so heavily wooded up next to Belgium that surely that’s impenetrable. But by the time 1940 comes around, the ability for armored tanks to navigate through the woods in that one spot just laid that whole strategy bare. It’s not unlike the Great Wall of China. It’s not about the size of your wall; it’s about that one weakest point.

>> Stu Solomon: That’s right. It’s about the weakest point. It’s also about the impact. So, equating it back to a cyber scenario: an attacker has a finite number of tools to create a finite number of impacts, taking advantage of a finite number of vulnerabilities. Now, when I say finite, I think that’s being appropriately dismissive. There are obviously many, many vulnerabilities and many, many activities that can exploit them, but there are only a few impacts that can be created. You’re going to steal information, you’re going to gain access to exploit later, you’re going to steal money, you’re going to disrupt an ability to deliver or operate. There are only a number of very specific impacts that can actually be created. So you work your way backwards. If I were, you know, the French now, what would I say? I’d say: what do we need to do to prevent the land on the other side of this from being taken? Not just slow them down; what happens if they flew over and occupied? Would I build my defensive line differently if my real goal was to avoid occupation, a finite, specific impact that could be created? And then you work backwards from a defensive control perspective. It’s actually a technique that most threat modeling experts would talk about as well. It’s that same idea: once you understand what impact could be created, you can say, all right, what are the paths that could create it? What is the likelihood and severity of the impact if they went down that path? And then, where do I put my controls? It’s the same logic that goes into looking at and preventing fraudulent scenarios or malicious actors.
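
Worked backwards, that impact-first threat model might look like the following toy example. Every impact, path, and number here is invented purely for illustration; the point is only the ordering logic: score each path by likelihood times severity, then place controls on the riskiest paths first.

```python
# Toy impact-first threat model: enumerate the few impacts an attacker can
# create, the paths to each, and rank control placement by likelihood x
# severity. All values below are invented for illustration.

impacts = {
    "steal_information":  {"severity": 0.8,
                           "paths": {"phishing": 0.6, "sql_injection": 0.3}},
    "steal_money":        {"severity": 0.9,
                           "paths": {"account_takeover": 0.5, "bot_fraud": 0.4}},
    "disrupt_operations": {"severity": 0.7,
                           "paths": {"ddos": 0.5}},
}

# Work backwards from impact: score every (impact, path) pair, then put
# mitigating controls on the highest-risk paths first.
ranked = sorted(
    ((spec["severity"] * likelihood, impact, path)
     for impact, spec in impacts.items()
     for path, likelihood in spec["paths"].items()),
    reverse=True,
)
for risk, impact, path in ranked:
    print(f"risk={risk:.2f}  control the '{path}' path to '{impact}'")
```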

>> Craig Gould: So do you feel like your military background helps with that sort of logical thinking?

>> Stu Solomon: Most definitely. Most definitely. Look, at the end of the day, I’m a military guy through and through. Even though I’ve been working in the civilian workforce full time since 2005, I’m still in the Air National Guard. I’m still proud to wear the uniform and serve. And again, just like the anecdote we shared earlier around law school, it’s just as much about continuously adjusting the lens through which you consume information, ask questions, and learn as it is about executing against a specific activity. But I certainly strongly credit my military background with helping me transition into the executive that I’ve been able to become.

>> Craig Gould: Do you feel that the different branches have a little bit different logic? You were trained in something that’s not necessarily a ground game, right? I mean, when you were talking there about how France would defend themselves today, you proposed an attack that came from the air.

>> Stu Solomon: So I guess this is where I insert an Army joke. But I wouldn’t do that. I wouldn’t do that. So instead I’ll talk about the Navy. No, but in all sincerity, I recall in my military training back at the Air Force Academy, I distinctly remember in the military studies classes looking at Warden’s concentric circles: how to attack a hostile government, and specifically looking at the different nodes of power that could be attacked in certain ways to ultimately create the breakdown between a government’s ability to maintain command and control of its power and military forces, and its ability to maintain a legitimate position in the minds of the populace that it’s been elected or anointed to govern and provide safety and security for. And thinking about how air power was used in Warden’s theories to create those kinds of impacts. Those lessons, literally more than 30 years later, still stand with me today.

>> Craig Gould: With that in mind, can you tell me about how human works with government? Because it’s not just the private sector. I mean, governments are also concerned about bad actors. I imagine that’s also a big part of what you guys do.

>> Stu Solomon: Yeah, absolutely. So I won’t speak specifically about how Human does it, but more of a general conversation around public-private partnership, which, as you’ve introduced, I think is a really critical part of the cybersecurity conundrum, frankly. The notion that a lot of cyber activity is classified at the government level, because it occurs against classified targets in classified systems against classified adversaries, is frankly a little bit of a quaint notion. The reality is cyberspace is the interconnecting component that brings society together; it brings land masses together, and it brings people and governments together. And so the ability to freely transact across the board means that you can see that activity, you can interface with that activity, you can start to disrupt that activity. And that happens at a pace where, frankly, the innovative capability of the private sector outpaces the ability of the public sector to do it in a rational way. Which means governments actually learn a lot more from industry than industry learns from government in this particular and unique scenario. What’s really incumbent on us is, number one, to recognize that, and then, number two, to bring together both sides to share tools, share information, share outcomes, and ultimately protect the nodes of power that everybody needs to protect for society to continue. Conversations like this always lead down that path, and I think that’s a critical one to point out.

>> Craig Gould: You know, as a technology company, you either innovate or you die, right? How do you drive innovation? How do you maintain curiosity? How do you encourage your employees, from a culture perspective, to continue to strive for innovation? I mean, do you subscribe to a team that’s just thinking about innovation strategically, as a skunk works, or is it across the organization, where you ask everybody to be thinking about how we can do this better?

>> Stu Solomon: I’m uniquely positioned, as a technology executive, to be the luckiest guy in the world, because technologists are inherently curious already, and innovative already. So my job is actually not to screw that up, and to find a way to foster it. It’s really funny. I’ll give away a little secret for anybody that ever interviews with me in the future, and certainly anyone who’s interviewed with me in the past knows this: regardless of what the position is, if you get in front of me to interview, whether you’re in an engineering role, a data science role, or a sales role, I’m still going to ask the same question. And the question is always: tell me about how you set up your home network. And honestly, I couldn’t care less what they answer, but how they answer it. Inevitably, every single time, somebody gets a huge smile on their face and they geek out right away. And if they’re geeking out right away, I know that they’re a technologist at heart. Their innovative spirit comes through, because they’re going home at night learning more and experimenting more than anything I could ever challenge them with at work. And that’s when you start to see kind of the core DNA that you want to bring into your company. It’s that innovative spirit. So that’s the first part. The second part: innovation happens, particularly on the technology front, when you surround smart people with other smart people and you give them hard problems to solve. It’s amazing how they challenge each other and thrive on that energy. And then I’m just the lucky guy who gets to put a business wrapper around it and take it to the market.

>> Craig Gould: What do you see as the role of the CEO? Is it maintaining the cadence? Is it the allocation of capital? Is it making sure that people understand the culture? Where do you focus your day-to-day activities? Where do you see the biggest contribution that you can provide as a leader in the C-suite?

>> Stu Solomon: It’s a difficult question to answer, precisely because it’s so many things. The way I look at it is, you always have a responsibility to create shareholder value. You always have a responsibility to be fiscally responsible. You always have a responsibility to hire well and to make good decisions. Those are table stakes; that’s what allows you to wake up in the morning. But ultimately I think it’s really twofold, and the two are very much related. The job is really to define and set that North Star, and to help inform people and align their activities day in and day out, even if they don’t know how they connect back to ultimately reaching that North Star. And that’s very much aligned with the second piece, which is, if your job is to get your teammates and your people there, your other responsibility is to your people. I talk about this in our newcomers’ orientation every single month, and I firmly believe this: my job, my responsibility, is to make sure that I’m putting a roof over the head and food on the table of all the families that I’m responsible for on a daily basis. And that sounds a little bit corny, but it’s legitimately true. Because if you’re looking out for the people that are building and running the company day to day, they’re going to build a great product, you’re going to hit all your sales targets, you’re going to make the money and create the value that you need for your shareholders. And most importantly, you’re executing across a mission that everybody agrees with and aligns to.

>> Craig Gould: So looking back over your career, is there a single most important principle that you’ve carried into every one of these leadership roles? Do you have the outline of your first book, for when you retire, about your leadership principles? Have you identified those? Or is it that sometimes it’s so programmed into our DNA that we don’t even stop to figure out what they are? What do you think, off the top of your head?

>> Stu Solomon: First of all, I think it would probably be a very boring book. But if someone did want to read it, what I would say is that the traits I have found that have worked for me, and the traits that I look for in my C-suite in particular, are learning agility and an ability to deal with ambiguity. If you’ve got those things, or if you’re constantly curious and trying to get those things, learning agility and the ability to deal with ambiguity, you’re going to be a great executive. And I think that’s page one.

>> Craig Gould: Sure. So if I were a VP or an SVP, and I had a goal of getting to the C-suite, you’ve done that, you’ve been there; you now sit in the C-suite and you consider the qualifications of folks that are rising to that level. What advice can you give to someone to gird themselves, to make themselves prepared, to establish a moat around their qualifications that allows them to be perceived as ready for that transition?

>> Stu Solomon: I think it’s the seeming dichotomy between the perception and the reality, which is that people want to be in the seat because they want to be in charge. That’s not actually the opportunity. The opportunity is that you now get to drive and make risk decisions that ultimately lead to the mission outcome that you’re trying to accomplish as a company, as an organization, as a person. It’s less about being in charge and more about being able to take more control over the destiny. And that’s a very, very fine distinction between the two. I would offer that that’s usually the hardest leap for people to make when they go from kind of that SVP level into the C-suite level.

>> Craig Gould: How do I establish that? Is it asking for more opportunities to show what I can do with a little, so that I can be given more, and hopefully a lot, eventually? Is it somehow demonstrating how I can operate within a level of risk? I imagine part of it’s also the ability not only to work autonomously but also to manage a team within that autonomy. Right?

>> Stu Solomon: Certainly there’s a dynamic. We haven’t talked about this much today, but you’re right: there’s certainly a dynamic of leadership and management, and there’s certainly an element of teamwork. But I think what stifles people on the way to the very top isn’t those things. It’s the ability to manage themselves and their own expectations. The ability to act and make risk-based decisions is, I think, ultimately the key. And then being decisive, decisive and thoughtful, not getting caught up in the decisioning cycle or loop too much, and then honing your skill and realizing, wait, maybe I don’t like this. Or maybe it turns out that’s a little too much pressure. I think that’s a really important part of it. So yes, if you can’t manage a team and manage individuals, you’re not going to get to the level where you’re considered in the first place. But it’s very, very lonely at the top. When you get to the top, it’s really about how you manage yourself effectively.

>> Craig Gould: You know, it reminds me, I heard in a conversation one time where a person was speculating about just how many unhappy NBA players there are, because some people just happen to be born seven feet tall. Right? And so just because you were genetically gifted that way doesn’t mean that you ever really wanted it, or that you were passionate about the game. Right? And so I guess part of what I’m hearing is, just because the C-suite is the top doesn’t mean that everyone is necessarily cut out for the top. They may be better suited to thriving in a different role.

>> Stu Solomon: Yeah, and that’s a difficult concept, because it sounds very elitist when you look at it that way. And that’s certainly not the intent of my comment. It’s not, well, there’s only room for a few people at the top. It’s that it turns out you might be able to be more effective and get more done in other portions of the organization. I’ll give you an example. I had somebody ask me the other day how much I enjoy the CEO role. And the answer is, you know what? I probably enjoyed being a COO a lot more, because I was a little bit more operational, a little bit closer to the action day in and day out, a little bit more likely to jump into the details in a thoughtful way. Instead, now, the skill of the CEO is to take lots of data, rapid fire, without a lot of context, and have to make decisions quickly. They’re just completely different roles, with completely different responsibilities and different outcomes.

>> Craig Gould: What is the future? What is the future for Human? What’s the future for you? What can we expect? In college I enjoyed studying futurists, strategic planning, scenario planning, you know, and they would always come up with these scenarios that are like 10, 15 years out. And it feels like our horizon is so much shorter now, because technology is growing so fast. What certainty can we depend on? What can we safely assume is going to happen over the course of the next five years, and how should we prepare ourselves?

>> Stu Solomon: So I’ll go back to the theme that we spoke of earlier. The reality is that in an interconnected digital world, particularly one that’s driven by e-commerce, the likelihood that a human being is the one making the transactional behavior at the end of every single transaction is going to continue to rapidly decline. And as it declines, we’re going to have to create a scenario where we give more trust to things that we just haven’t trusted in the past, things that are inherently untrustworthy in kind of our common nomenclature and activities today. That is, agents. As those agents are granted more trust, we’re going to have to put guardrails around them and continuously watch as they evolve. Now the question is, will they evolve faster than our ability to control them? I’m not going to get into a Terminator 2 moment this late in our conversation, but I think ultimately that is the necessary component of this discussion that we’re going to have to look at: at what point do we put up guardrails, and where do those guardrails take us? And by the way, the guardrails that we create today aren’t necessarily the ones that we’re going to need a year from now or two years from now. So how do we create a dynamic nature to the way that we build our control environment? That’s now, that’s today, that’s not tomorrow. But it’s going to continuously reshape the way that we interface with our digital world for the next five to ten years.

>> Craig Gould: Without a doubt. When we talk about guardrails, I just find it really hard to imagine a world where every geography around the world, every country, can agree on having guardrails and sticking to them. It just seems... Yeah. So then it’s, how do you deal with the fact that some people have guardrails and some people don’t, and how do we deal with that threat?

>> Stu Solomon: Yeah, and look, as I mentioned earlier, I really feel like the Internet, in and of itself, is an ultimately democratizing scenario. It breaks down borders, it breaks down geographies, it gives people an equal view, an equal vote, an equal footing, because it grants access. So I think the real conversation is how you ensure that that access is persistent for everybody to use, so that it becomes the ultimate leveler of society. That’s going to be an increasingly difficult scenario, especially as agentic behaviors and AI scenarios start to necessitate local decisions around what the governance framework for them looks like, and what regulatory and legal frameworks look like. And I feel strongly that if we leave it up to single countries, we’re going to miss the boat. This is a global, transnational discussion that needs to be held.

>> Craig Gould: Well, just one more thing to keep me up at night. Stu, this has been a great conversation. I really appreciate your time, and, man, I really appreciate you being my guest this week.

>> Stu Solomon: Craig, really, thank you so much. This was super fun, and I loved it. So thank you.

>> Craig Gould: Awesome.