Full Transcript: Kathryn Hume on the Product Love Podcast

Published Aug 30, 2018

This week on the Product Love podcast we revisit the episode with Kathryn Hume, the VP of Product and Strategy at Integrate.ai. Like most product leaders we’ve featured on the podcast, Kathryn’s journey to becoming a product manager is definitely an unusual one.

Kathryn first earned her Ph.D. in comparative literature before realizing that she didn’t want to become an academic. When she moved on to the startup scene in San Francisco, she discovered her passion for machine learning. Her academic background in literature and mathematics made product management a natural fit for her.

Kathryn and I spoke back in April about everything from robotic surgeons to how the best product manager should be like Sherlock Holmes, and even cracked an AI joke. I am now happy to share a lightly-edited transcript of that conversation. Whether you prefer audio or love reading, I hope you enjoy it!

You can check out the original post here and stream the audio version here or subscribe on iTunes today.

Eric Boduch: Welcome lovers of Product! Today I am here with Kathryn Hume, VP of Product and Strategy, at Integrate.ai. We’re going to spend a little bit of time talking about her background in product, why she’s passionate about product, and how artificial intelligence and machine learning are going to affect both the profession and craft of product management. We’re also going to be talking about how product people should be delivering their software services to their customers. With that, Kathryn, why don’t you start by giving us a little overview on your background?

Kathryn Hume: For sure. Thanks so much for having me, Eric. My background and journey into product are somewhat strange. I started as a math major in undergrad and then did my Ph.D. in comparative literature, focused on the 17th-century history of science, math, and literature. I was planning on becoming an academic until I realized there were not a lot of jobs and I didn’t want to end up working in Nebraska going from postdoc to postdoc. I was out of Stanford and in the heart of Silicon Valley, so I decided I wanted to try my hand in the tech community, and I worked at a couple of startups based out of the Valley prior to entering machine learning, which is now just my passion. I’ve been in the field for a couple of years, and coming from a joint background in mathematics and literature, the role of product, sitting at the center of the organization between the engineering team and the sales and marketing team, was a natural fit. It took me a little bit of time to evolve into it, but I’ve been happy since I’ve been in this position.

EB: So what in particular makes you so passionate about product?

KH: My experience since I have been working in the tech industry has largely been on the B2B side versus the B2C side, so a lot of my comments here are going to be grounded in that perspective. First off, I love working in early-stage companies, startups that have yet to find product-market fit. I’m just a glutton for all the ambiguity, hard choices, and hustling that it takes to get a company off the ground and try to align the technically possible with the appetite and needs of an existing market, and all of the work and dialectic and conversation that it takes to get there.

I’ve found in my experience that there are sort of two different paths. The first starts with the technical capability, where, especially working in machine learning and AI these days, there is just very rapid progress taking place on the academic side every day. There’s this website called arXiv, which is a mainstay for academic researchers trying to understand the latest and greatest in technology; as opposed to publishing peer-reviewed research papers these days, a lot of the academics just publish their work there. There will be amazing new things that are possible, and some of those things are ready for primetime and commercialization. I think that first step of going from what’s made its way into a research paper down to something that you could reliably deploy and scale within a product is interesting, and it’s changing very quickly within the machine learning world. One of my favorite examples is image recognition: the ability to show a computer an image of a cat and an image of a dog and have it automatically recognize, without any mediator, without any text, that one image is a cat and one image is a dog. That capability was only barely possible in 2014-2015 when I started my career in machine learning, and by now Google has already open-sourced it as a tool that you can download on the internet. So it’s gone from a possibility to a fully commoditized capability in three years. That doesn’t mean it’s reliably deployed to solve real people’s problems.

You know, I just love going along that pathway from a nascent application to something that is actually widely used and adopted. The flip side, in some of my experiences on the B2B side, is starting off with an idea and a market need, and then aligning the technical capabilities that can really solve that need. A market-led strategy poses a different set of problems for developers, who then have to work a lot harder to find the underlying general requirements across a set of disparate individual customer needs, and then the right technical solutions to solve them. Different personalities and different types of motivations will align well with the second category. I think it takes a lot of self-awareness for a product manager, as well as the people across engineering and sales teams, to figure out which set of problems is most inspiring for them.

EB: So you talked about one past trend. I can see cat or dog, I’m not sure how commercially viable that is but I’m sure it’ll get quite a lot of clicks and uploads of photos. As far as trends for the future that affect the craft of product management, what do you see coming up that we all should be aware of?

KH: So in terms of things that are commercially viable, my boyfriend actually used to work for a company that did porn detection. So actually, I think there’s a Silicon Valley episode about this, maybe in the first or second season where they used these neural network image recognition tools. They started wanting to do porn detection but ended up actually doing hot dogs.

So these companies actually exist and presumably have great market potential, because there’s certainly a lot of porn on the internet. In terms of the ways in which AI is shifting the craft of product management, the most impactful shift to me relates to a core philosophical shift in how these products are built. A lot of the web apps that we know and love were built upon a standard deterministic software development paradigm, where it’s possible to plan and scope using either a waterfall or agile product development methodology. You know how long it takes to build something: sometimes the code can be relatively complex, so you know it’s going to take longer, and sometimes it’s just a short add-on to an existing set of capabilities. When we shift into machine learning, the wisdom of the crowd is that suddenly the programs program themselves, that there’s no code required and there are just machines that get smarter and smarter over time. That’s a little bit of a pipe dream; there’s an interesting research domain of automated machine learning that might get us to that phase sometime way off in the future.

These days, a lot of it is about designing a really great scientific experiment. Start off by looking at which strategic problem data might help a company solve. Design and scope down an experiment to first see whether or not it’s even possible to build that feature or ask that question in the first place using data. Second, start to do some statistical analysis to see whether there’s actually a signal in the data to solve the problem. And lastly, design the experiment itself: sometimes it can be an A/B testing methodology, sometimes it’s a different type of controlled experiment, to see if, using this data and defining some sort of outcome, you can actually achieve the goal you’re seeking to achieve or disprove the hypothesis you’re seeking to test. That scientific method requires a slightly different management strategy than the tricks of the trade that agile and even waterfall product management have developed, and it requires a different way of scoping, because a lot of things fail.
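As a concrete illustration of the experiment design described here, below is a minimal sketch of a two-proportion z-test, one common way to evaluate an A/B test on conversion rates. The conversion numbers are entirely made up for illustration.

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B experiment.

    conv_a / n_a: conversions and sample size in the control arm,
    conv_b / n_b: same for the treatment arm.
    Returns (z statistic, two-sided p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# hypothetical experiment: 12% vs. 15% conversion on 1,000 users each
z, p = two_proportion_ztest(conv_a=120, n_a=1000, conv_b=150, n_b=1000)
```

A small p-value (conventionally below 0.05) suggests the treatment really moved the metric; in practice you would also pre-register the hypothesis and pick the sample size before running the test.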


So you have this great idea and it seems amazing. At first pass, it looks like the data is structured in a way that might support the hypothesis, but then once the machine learning scientist looks through the data, they realize that there’s just nothing there. There are all of these very rapid adaptations that need to take place at the beginning of the product cycle to make this work. There’s also a very different type of liaison between the business side and the technical side that needs to recur, because the machine learning scientists don’t just go off on their own to find patterns in data, or have the machine find patterns in data, that may be meaningful.


The best products are actually built in tight alignment with somebody who knows a lot about a particular business process or domain. It’s also about having a meaningful conversation with the machine learning team to help them process the data, work with it, and ask the right questions so that they can lead to something meaningful. Just to sum that up: a lot of the basic structure is the same, whereby a product manager is going to work as a middleman between a user’s need and the engineering team’s backlog, but there is a midway step that introduces much more uncertainty than standard product management paradigms.

EB: Let’s step up a little level and talk about business use cases. You know, specifically at the enterprise which is an area that we have experience in. What interesting uses of AI and machine learning have you seen in the enterprise?

KH: So I’ll talk a little bit about our particular perspective here at Integrate. We are working with large consumer enterprises that have customer bases in the millions and tens of millions, and helping them shift from their traditional marketing stack, where they might do some segment-oriented targeted marketing: pick out some demographics, some section of the population, and try to target product offerings and messages in a way that would resonate with their assumed personality. But often those segments are fundamentally based upon rules, and a person has to go in and update those rules as they learn about campaign effectiveness over time. There’s a lot of tooling around analytics that can be used to inform judgments on what the next campaign looks like to get better success, but it’s not really a machine that runs on its own. What we’re trying to do is shift this around to true dynamic lifetime value optimization, where when a customer comes in on day one, there’s some look-alike-type mapping that goes on to gauge the extent to which they’re like successful historical customers, to forecast their predicted lifetime value and how long they might stay, and to predict the next best action that will be relevant for their wants and needs. Using this machine learning apparatus, we get a lot of feedback on what people are actually responding to, so that it can dynamically optimize to their wants and needs over time. For me, what’s most exciting about that is less how it becomes a sort of automated marketing stack and more that it leads to a tighter relationship between product and marketing.

The founder of our company came from Facebook and is obsessed with this “Aha!” moment within a customer’s journey, where we interpret the “Aha!” moment as a set of early actions in a customer’s journey that tend to be tightly correlated with high lifetime value. If you notice that, you can focus all of your efforts on getting that one early action to fruition. The example from Facebook is the epochal ten friends within fourteen days. The lifetime metric that they want to optimize for is daily active users, which is super hard to measure and render trackable. What they noticed in doing some analysis is that if you get early users to 10 friends within 14 days, that tends to be tightly correlated with the long-term metric that they’re tracking. So all of the effort went into getting them there: send them emails, suggest friends, do everything to meet that one small metric. We are working, in the construction of our platform, to try to replicate that kind of tooling for a large enterprise. If somebody gets engaged and is interacting with a brand early on, that often is correlated with their perception of value in the service, so I think there’s something altruistic in the way in which it can perform in the product.

Other stuff that I’ve seen that is super interesting: the most interesting project at my last company, Fast Forward Labs, an AI research lab in New York City which was recently acquired by Cloudera, was with a medical device manufacturing company that was making a robotic surgeon. It was a completely automated device that would perform prostate cancer surgery on actual patients without the intervention of a human. That was the long-term goal.
We were brought in early to do some video analytics work to automatically identify four key instances in a prostate cancer surgery procedure that tended to be the highest risk. We were automatically identifying them using our machine learning tools and guiding the human users of the tool on how to behave in these risky areas. The system was collecting and logging all that data to put it on the path to future complete automation.
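The “ten friends in fourteen days” analysis described above boils down to checking how strongly an early action correlates with a long-term engagement metric. A minimal sketch with entirely invented user data:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# hypothetical users: did they reach 10 friends in 14 days (1/0),
# and how many days were they active in their first year?
reached_milestone = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]
active_days       = [310, 280, 40, 295, 60, 25, 250, 90, 330, 55]

r = pearson(reached_milestone, active_days)   # strongly positive here
```

A high correlation like this is what would justify making the early milestone the metric the whole team pushes on; in real use you would also worry about confounders and check the relationship out-of-sample.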

A funny side story: the hardest part of the project was that the data was disgusting. My machine learning colleague and I were first sent a sample of the data, and it was these weird bloody photos of men’s prostates, and we were like, “Ugh.” The technical problem we faced had nothing to do with how hard the artificial intelligence was. It was just that we kind of wanted to vomit when we looked at those videos.

The last thing I’ll mention is that I think the use of AI in the professional services world, in law firms, accounting firms, auditing firms, is quite transformational and impactful. It’s all of the people side of things, the regulations and fear of change and all of the obstacles to innovation and embedded inertia that make adoption hard, that is slowing the path to legal services being a different business than it was in the past. There’s a lot of repeated text and content and work that’s done in these environments. They are also environments where, unlike, say, a self-driving car, there’s not a ton of improvisation, and the work need not be carried out in a second with a sort of instantaneous judgment. It’s stuff that can be done in a couple of hours or even a couple of weeks, and that kind of long time frame, plus repeatability, plus huge amounts of past historical data of the right response, is sort of a perfect recipe for automation using machine learning and AI. I think this shift from a billable-hour model to something where customers are going to expect repeatability and consistency is really going to have a big impact on those industries in the future.

EB: So let’s talk a little bit more about the robotic prostate cancer surgeon. Now I can imagine that people would be a little apprehensive about that. There’s a lot of apprehension about self-driving cars, but it’s one thing jumping into the car and then having an automated system drive you. There’s a certain degree of apprehension about that. It’s a whole other thing, I would imagine, thinking that someone that is actively cutting you. Your life, in essence, is in the hands of an artificially intelligent robot. What did you see from that perspective?

KH: That’s a great question. We were so early in the product development process that I did not see a lot of data on this actually being used in a customer environment. Today there are five levels of autonomous vehicles, ranging from autonomy coming into play when you’re parking your car, to the point where it’s autonomous most of the time but you still have a driver there just in case, to the point where it’s fully autonomous. It’s sort of on that maturity pathway with robotic surgeons too. We were only at the stage where there is still a human operator; it was just assisted by the machine learning tool. In the discourse these days, you talk about augmentation versus automation. I think the stage we were at was certainly augmentation, where the machine learning algorithms were setting off alarms when the human operator of the tool was reaching a place of high risk, which I think would add further confidence, as opposed to the concerns about liability and allocating actual autonomy and agency to the machines.


Going forward, that’s a great question, and it’s funny, when I think about the use of machine learning in medical contexts, I’ll often go back to this notion of probability versus certainty, and causality. There’s all this discourse these days about whether or not a machine learning algorithm is explainable. It might come out with a prediction and say, “Here’s your lung; we are 95% certain that you don’t have cancer.” The natural question that people ask is, “Okay, so why? What did you see? What was in the image of my lung?” A human may or may not actually know the answer to that, but normally we can at least rationalize our response. One of the big issues with machine learning is that the deep neural networks powering a lot of the new breakthroughs are so complex that it’s impossible for them to articulate why they’ve made a decision. It’s just a correlation between a bunch of data points that tend to be statistically associated with a cancer diagnosis. So that tension, where they don’t explain themselves, is, I think, a huge social barrier to the adoption of these tools that we’re starting to see and are increasingly going to see.

EB: Yeah, isn’t there a joke about a neural network crossing a road?

KH: For sure, yeah! A friend of mine who works at Reefing Robotics in Boston was the one who first introduced me to it.

EB: Wanna tell our listeners since they may not have heard it?

KH: Yeah, for sure. It’s a standard “why did the chicken cross the road” joke. So why did the neural network cross the road? The answer is, “We don’t really know, but he sure as hell did a good job doing it.” That’s kind of what AI is today. It’s like: not sure why, but shit! That’s fantastic.

EB: Absolutely, absolutely. Looking into the future, how do you see AI enriching people’s lives?

KH: Well, I think to a certain extent this depends on what people value. For each individual, the axes and dimensions through which their life is enriched are particular to what they value. For me, I find some aspects of personalization to be a little creepy: taking data across various contexts and cobbling all that together to target some ad. But I think there is some incredible value in personalization that might be enabled in the next five to ten years. Google, for example, is working on updating the hardware on their Android devices so that each Android is going to have its own graphics processing unit (GPU), which is the type of hardware that supports the latest and greatest in AI.

Since the hardware is pushed out to the device, it can get to know each individual on an individual basis. So it’s the device that becomes personalized, as opposed to the apps and the software. I think there are some privacy-protecting guarantees there, where Google doesn’t actually need to collect all of your personal information in order to provide you with smart products. It can just do all that processing locally and sort of have best practices that are shared across populations with the company as a whole. It’s a pretty big shift in how we think about privacy. What’s cool about those capabilities is that they enable wonderful personalization, where you can be introduced to experiences and discoveries you wouldn’t have otherwise. I travel a lot for my job, giving talks in different places around the world. Take the example of flying to Berlin, only having 24 hours, and wanting to really optimize my experience without having to do a lot of research work in advance. I’m too busy to scan the internet and read through travel guides. So I would love to have a device that knows me well enough to suggest some really cool avant-garde theatre nearby the conference where I’m giving a talk, and a great sushi restaurant right nearby, so it’s really aware of my tastes and preferences and can help me make the most of the short time we have in life.

Going back to fears, as we talk about AI adoption in the workplace, people are really concerned about job loss and the future of work and machines automating away a lot of the work we do today. I tend to believe that we can’t yet imagine some of the new work that the use of this technology is going to create. A lot of the tasks that it can automate are those that tend to be repetitive and tend to require less creative thought, strategic thought, and synthetic thought. There might be a great shift in what our culture values, moving towards what generalists can provide and the emotional side of things. It would be a wonderful world if, fifteen or fifty years in the future, the scales tipped for teachers and nurses and those who have more soft skills, so that those are the types of roles in society that are most valued, because technology can automate away some of the more quantitative work.

EB: So pulling you back to product managers, how do you think product managers should think about integrating AI into their product offerings today?

KH: So first off, I think not every product should have AI integrated into it. There’s a ton of hype around the space and I think just about every company out there is thinking about what this means, and how they incorporate or integrate AI. I think there’s a lot of functionality that can be accomplished using rules or deterministic systems and still provide a lot of value to users.

The first thing product managers should ask is if AI is really required, or if they would be better served by sticking with a different type of technology stack. The next is to really align the use of artificial intelligence with the problems that a company is trying to solve. I like to think about machine learning products as solving one of two types of problems. There are operational problems, the kind of stuff we do at Integrate.ai, where it’s less about what product is being offered to customers and more about getting people to engage more, whether by reducing churn or getting more customers in. This is less on the product side and more on the business ops side. There are all sorts of opportunities to use data in creative ways, to engage with larger customer bases and optimize those kinds of relationships. When it comes to the actual product, all of this starts with the data that is available to train the algorithms. So sort of the first pass for product developers is to think about what kind of unique and proprietary dataset they might be able to amass based upon the ways in which users engage with the product.

If I could make a change as a product manager in the next couple of weeks that requires not a lot of planning and has incredible leverage and differential impact, it would be to add some sort of widget or pixel onto the product to start to collect a new sort of data around what users are doing. With that data, the sky’s the limit. The big breakthroughs in terms of new developments in the research community deal with being able to use richer types of datasets than we were able to use historically, like images, text, and video. Moving the mindset from needing structured data points to make any sense with algorithms, to thinking about the types of capabilities that are available with images and text, is where there’s really some watershed opportunity. And speech, as well. I didn’t even include speech in the mix.
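As a sketch of the kind of lightweight event collection suggested here, below is a minimal tracker that appends one JSON line per user action. The field names and event names are illustrative, not any particular vendor’s schema; a real “pixel” would post these records to a collection endpoint instead of a local stream.

```python
import io
import json
import time

def track_event(stream, user_id, event, properties=None):
    """Append one user event as a JSON line to any writable stream."""
    record = {
        "ts": time.time(),          # when the event happened
        "user_id": user_id,         # who did it
        "event": event,             # what they did
        "properties": properties or {},
    }
    stream.write(json.dumps(record) + "\n")

# simulate two events landing in an in-memory log
log = io.StringIO()
track_event(log, "u42", "page_view", {"path": "/pricing"})
track_event(log, "u42", "signup_click")
```

The JSON-lines format is deliberately dumb: it is append-only, easy to replay into a warehouse later, and imposes no schema decisions before you know which behaviors matter.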

The other thing is more of a front-end design principle than a back-end AI principle, but there’s all of the hype around conversational interfaces and making interfaces more natural for human users. I think that in terms of a great product experience, there are all sorts of ways in which the front end can be modified to make it more natural for people to engage with products.

EB: One of the concerns that comes up on the other side of things is privacy. Artificially intelligent or machine-learning-based ad systems learn more and more about our habits, our behavior, our likely purchasing intent. Where do we draw the line on privacy? Now, on the digital side, we obviously heard about Facebook and the issues they’ve had most recently. I also hear from other people in the media industry that what Facebook knows about us is not nearly as much as some of the credit card and financial companies know about us. The digital side, obviously, is a lot more visible to a common everyday person. Going back to that thought on privacy, how do we allow artificially intelligent or machine-learning-based systems to serve us ads that are more optimal, more geared towards us, or actually do things in general, like the theatre recommendation you were talking about? They need to know a lot more about you to make that theatre recommendation. So how do we get this value from them and, at the same time, draw that line on privacy? Where do we draw that line?

KH: That’s a great question. There are all sorts of sub-questions; some of them are technical, some of them have to deal with legal structures relating to consent. I love Zeynep Tufekci’s article in the New York Times after the Facebook hack, where she basically says that a lot of the legal architecture and infrastructure set up there is based upon this notion that we as users come in, and by clicking some button, or just by means of using the service, we are consenting to whatever data privacy policy exists on the backend. It’s kind of a legal fiction at this point, and a lot of people in the privacy community have come to terms with that. We anticipate we are going to see some updates to regulations and thinking about privacy law in the next couple of years.

It’s already happening in the European Union with the new GDPR regulations that go into effect in about a month and a half, and it would be another podcast’s worth of material to talk about what those mean. Technically, the thing I’m most excited about is a new privacy technique called differential privacy. Go back to that example with the Android, where the device is going to be personalized and it’s really going to know me. What’s remarkable about that, and is a big win for privacy, is that it actually means Google never has to directly collect all of the personal data. It can still get smart about trends across customer bases without needing to house all of that data centrally. This kind of architecture is going to be sort of the wave of the future, where things become much more distributed, as opposed to this centralized cloud model where the big behemoth companies à la Facebook and Amazon are sort of housing everything on their own site.

What enables that is a privacy technique called differential privacy, which is the best fit for the machine learning world. In the old world, the value of data lay within the individual data point. Data was basically a flow of information, moving from place to place: it might be on our computer, we might pass it to our healthcare provider, or we might pass it to our bank. When we think about AI products, they really work at the level of statistical distributions, and so I, Kathryn, and you, Eric, just become one data point in a normal distribution, one of those bell curves. What’s interesting here is that the privacy game actually shifts. Think about being an individual, and someone being able to recognize us and pick us out of the hat. In the old world, the more data you had, the higher the risk that you would find something that could compromise the business, or find the individual. In this machine learning, distribution-oriented world, if there are a ton of data points in that bell curve, we get lost. We’re just one of many. Privacy is more strongly supported the more data there is. There are techniques we can use to basically add a little bit of noise and modify the math slightly so that it becomes impossible to reverse-engineer the statistical distribution to find the individual. It’s a little bit technical, so I’m happy to talk about that more. The moral of the story is that there’s this ironic, paradoxical twist: it might actually be the more data, the better when it comes to protecting individual privacy rights, as long as we add on additional technical features that can shift around the data to make it safe while also continuing to provide the value we need for our machine learning algorithms.
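The “add a little bit of noise” idea mentioned here is typically realized with the Laplace mechanism at the heart of differential privacy. A toy sketch for a private count query follows; it is illustrative only, not a vetted production implementation, and the ages are made up.

```python
import math
import random

def dp_count(values, predicate, epsilon, rng):
    """Differentially private count via the Laplace mechanism.

    One record entering or leaving the dataset changes the true count
    by at most 1 (sensitivity 1), so Laplace noise with scale 1/epsilon
    hides any single individual's contribution.
    """
    true_count = sum(1 for v in values if predicate(v))
    # sample Laplace(0, 1/epsilon) noise by inverse-CDF transform
    u = rng.random() - 0.5
    noise = -(1 / epsilon) * math.copysign(1, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

rng = random.Random(0)
ages = [23, 31, 45, 52, 29, 61, 38, 47]   # invented records
noisy = dp_count(ages, lambda a: a > 40, epsilon=1.0, rng=rng)
```

Any single answer is deliberately wrong by a little, but averaged over many queries or many data points the statistics stay useful, which is exactly the "more data, the better" twist described above: the noise needed to hide one person shrinks relative to the signal as the dataset grows.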

EB: That will be a tough story to sell to the public: the more data, the safer you are as far as being personally identified. It’s gonna conflict with people’s common sense. At the same time, common sense doesn’t always seem to be right. If you think about self-driving cars, I talk to people and they’re like, “I wouldn’t want to get into one until it’s 100% safe.” Google tells me that 1.3 million people die in road crashes each year. I’m thinking self-driving cars have better odds than a 16-year-old boy that just learned how to drive. People’s perceptions and expectations are gonna need to change. Any thoughts on that?

KH: I think this is the critical question of the age. You’ve hit the nail on the head. It’s all the things you’ve said: they’re paradoxical, they’re counterintuitive. It’s too bad we called it global warming as opposed to climate change. For example, I’m in Toronto and it was snowing yesterday, and that might actually be normal for Toronto, but as an American who recently moved to Canada, it certainly doesn’t feel like things are getting warmer. That’s all our individual subjective viewpoint. We don’t naturally think in terms of probabilities across a larger space. That’s the same issue with self-driving cars: as you pointed out, the technology will reduce deaths and accidents; that seems to be the consensus across the community. And yet that knowledge, that abstract probabilistic knowledge, is not as strong, not as visceral, as the image of seeing a car crash, and we seem to hold machines to higher standards than we do humans. I think it would benefit the human race if we were to think a little bit differently and reassess what technology can and should do: think of it not as perfect, but as fallible as we are, often replicating some of the mistakes we’ve made in the past because all these systems were trained on data, but nonetheless, if we design them right, a little bit better than we’ve done in the past.

EB: Some of us are a little more, to pull up an old Star Trek reference, more Spock guys than Kirk guys. We have logical thought processes or responses versus emotional ones to certain situations. You just look at the numbers and make a judgment based upon that versus an emotional response to things like an accident that was caused by a self-driving vehicle.

KH: Yep, for sure.

EB: So another subject, all together. We both work in the software industry and work in technology. What’s your favorite software product and why is it your favorite?

KH: I’ve found this one to be a hard question to answer. I’ve had some intuitions going into it and then I actually crowdsourced some other folks in my company to see what they thought. Multiple people said Spotify, and I was like, “Oh yes, I also love Spotify!”

I think it's a wonderful product. It's basically my number one tool at work. I work in an open-format office, the standard contemporary startup deal. There's a lot of noise and I have trouble concentrating, so I need to block out the rest of the world in order to get stuff done. Without fail, I listen to this track called "Thursday Afternoon" by Brian Eno, who was a minimalist composer back in the '70s. That's on constantly during the day. It's this great white noise that keeps me able to concentrate at work.

Alongside that, I find that their recommendation algorithms and the new music I discover week by week are fantastic. They've continuously added features that make it super easy for the user to engage with the system and make it better over time; a lot of products out there have the thumbs up and thumbs down, and it's easy to develop a habit. I know my friend Nir Eyal has been on one of your podcasts before. Just think about the excellence of simple design that nonetheless tightly aligns with function and need to make the product even better for the user. They've done an extraordinary job with it.

They also have some super cool machine learning on the backend. They developed their music library differently than Pandora, starting off with more of a collaborative filtering model: if Kathryn listens to this and Eric listens to the same type of music, we're going to use those affinities to suggest the next track. Since then, they've been at the vanguard of applying machine learning to make representations of the qualities of the music itself and align those representations with users' past listening history. Not only is it a great and useful product, it's actually one of the most sophisticated uses of machine learning that I've seen. So yeah, love it overall.
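[Editor's note: the user-based collaborative filtering Kathryn describes can be sketched in a few lines. The users, tracks, and listen counts below are invented for illustration; this is not Spotify's actual data or algorithm.]

```python
# Minimal sketch of user-based collaborative filtering: if two users share
# listening history, use that affinity to suggest tracks one of them hasn't
# heard yet. All names and numbers are made up for illustration.
from math import sqrt

# Implicit "listen counts" per user per track.
listens = {
    "kathryn": {"Thursday Afternoon": 9, "Music for Airports": 4, "Discreet Music": 2},
    "eric":    {"Thursday Afternoon": 7, "Music for Airports": 3, "Blood": 5},
}

def cosine(a, b):
    """Cosine similarity between two sparse listen-count vectors."""
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def recommend(user, data):
    """Rank tracks the user hasn't heard, weighted by neighbor similarity."""
    scores = {}
    for other, history in data.items():
        if other == user:
            continue
        sim = cosine(data[user], history)
        for track, count in history.items():
            if track not in data[user]:
                scores[track] = scores.get(track, 0.0) + sim * count
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("kathryn", listens))  # → ['Blood']
```

Real systems replace the toy similarity step with learned embeddings, which is the shift toward representations of the music itself that Kathryn mentions.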

EB: That's awesome. There are a lot of comparisons now that the stock is public, too; it's like the Netflix of music. And some of that comes from the recommendations. For you, it's based on Brian Eno's "Thursday Afternoon." For me right now, I think it's K.Flay and St. Vincent that keep me focused. Slightly different choices, but mine change a lot. Going back to product leadership: what other words of wisdom would you impart to product managers?

KH: We're hiring product managers at the company, and I've seen a spectrum of different candidates over the last couple of weeks. The ones that strike me as great, versus decent but not popping, are those that are super focused on details. I always like to think about product management like Sherlock Holmes: truth is stranger than fiction, so speculation about what we think might be true tends never to be all that useful. What's really useful is going out into the world, paying super close attention to a user, their needs, and their situation, seeing whether that fits within a business process, and getting into the nitty-gritty details. That's where the magic happens, I find. So always going further, always pushing further, and being wary of abstractions is the greatest advice I would give. It's hard for me to take that advice myself; I tend to have a more abstract mind, so I'm constantly fighting to bring myself down to the details. My colleague Tyler is one of the sharpest minds I've seen in that way. He's really able to go deep into everything he considers.

EB: Love the Sherlock Holmes reference. Great work. So, final question: three words to describe yourself.

KH: So I think intensely curious, always wanting to push further and learn new things, intensely self-critical, and empathetic. That’s me in a nutshell.

EB: Awesome. Well, this is great, I really enjoyed the conversation. Thanks for joining us today.

KH: Thanks so much for inviting me. I had a good time.