CEO & Co-Founder Josh Millet

Science-based assessment tools

Josh Millet, CEO & Co-Founder of Criteria, breaks down outdated ideas about standard hiring processes and explains the importance of using newer, improved, science-based assessment tools. Josh founded Criteria 15 years ago and draws on his experience in software development and organizational psychology to delve into the flaws of our hiring systems.

Josh highlights the gap between the scientific evidence and the measures that organizations actually use to hire their employees, and gives some insight into how to close this gap. We come to understand that, although experience and academic achievement seem related to success, they should not be the only factors influencing your ability to get hired.

To implement this process, organizations need to rethink the boundaries of their talent pool and use new methods to filter potential employees. This episode is helpful for both candidates and employers, delving into how to be better prepared for the hiring process and how to implement it for better business outcomes!

Episode Transcript

[INTRODUCTION]

[0:00:06.1] RS: Welcome to Talk Talent to Me, a podcast featuring the most elite talent leaders on the front lines of modern recruitment.

[0:00:12.8] FEMALE: We actually want to understand the themes of someone's life. We want to understand how they make decisions, where they're willing to take risks, and what it looks like when they fail.

[0:00:22.7] RS: No holds barred, completely off-the-cuff interviews with directors of recruitment, VPs of global talent, CHROs, and everyone in between.

[0:00:31.1] FEMALE: Once I went through the classes and the trainings and got the certifications through diversity and inclusion, I still felt like something was missing.

[0:00:39.7] MALE: Talent acquisition, it's a fantastic career. You are trusted by the organization, you get to work with the C-suite and the security at the front desk and everybody in between, and everybody knows you.

[0:00:53.0] RS: I’m your host, Rob Stevenson and you’re about to hear the best in the biz, Talk Talent to Me.

[INTERVIEW]

[0:00:59.9] RS: Joining me today on Talk Talent to Me is the founder and CEO of Criteria, Josh Millet. Josh, welcome to the podcast, how are you today?

[0:01:07.3] JM: Good, thanks for having me, Rob. Happy to be here.

[0:01:09.8] RS: I'm so pleased to have you. I just want to give a quick shoutout to the reason we're doing this: my cousin, whose wedding I officiated, works for Criteria, and she texted me one day and said, "How do I go about getting my CEO on your podcast?" And I was like, "You text me and you ask me and I say yes." That's the answer. So here we are, recording. Shout out to Meg, who is by all accounts a rockstar, am I right, Josh?

[0:01:32.0] JM: That's right, Meg's been great. She's been working with us for around four years now. It's been wonderful.

[0:01:37.6] RS: So, shout out to Meg and really glad to have you here, Josh. Would you mind, for the folks at home, sharing a little bit about your background and how you came to found Criteria all the way back in ‘06?

[0:01:49.7] JM: Yeah. So, my name's Josh Millet and I'm the co-founder of Criteria. We founded it around 15 years ago, as you mentioned. I actually had a pretty circuitous route into HR software and entrepreneurship in general. I spent my 20s doing a PhD in medieval French history, of all things. So, not exactly a direct route into HR tech.

[0:02:11.5] RS: Classic entrepreneur tale, right?

[0:02:13.2] JM: Yeah, everyone's got some kind of story. Right out of grad school, I ended up doing a very small startup that was actually an educational technology company, and we sold it rather quickly. We were only a five-person company at the time, so I moved out to LA, started working for the company that had acquired us, and got involved in hiring. The origin story of Criteria is in that period of being involved in hiring in those years.

I was really unqualified for the task at the time; I had never done it. I remember being involved in a lot of interviews. I did a bunch of them, and in one in particular, I remember looking up at the clock on the wall about seven minutes into the hour, which is never a good sign when you're doing that.

I just realized that the fit was really not good on either side, and that got me thinking about ways that organizations could use more science-based, evidence-based techniques for assessing talent, to make better talent decisions essentially.

[0:03:12.2] RS: So, where did you start when you – you're like, "Let's rope in some research, let's figure out what's actually happening in this room and try to productize it." Where did you begin?

[0:03:20.1] JM: Yeah, so, back then, I began by – and this is probably not how you do it these days, but I actually wrote a business plan. These days it would probably just be a pitch deck. But you know, I had some friends from grad school who were actually psychologists, so I was pretty familiar with the world of assessments through them. I had learned a lot from them.

So I got increasingly interested in the field of organizational psychology and the best practices from it, and the idea was to productize some of those and provide them especially to smaller and medium-sized businesses that, at that time, really didn't have access to this kind of tool.

Assessments are something that a lot of Fortune 500 companies have been using for decades, but those kinds of tools weren't as accessible to smaller and medium-sized businesses.

So, that was the initial goal: to increase access to this kind of science-based tool for smaller businesses. Since then, of course, we've evolved to work with bigger enterprises ourselves, but the origin was really in those small and mid-sized companies.

[0:04:21.2] RS: At that time, when those assessments were the domain of the larger enterprise companies who could afford them, were they accurate? Were they meaningful? Or were they also a sign of the times?

[0:04:32.6] JM: I think assessment science has evolved a lot in the last 28-plus years, but there's actually a pretty broad consensus now about some of the basic things that work and don't work in hiring. The issue that we see is that even though IO psych circles, organizational psych circles, and the best employers in the world, like the Googles, are basically more or less following the science of best practices, the vast majority of companies are really not yet. Sometimes despite their best efforts, they aren't using science-based techniques; they're using things that aren't delivering good outcomes and that are also, frankly, pretty bad from a diversity angle.

So really, it's a situation where, through our product, we want to help companies catch up to where the science is and has been for, frankly, a while now. I won't say that hiring science, or people science, is fully settled, but there are a lot of things that are widely agreed on, there's pretty broad consensus, and yet a really small number of organizations, unfortunately, are following those best practices.

[0:05:40.4] RS: Let's beat them up and send them an ambulance, Josh. What is the gap between what the science suggests and what you see most companies practicing?

[0:05:48.4] JM: Yeah, great question. I'll talk about the two areas, assessments and interviews, right? In terms of what most companies use to hire, if you look at what's ubiquitous in hiring, everyone's looking at resumes pretty much, or in some cases applications, and everyone's doing interviews of one kind or another, right?

If you take resumes, if you think about what you get as an employer from a resume, it's really two kinds of talent signals that you're getting, right? You're getting some sense of an applicant's level of experience in a given field, and then you're also maybe getting a sense of their educational pedigree, right? Whether they have a four-year college degree, that kind of thing.

The research shows pretty decisively that those are both pretty weak talent signals. They have some relationship to subsequent job performance, but it's a pretty weak correlation, and yet they're really ubiquitous in that everyone indexes off them. With experience in particular, if you just think about it, demanding experience, especially for entry- and mid-level roles, is really systematically excluding early-career folks from a lot of roles, right?

It's one reason the unemployment rate for under-25-year-olds is almost three times as high as the general population's, right? I think over-indexing on experience is the biggest mistake that companies make, I would say, as a general rule. Same thing with educational pedigree: it has some correlation to subsequent success or subsequent work performance, but it's not all that strong a signal.

If you look at things that we know are stronger signals, things like aptitude, critical thinking, problem-solving, conscientiousness, and work ethic, those are much better talent signals. That's what assessments, ours in particular, are really after: looking for those really strong talent signals that we know to be predictive of outcomes, and measuring those in an objective and standardized way.

Just to skip over to the second example, if you look at interviews, there's a ton of research that shows that unstructured interviews, which is what most people do, are much less effective than structured interviews. I'm sure a lot of your audience would know this, but a structured interview is one where applicants are asked mostly the same questions and are graded or scored in a rigorous way based on a predetermined rubric.

Those are so much more effective in terms of producing outcomes, and also reducing bias, than unstructured interviews, where there's a lot of research that shows that, unfortunately, interviewers, being human, largely make up their minds in the first five minutes of an interview, and it's hard, once you've anchored around an impression of a candidate, to get off that impression even if there's subsequent evidence that maybe you should.

[0:08:40.6] RS: Can we have a little bit of a history lesson here on how companies got to this point where evaluation and assessment were just, frankly, inaccurate? Take us all the way back to medieval France if you can. How did we wind up with these flawed views of what would make someone good for the job? And the examples you gave of education and previous experience, were those just obvious shortcuts, or why did we index on those for so long?

[0:09:10.3] JM: Yeah, that's a great question, and I think I read somewhere that the resume goes back to the days of the Chinese emperors, so even longer than medieval France, as essentially a list of credentials. But I think, like a lot of enduring myths, there's a kernel of truth here.

So you know, when we say that experience is overrated, there are obviously exceptions to that, right? If you're talking about highly specialized roles, or roles that require deep domain expertise, quite obviously, experience is important there, right? You or I would not be comfortable trusting a rookie brain surgeon to do an operation on us, or if we needed an employment lawyer, we'd want someone who had experience in that particular field, not our aunt or uncle who was just a lawyer of another kind, right?

So in really specialized fields, obviously, experience and domain knowledge can be highly relevant and highly predictive. But if you think about where hiring happens at scale, in the US and elsewhere, it's not really in those very highly specialized roles, and so the rules for hiring brain surgeons should not be applied to hiring at large. For the vast majority of roles where the action is in hiring, things like administrative roles, sales roles, customer service or customer success roles, even something like software engineering, which is a pretty specialized field, the preponderance of evidence really shows that a better way to predict performance and measure potential is to look for those stronger talent signals, rather than indexing heavily off the weaker ones like years of experience and educational pedigree.

[0:10:55.7] RS: It behooves one, I think, to remember how work is actually done once you're in the office or once you're on a team, and the number of times that you can say to a boss, for example, "I don't know, I'll look it up and get back to you" or "I'll figure it out and get back to you." You can't say that every time, but academia doesn't prepare you for that reality.

Academia prepares you to either have the answer right away or fail, you know? It prepares you to perform rather than to adapt. Even for you as a founder and an entrepreneur, I'm sure so much time is spent on, here's a problem I didn't know how to solve until I came to it. Does that make me ill-suited for this role? Of course not.

I think software engineers, too, even the best of them, would confess to how much time they spend on Stack Overflow and GitHub looking for someone else to explain the answer to the problem that they're experiencing. It seems really obvious when we put it this way, but the idea of making the interview process look like the way work is done feels more accurate.

So, what does that look like in an interview? Do you index on someone's ability to Google really well, or how do you recreate the work environment in an interview?

[0:12:05.9] JM: Yeah, it's a great question, lots to unpack there. By the way, I don't want to be seen as dismissing education. I think education is really important and companies should value it; it's really valuable. But I think it's too often used as a proxy for those other things that you're talking about, and sometimes it's an okay proxy and sometimes not.

I was speaking to a group of fellow CEOs one time and someone said, "Well, what are you talking about? We hire lots of CS people from Stanford and they're all great." And I said, "Yeah, that is great," and that's probably a really reliable credential, right? But even if you're the Googles and Facebooks of the world, you probably have to go beyond that narrow little pool to find talent, right?

So, it's not that if you are hiring people from Princeton or Carnegie Mellon or MIT, you're not going to have good success; it's that the inverse isn't true, right? There's a ton of talent that didn't go to Carnegie Mellon or Princeton, and that's true in all fields, right? So especially in a tight labor market like the one we're in, our tools are really all about surfacing talent that you might not otherwise have considered, right?

So that's where things like assessments that measure these long-term predictors can be much more effective than a resume, which is really problematic both from an effectiveness standpoint and from a diversity standpoint. Because if you think about those two things we mentioned, experience and educational level, just think for a second about how underserved populations historically just haven't had the same degree of employment access, haven't had the same educational access.

So not only are you looking at weak signals when you focus on those two things, but you're also looking at things that are going to get you a less diverse employee base than probably a lot of companies want these days. So they're kind of bad through two different lenses. And to the second part of your question, I do think intellectual curiosity and learning ability are things that, almost without exception, serve you well in lots of fields, right?

Certainly lots of mid- and upper-level fields. And I think the continuous learner, the one who really is persistent about improving themselves and acquiring knowledge, is a really incredible profile to hire for. We have a story internally from Criteria where, you know, the common approach with hiring engineers is often to look at how well-versed they are in the languages that your technology stack is written in, for example.

And in our customer base, technology is the biggest vertical, so we have a lot of these customers, and a lot of our customers will even do that. We did a re-release of our product, I think about three years ago, and we realized that on the engineering team, which at the time was I think eight or 10 people working on it, none of them, when we hired them, knew the language that we ended up developing the new site in, right?

In some cases, it didn't exist when we hired them. Technology languages, programming languages, change so quickly, and there's such velocity there, that often you're much better off hiring people who are quick learners. Because I can promise you, almost without hesitation, that two or three years from now, we'll have a different set of languages we're using than the current set. There might be some commonalities, but there certainly will be differences.

So you're much better off assessing an engineer's learning ability and problem-solving, and the velocity at which they learn, than you are in knowing, "Okay, how good are they at Java today?"

[0:15:39.6] RS: Yeah, that's such a good point. It feels like the ability to learn is maybe the only meaningful one, right? Outside of cultural things, your ability to work within a team and be a good teammate and a good coworker, that sort of thing. If you can learn, if you can teach yourself or find resources, then you can learn anything, you know?

I believe that. I don't believe there's any domain of knowledge that's beyond a sufficiently smart person. Obviously, you can't just learn to be a doctor on YouTube, right? But you can learn to be a software engineer, you can learn to be a marketer, and you could certainly learn to be a podcaster.

[0:16:18.3] JM: That's right, and I think that's why learning ability and appetite for learning are such critical long-term predictors, right? They really apply across so many fields, and what the research shows us is that the more complex the role is, the more important learning ability is, right? So the higher you get up the occupational scale, the more critical thinking, problem-solving, and learning ability turn out to be critical predictors of success.

[0:16:46.9] RS: Yeah, because the more specialized things tend to change just as quickly, right? Because that is where innovation is happening.

[0:16:54.6] JM: That's right, and the more complex a role is, the more problem-solving it requires. So problem-solving logically becomes a bigger part, a bigger determinant, of success or failure in a role.

[0:17:05.1] RS: I'm curious how this works at the screening phase. Because say we move beyond indexing, or over-indexing anyway, on previous experience and education, which, by the way, make up like 90% of a resume. How do you screen someone when 90% of the resume is something that we're seeing as over-indexed upon?

[0:17:24.9] JM: Yeah, it's a good question. We're not so naïve as to believe that resumes are just going to disappear because we and other companies are reporting on their shortcomings, right? I think they do serve a purpose in some ways, and they're probably going to be around for a very long time, as are interviews, which can be great predictors if you do them in a structured way.

So we are also getting into building tools that help companies do that. But as far as what you mentioned, screening at what we call the top of the funnel, what a lot of our customers are doing is gathering a resume, or in some cases collecting an application from a candidate, which is often the first touch with the talent acquisition group, right?

You know, they'll collect the resume from a job application or job posting. At the same time, or immediately after that, they may choose to give one of our assessments, or a couple of them, and that way you get this objective, predictive data for your whole candidate pool, or for the vast majority of it. Maybe there's some basic qualifier, like, do you have the requisite kind of driver's license if you're hiring a driver, or do you live within 50 miles of the office if they have to be onsite, or whatever.

But basically, getting these signals on your whole talent pool really allows you to use this objective, predictive data as a counterweight to the resumes, which are a bit of a weaker signal. In fact, what some of our customers do is actually use the assessment data to figure out which resumes to read first, right? Or where to focus their initial energies. Because all of our customers, and anyone who hires actually, face the dilemma of, okay, you've got a lot, hopefully a lot, of applicants at the top of the funnel.

How do you get to a smaller group that you are really going to spend time with, through a phone screen or through an in-person or video-based interview, whatever the next step is further along the funnel? How do you get from that bigger group to that smaller group? You know, sometimes we get comments and objections from people who haven't used assessments, who say, "Well, I might never have hired Bob if I'd used the assessments at the top of the funnel."

The thing that they often don't realize is that they're already using something, right? They've got to get from a bigger group to a smaller group. If you get 50 to 100 applicants for a role, no one is interviewing 50 applicants, right? At least no one that I know of. So how do you get to a much smaller group where you can really drill down and double-click on the things that matter to your organization?

We think it's much better to use stronger signals that are linked to data, linked to outcomes, than to use weaker signals like those you get from a resume, which also introduce a ton of extra bias and subjectivity into the process. So that's really how we position assessments: as a tool that you can use at the top of the funnel, which is a little bit different than it was 15 years ago. When we got started, the big companies that were using assessments were mostly doing it later in the funnel.

Part of the reason for that was an economic one; a lot of companies were charging per assessment, right? Assessments tended to be longer, an hour long, an hour and a half long. We actually pioneered a model of charging on an unlimited-use, subscription-based basis, and the assessments that we've developed tend to be shorter, 15 or 20 minutes each, even if customers use multiple ones.

Maybe as long as 30 or 40 minutes in total, so they can be used top of funnel without breaking the bank and without annoying applicants by giving them a 90-minute assessment, which would probably not be a good candidate experience.

[0:21:04.4] RS: Right, exactly. Yeah, it is well pointed out that it is a discriminatory process, right? You said you are not interviewing 50 people if you get 50 applicants. And, you know, please remove the negative connotation from the term discriminatory, but you're right: you can look at a resume, even if it is, hopefully, completely blind to someone's ethnicity and isn't discriminating for racial or ethnic or whatever purposes, and still decide immediately they are not right for the job.

So that process has to happen. How do you ensure that it's being done in a bias-free way?

[0:21:41.8] JM: Yeah, I think there are a lot of ways that bias creeps in, even at the resume stage, bias and subjectivity, and certainly in interviews. We know from all the research that the greatest advantage of doing structured interviewing is that you make an interview more like an assessment, right? You make it pretty standardized for everyone, so everyone gets the same questions, or a common core of questions.

There might be additional follow-ups or whatever, and candidates are scored on them in a predetermined way. So a company, for example, when you are hiring a salesperson, you might say, "Okay, these are the four qualities that we really want in sales candidates, or in our salespeople, that are linked to success," hopefully based on a competency framework and all that. "So each of the four should be worth 25% of the interview," or whatever.

That way, you reduce the tendency to be really swayed by a candidate who's great in one area, for example, who is really charismatic or really persuasive, but who in the other three areas, communication or whatever the other three are, might score very poorly. If you score it in a rigorous way, you guard against that sort of outsized trait. I think I've heard it called the halo-and-horns effect, right?

You don't allow one trait, good or bad, to really sway your opinion too much, because you have decided ahead of time that it should only count for 25% of the evaluation. That's just a basic example. So I think the point about all the research on unconscious bias in hiring is that everyone has it. It's part of how human cognition works, right? So it's really unrealistic to eliminate it altogether, which is why we talk about bias reduction.

The nice thing about assessments and structured interviews is that they essentially establish rules and objectivity that allow you to make more informed talent decisions that aren't swayed by things that really aren't predictive of success.

[0:23:46.6] RS: You mentioned that we can't expect to completely remove bias. Unfortunately, it's a natural part of being human; everyone has biases, and they're not necessarily indicative of you being a bad person. They are if you don't address them, revisit them, and investigate them. Where do you think is the balance between productizing empathy, productizing equity, and educating people to be better interviewers and to investigate their own tendencies toward bias?

[0:24:16.5] JM: Yeah, that is a really thoughtful question. I think we have taken a middle road there, in that we would describe legacy hiring, which is based on resumes and unstructured interviews, the mode of hiring that's really prevailed for probably the last 70 years, as very full of bias and very full of subjectivity. And we believe very strongly that using evidence-based techniques and data to inform your hiring process is a better way to do it.

It produces better outcomes, creates more diverse workforces, all of that. But there is a further step, on the other end of the spectrum, that we don't subscribe to: some of these HR technology companies, I'd describe them as pure AI, are using data and essentially chasing correlations wherever the data goes, right? I think that can really be dangerous.

We don't want to take the human element out of the hiring process. What we do want to do is make sure that the humans in talent acquisition or in HR are making more informed, more data-driven decisions, but ultimately, it's still their decision. You read a lot with some of these newer HR technologies about hiring by algorithm. We don't go that far; obviously, our assessments are based on data-driven, evidence-based techniques.

But we really feel that no customer of ours should be outsourcing their hiring decision entirely to us. They know their business better than we do, invariably, right? So what we want to do is arm them with data to make better, more confident, quicker decisions about their talent acquisition process, not to tell them, "These are the people you should hire," without any context or transparency into how those decisions are made.

[0:26:14.6] RS: Got it. Assuming that this manner of assessment takes hold, say it becomes a default that companies exhibit in all of their interviews, how will that change the way candidates present themselves for roles or seek to engage with recruiters?

[0:26:32.4] JM: Yeah, that's a really great question, and actually, I believe next month, or four to six weeks from now, we're releasing our first-ever candidate experience survey. So it is really interesting that you asked that, because I think there is a change coming that we are seeing in the marketplace, where candidates are beginning to recognize some of the issues with the way hiring has traditionally been practiced.

It's very easy to send your resume off into the ATS, right? To one-click apply with your resume, be done, and sometimes, or often, never hear back again. So that's easy; it is pretty frictionless. That is one reason why, even in a tough market, a tight labor market, tough for employers, good for candidates, our customers are getting more applicants than ever for a lot of roles. Not for all roles, but for a lot of roles.

Why is that happening? Partly because it is so easy to apply, right? It's really become frictionless, or almost frictionless, not only to apply to a job but, increasingly in a remote environment, to switch jobs. So that has really created a new dilemma of, "Okay, how do you get from all of these applicants to a smaller group?" And I think it's really interesting that candidates are beginning to feel, and we've got some evidence of this from the candidate experience survey that I just mentioned...

They are beginning to feel that assessments and other, let's call them non-traditional, ways of evaluating them provide them with a better chance of showcasing their talent. I think that is especially true in some of those pockets of the labor market where people really are excluded because of a lack of experience. Think about people who are under 25, early-career people, or people who wake up ten years into their career and decide, "You know what, I don't like what I'm doing. I want to do something completely different."

Well then, the experience they have accumulated effectively counts for nothing, right? Because they are trying to move into a different field. We have a lot of those stories from our own company, from Criteria, right? Our director of engineering for North America, for example, had been a banker for ten years when he applied to Criteria. He was a branch manager of a bank and just decided he wanted to do something different.

So he went to school for programming, but all that prior experience, which was in a good job, really wasn't relevant anymore, so we weren't relying on it. We didn't count it for much when we hired him, right? We looked at other things. And so I think it's really interesting in those parts of the labor market that really are disproportionately excluded from employment for various reasons.

There is a real openness to more evidence-based techniques. Another example is neurodiverse candidates and interviews, right? Interviews are terrible for neurodiverse candidates as a general rule. So how do you surface that talent in alternative ways, rather than through unstructured interviews?

[0:29:40.3] RS: Yeah, makes sense. Josh, we are creeping up on optimal podcast length here. Before I let you go, I want to ask you to share some advice with the talent acquisition folks out there in podcast land. What can they do to ensure that they are assessing talent on a meaningful level and not over-indexing on areas that maybe exclude people who'd be good at doing the job from getting it, besides, of course, scheduling a demo with Criteria?

[0:30:06.3] JM: Well, obviously, that's the first recommendation, but yeah, I think that's a great question. What I would say is, be intentional about every part of your talent acquisition process that involves making an evaluation of some kind. As we talked about earlier, we're not recommending that people stop using resumes all of a sudden, but recognize their limitations, and think about and try to quantify the talent signals that you're getting, right?

So if you are demanding a four-year degree, for example, in a job description, why are you doing that? Do you have evidence that it is linked to greater success in the role? If not, get rid of it, right? Do you have evidence that asking for three years' experience for an entry-level role produces better candidates, produces better job performance, or does it just systematically exclude people who might be good in the role?

So if you are going to do those things, if you are going to still use some of that legacy hiring framework, make sure that it is evidence-based. If you have evidence, if you have data that suggests that a minimum number of years of experience is predictive of success, or that the college degree is predictive, then by all means keep it. But if not, you should be removing those parts of your job description or job posting.

That will open up your talent funnel. And use less traditional, more evidence-based, more data-driven tools to help uncover some of those same things you are trying to get at with the experience and education requirements, tools which will turn out to be more predictive and will give you a broader applicant pool to find your talent in.

[0:31:48.2] RS: Josh, that’s fantastic advice and this has been a fantastic episode. Thank you so much for being with me here today.

[0:31:53.3] JM: Thanks for having me, Rob. This was really fun.

[END OF INTERVIEW]

[0:31:57.5] RS: Talk Talent to Me is brought to you by Hired. Hired empowers connections by matching the world's most innovative companies with ambitious tech and sales candidates. With Hired, candidates and companies have visibility into salary offers, competing opportunities, and job details. Hired's unique offering includes customized assessments and salary bias alerts to help remove unconscious bias when hiring. By combining technology and human touch, our goal is to provide transparency in the recruiting process and empower each of our partners to employ their potential and keep their talent pipeline full.

To learn more about how we can help you find your next great hire, head to hired.com/tt2m.

[END]