
AI-enabled Applicant Screening Delivered by Our MuleSoft OpenAI Connector

AI and Integration

The general release of ChatGPT in November 2022 changed everything. Unique use cases for ChatGPT poured in every day, upending previously held beliefs about what was possible.

Since connectivity enables everything, we knew OpenAI’s API would open up even more possibilities. So we began investing in creating our own OpenAI Connector for MuleSoft to make connectivity easy. We knew that while we were working on enabling connectivity, we’d be able to think of lots of great ways to use it when it was ready.

And we did. Below is our first organizational use case integrating with OpenAI’s services, and it’s already been incredibly valuable to Green Irony.

The Problem

Like many technology companies, Green Irony receives a huge volume of resume submissions to open job postings. 

This can be both a blessing and a curse.

It’s a blessing because we have the reach and employment brand to attract hundreds of applicants to open positions within days of posting.

It’s a curse because someone needs to go through these resumes and whittle them down into a candidate pool for phone interviews. With techniques like keyword stuffing now combined with the likes of AI-generated resumes, properly screening resumes is getting more challenging by the day. Not only does it require a lot of person-hours to do properly, but it also requires heavy domain knowledge and experience in the roles being screened.

In short, doing it right required a lot of internal experts’ time and effort, and these experts were already very busy. We needed to get them some help.

The Solution

Fortunately, Green Irony’s R&D team, armed with the most disruptive technology we’ve ever laid eyes on, was up to the challenge.

Using Green Irony’s OpenAI Connector for MuleSoft, our team delivered a set of APIs that provide a candidate rating against a job post, the reasoning behind that rating, and a set of questions to validate the candidate and fill any gaps. This all happens in real time: a webhook fires on every inbound candidate submission, the screening runs asynchronously, and scores and other relevant contextual information are written back into our Applicant Tracking System.

Real-time integration allows us to quickly alert our team to promising candidates, enabling us to get in front of them faster. Our solution is driven by a belief in the “human in the middle” concept, so our goal is to arm our team with the best information, not to disqualify candidates based on an AI-only recommendation.
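As a rough sketch of the shape of this pipeline (not our production code: the function names, JSON schema, and scoring rubric below are illustrative assumptions), an inbound-submission handler can build a screening prompt and parse a structured rating out of the model’s reply:

```python
import json

def build_screening_prompt(job_posting: str, resume: str) -> str:
    """Assemble the prompt for a single candidate.

    The rubric and JSON schema here are illustrative, not our actual prompts.
    """
    return (
        "You are an expert technical recruiter. Rate the candidate below "
        "against the job posting on a 1-10 scale. Respond ONLY with JSON: "
        '{"score": <int>, "reasoning": "<string>", '
        '"validation_questions": ["<string>", ...]}\n\n'
        f"JOB POSTING:\n{job_posting}\n\nRESUME:\n{resume}"
    )

def parse_rating(reply: str) -> dict:
    """Parse the model's JSON reply and sanity-check its shape before it is
    written back to the Applicant Tracking System."""
    rating = json.loads(reply)
    if not (1 <= rating["score"] <= 10):
        raise ValueError(f"score out of range: {rating['score']}")
    return rating

def handle_submission(job_posting: str, resume: str, complete) -> dict:
    """Webhook-style entry point. `complete` is any callable that sends a
    prompt to the LLM (for us, via the OpenAI Connector) and returns its
    text reply, so the screening logic stays testable without a live model."""
    return parse_rating(complete(build_screening_prompt(job_posting, resume)))
```

In the real solution the completion call is backed by our OpenAI Connector for MuleSoft, and the returned rating lands in custom fields in the Applicant Tracking System.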

The secret sauce of the solution lies in a few key areas:

  1. Easy, connector-based connectivity into OpenAI, enabling us to focus on delivering and testing what matters for the use cases
  2. Prompt engineering and A/B testing of various prompts, ensuring we mitigate any hallucination issues and receive the most accurate feedback possible about every resume
  3. Working up an accurate job posting that is very clear about what success looks like, then parsing resumes for the key information relevant to scoring against it
  4. Fine-tuning the OpenAI model and continuously sampling results with our human experts, ensuring alignment between human and AI-generated ratings and feedback (in progress)
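To make the A/B testing in item 2 (and the human sampling in item 4) concrete, here is a minimal sketch, assuming we keep a small sample of resumes rated by both the AI and our human experts. The variant whose scores track the experts most closely wins; all names and numbers below are illustrative:

```python
def mean_absolute_error(ai_scores, human_scores):
    """Average gap between AI and human ratings for the same set of resumes."""
    pairs = list(zip(ai_scores, human_scores))
    return sum(abs(ai - human) for ai, human in pairs) / len(pairs)

def pick_best_prompt(results):
    """`results` maps a prompt-variant name to (ai_scores, human_scores)
    over the same labeled sample. Returns the variant that tracks the human
    experts most closely, i.e. the one with the lowest disagreement."""
    return min(results, key=lambda name: mean_absolute_error(*results[name]))
```

The same comparison doubles as a regression check: if a new prompt or a fine-tuned model drifts away from the human ratings, the disagreement number surfaces it immediately.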

The Results

  • Labor Bandwidth Savings: The AI provides a sophisticated scoring system that reduces the time spent evaluating each resume from an average of 10 minutes to mere seconds. These savings free up critical resources who are already very busy with other work in a fast-growing startup.
  • Increased Interview Quality: The AI system offers detailed insights into each candidate’s strong matches and gaps relative to the job description. This has led to stronger interview selections and cut the number of necessary interviews in half, from an average of 6 to 3. Higher-quality interviews lead to higher-quality new hires.
  • Enablement of HR Resources: The contextual information delivered by our solution enables our HR resources to ask better, more technical questions to assess the quality of candidates sooner within our process. This results in a higher quality of candidate pool reaching our second interview stage.
  • Reduction in Recruiting Timeline: The use of AI allows for swift identification and rejection of unsuitable candidates, reducing time spent in the recruitment process. This has led to a 5X+ reduction in the amount of time we spend thinning 100 applicants into 15 for phone interviews, so we’re able to talk to better candidates more quickly.

Key Takeaways for AI

If someone had proposed this solution 12 months ago, our first thought would have been that it would take millions of dollars’ worth of technology labor to deliver, because of how time-consuming machine learning is.

The release of GPT-3.5 shattered this point of view. We now have access to an extensible LLM capable of performing this level of expert analysis with the right prompts and tuning. We’re in uncharted territory, and it’s up to leaders to find the right tasks for AI, tasks to take off the plates of overworked humans.

To us, our own resume screening process was the perfect scenario for delivering value with generative AI. We had a tedious, time-consuming task that also required experts to perform it. The task was a scale inhibitor: not only does it demand more time than an organization like ours can spare, it’s also VERY important to get right.

Generative AI has been a revelation for our applicant screening process, and it’s just the beginning.


It’s Groundhog Day: Lack of an Integration Strategy Will Inhibit AI, Too

Introduction

The undisputed hottest topic in technology, now and for the near future, is LLMs and Generative AI. There’s a lot of talk about how to design prompts and get GPT-4 to do cool things for you, but not a lot about how to get value across an entire enterprise.

That’s what we’ve been up to with our R&D team at Green Irony.

Killer Use Cases Beyond the Imagination

Today’s Generative AI capabilities unlock high-ROI use cases that were unthinkable a year ago. Green Irony’s R&D team has many in the works that we can’t wait to show you. For now, imagine the following scenario in the Travel & Hospitality industry:

A business traveler named Chris is currently in Dallas for some client meetings. It’s summertime and not only is there significant demand for air travel but there is also severe weather including tornadoes, hail, and severe thunderstorms wreaking havoc on the major airline hubs of the midwest. This passenger has a family emergency and needs to get back to Tampa, FL as soon as possible, no matter the cost.

Prior to November 2022’s release of ChatGPT, this would have required a sophisticated travel agent with expert knowledge of flight and rental car booking systems and lots of time with Google Maps and other utilities. Possibly even a spreadsheet, and who wants to use one of those?

What if instead Chris could just say this to an AI service?

“Give me your top 3 options for getting me from my current location to Tampa, FL, by 5 PM tonight. Car travel should be limited to 120 miles or less. Optimize first for arrival time, then for premium seat availability, then for driving distance. Do not risk any connecting flights in the midwest.”

So what’s stopping us from doing this RIGHT now?

Hint: It’s the same thing that always stops us, and MuleSoft Founder Ross Mason was prescient enough to warn us about it for AI in 2018.

Last decade’s disruptor: Mobile and Tablet Devices

2013 saw a huge shift of web traffic from desktop to mobile and this was incredibly impactful to businesses, especially those like retail that rely on web funnel conversion metrics as key business drivers. Mobile was arguably the first major disruptor that truly exposed the perils of siloed enterprise technology systems and lack of a clear, scalable integration strategy. 

I remember seeing it firsthand. As my team and I worked with retailer HHGregg to deliver the #1 mobile experience in their space, what do you think our biggest inhibitor was to providing this exceptional customer experience?

If you said “integration”, you’re catching on to the theme of this blog. Attempting to get the right data to drive the right mobile user interactions and respond to these user actions within systems of record was by far our biggest inhibitor. Despite the fact that web browsers and JavaScript frameworks to drive the right mobile behaviors were still considered “cutting-edge” back then (Angular 1.2 was released November 15, 2013!), they were still the easy part. 

Integration was our clear-cut biggest hurdle to creating an award-winning e-commerce mobile experience for our partners at HHGregg.

Today’s Landscape for Scalable Integration

History again repeats itself, but this time with much higher stakes. Insert your preferred doom and gloom messaging about AI and existential risks to the viability of businesses and a high-stakes, zero-sum game arms race here.

The needle simply has not moved fast enough for most organizations over the past decade in terms of scalable integration. Most of them now are stuck with Ross Mason’s “big ball of mud” and every project, big or small, becomes bogged down by this situation. 

Just about every conversation I have with customers about a failed Salesforce implementation comes down to one thing: lack of access to data and business capabilities in other systems. Most of the time, in-house IT teams have built some Frankenstein’s Monster set of data synchronization capabilities that fail to scale with the purchase of new systems and software, evolution of business processes, and marketplace disruptions due to new cutting-edge technologies. In an ironic twist of fate, their Salesforce implementation team tends to spend their time attempting to solve systems integration problems that are not solvable within the Salesforce platform itself.  

These Salesforce implementations are inhibited by the same challenges we saw with mobile e-commerce websites in 2013.

AI will be no different, because today’s LLMs, like OpenAI’s GPT-4, heavily reward enterprises with composable integration strategies.

Anybody Heard of This ChatGPT Thing?

Enter the most disruptive technology any of us will ever see in our lifetimes, one powerful enough to disrupt even Google’s core business.

Clearly, it’s important for businesses to get this right, and the sooner they get it right, the greater their competitive advantage will be. But without a scalable, composable API strategy that unlocks the capabilities of the business for consumption, broader AI efforts won’t get off the ground. 

In the aforementioned article from Mason, he wrote the following:

“Organizations first need to become more composable, building a connected nervous system called an application network, which enables AI to plug in and out of any data source or capability that can provide or consume the intelligence it creates. The point-to-point integrations of the past won’t be practicable in the AI-world, where things can change in an instant. Instead, organizations need a much more fluid approach that allows them to decouple very complex systems and turn their technology components into flexible pieces.”

Our R&D efforts with OpenAI, which we plan to showcase extensively in the coming weeks, have reinforced Mason’s point of view on this topic. GPT-4, the model now powering ChatGPT’s most capable tier, can generate fantastic answers to questions that rely on generally available, non-specialist knowledge. To equip it with enterprise-wide capabilities, we need composable APIs that give us ready access to business capabilities so that GPT can solve real, enterprise-level problems.
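Here is a minimal sketch of what that plugging in and out of capabilities can look like in code. The capability names, schemas, and stub data are invented for illustration; in practice each handler would front a managed API in the application network rather than return canned data:

```python
import json

# Illustrative capability registry. In an application network each entry
# would front a managed API; here the handlers return canned stub data.
CAPABILITIES = {
    "search_flights": {
        "description": "Find flights between two airports before a deadline.",
        "parameters": {"origin": "str", "destination": "str", "arrive_by": "str"},
        "handler": lambda origin, destination, arrive_by: [
            {"flight": "GI101", "arrives": "16:10"}  # stub result
        ],
    },
}

def dispatch(tool_call_json: str):
    """Route a model-emitted tool call, e.g.
    '{"name": "search_flights", "arguments": {...}}', to the matching
    business capability. The decoupling is the point: swapping the system
    behind a capability changes its handler, not the model-facing schema."""
    call = json.loads(tool_call_json)
    capability = CAPABILITIES[call["name"]]
    return capability["handler"](**call["arguments"])
```

The model only ever sees the schemas; the system behind a handler can move from a legacy booking platform to a new one without re-prompting or retraining anything.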

It’s Groundhog Day and we’ve seen this all before. Every innovative breakthrough levies additional burdens on an IT organization’s integration strategy. With the game-changing nature of AI, composable integration is needed now more than ever before.

We’ve Got So Much More to Say On This Topic

We hope you enjoyed the first post in this series. At Green Irony, scalable integration is in our DNA and we are incredibly passionate about this topic and the benefits it provides to our customers. Future posts will cover:

  • A real-life, in-production application that Green Irony has used to exponentially scale its candidate interview process and save hundreds of hours across key members of staff while increasing its applicant quality.
  • A prototype for the Travel & Hospitality industry that unlocks a vastly superior way to purchase products and services in this industry.
  • Several deep dive spotlights into the engineering behind both solutions, showcasing how the Green Irony OpenAI Connector can be used broadly for any OpenAI use case.
  • A comprehensive white paper, building on the concepts in Ross Mason’s 2018 blog, on why composable integration is an absolute must for companies serious about leveraging AI.

Supercharge your AI readiness by combining our MuleSoft OpenAI Connector with an AI-ready integration strategy.