AI Green Irony News

AI-enabled Applicant Screening Delivered by Our MuleSoft OpenAI Connector

AI and Integration

The general release of ChatGPT in November 2022 changed everything. Unique use cases for ChatGPT poured in every day, upending previously held beliefs about what was possible.

Since connectivity enables everything, we knew OpenAI’s API would open up even more possibilities. So we began investing in creating our own OpenAI Connector for MuleSoft to make connectivity easy. We knew that while we were working on enabling connectivity, we’d be able to think of lots of great ways to use it when it was ready.

And we did. Below is our first organizational use case integrating with OpenAI’s services, and it’s already been incredibly valuable to Green Irony.

The Problem

Like many technology companies, Green Irony receives a huge volume of resume submissions to open job postings. 

This can be both a blessing and a curse.

It’s a blessing because we have the reach and employment brand to attract hundreds of applicants to open positions within days of posting.

It’s a curse because someone needs to go through these resumes and whittle them down into a candidate pool for phone interviews. With techniques like keyword stuffing now combined with the likes of AI-generated resumes, properly screening resumes is getting more challenging by the day. Not only does it require a lot of person-hours to do properly, but it also requires heavy domain knowledge and experience in the roles being screened.

In short, doing it right required a lot of internal experts’ time and effort, and these experts were already very busy. We needed to get them some help.

The Solution

Fortunately, Green Irony’s R&D team, armed with the most disruptive technology we’ve ever laid eyes on, was up to the challenge.

Using Green Irony’s OpenAI Connector for MuleSoft, our team delivered a set of APIs that provide a candidate rating against a job post, the reasoning behind that rating, and a set of questions to validate the candidate and fill any gaps. This all happens in real time: a webhook fires on every inbound candidate submission, the scoring runs asynchronously, and the scores and other relevant context are written back into our Applicant Tracking System.
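The actual solution runs through Green Irony’s MuleSoft connector, but the core flow can be sketched in plain Python. The prompt wording, JSON keys, and rating scale below are illustrative assumptions, not the production implementation:

```python
# Illustrative sketch (NOT the actual Green Irony connector flow): build a
# screening prompt for an LLM and parse its structured reply before writing
# it back to an ATS. Field names and the 1-10 scale are assumptions.
import json


def build_screening_prompt(job_posting: str, resume: str) -> str:
    """Ask the model for a rating, its reasoning, and validation questions."""
    return (
        "You are an expert technical recruiter.\n"
        f"Job posting:\n{job_posting}\n\n"
        f"Resume:\n{resume}\n\n"
        "Respond with JSON containing keys: "
        "rating (1-10), reasoning (string), questions (list of strings)."
    )


def parse_screening_result(raw_reply: str) -> dict:
    """Parse the model's JSON reply into the fields written back to the ATS."""
    result = json.loads(raw_reply)
    return {
        "rating": int(result["rating"]),
        "reasoning": result["reasoning"],
        "questions": list(result["questions"]),
    }
```

A webhook handler would call `build_screening_prompt`, send it to the OpenAI chat API asynchronously, and pass the reply through `parse_screening_result` before updating the candidate record.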

Real-time integration allows us to quickly alert our team of promising candidates, enabling us to get in front of them faster. Our solution was driven by a belief in the “human in the middle” concept, so our goals are to arm our team with the best information, not to disqualify candidates based on an AI-only recommendation.

The secret sauce of the solution lies in a few key areas:

  1. Easy, connector-based connectivity into OpenAI, enabling us to focus on delivering and testing what matters for the use cases
  2. Prompt engineering and A/B testing of various prompts, ensuring we mitigate any hallucination issues and receive the most accurate feedback possible about every resume
  3. Working up an accurate job posting that is very clear about what success looks like and then parsing resumes for key information that is relevant to scoring against a job posting
  4. Fine-tuning the OpenAI model and continuously sampling results with our human experts, ensuring alignment between human and AI-generated ratings and feedback (in progress)
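The A/B testing in point 2 and the human-alignment sampling in point 4 can be sketched together: score the same labeled sample with two prompt variants and keep whichever tracks the human ratings more closely. This is a minimal illustration of the idea, not the actual evaluation harness:

```python
# Hypothetical A/B prompt evaluation: compare two prompt variants' ratings
# against expert human ratings on the same resume sample, using mean
# absolute error as the (assumed) agreement metric.


def mean_abs_error(ai_scores: list[float], human_scores: list[float]) -> float:
    """Average absolute gap between AI ratings and human ratings."""
    return sum(abs(a - h) for a, h in zip(ai_scores, human_scores)) / len(ai_scores)


def pick_better_prompt(scores_a: list[float],
                       scores_b: list[float],
                       human: list[float]) -> str:
    """Return 'A' or 'B': the variant whose ratings track the humans' best."""
    return "A" if mean_abs_error(scores_a, human) <= mean_abs_error(scores_b, human) else "B"
```

Running every prompt change through a harness like this is one simple way to catch hallucination or drift before it reaches a hiring decision.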

The Results

  • Labor Bandwidth Savings: The AI provides a sophisticated scoring system that reduces the time spent evaluating each resume from an average of 10 minutes to mere seconds. These savings free up critical resources who are already busy with other work in a fast-growing startup.
  • Increased Interview Quality: The AI system offers detailed insights into each candidate’s strong matches and gaps relative to the job description. This has produced stronger interview selections and cut the number of necessary interviews in half, from an average of 6 to 3. Higher-quality interviews lead to higher-quality onboarded new hires.
  • Enablement of HR Resources: The contextual information delivered by our solution enables our HR resources to ask better, more technical questions to assess the quality of candidates sooner within our process. This results in a higher quality of candidate pool reaching our second interview stage.
  • Reduction in Recruiting Timeline: The use of AI allows for swift identification and rejection of unsuitable candidates, reducing time spent in the recruitment process. This has led to a 5X+ reduction in the amount of time we spend thinning 100 applicants into 15 for phone interviews, so we’re able to talk to better candidates more quickly.

Key Takeaways for AI

If someone had proposed this solution 12 months ago, our first thought would have been that it would take millions of dollars of technology labor investment to deliver, given how time-consuming machine learning is.

The release of GPT-3.5 shattered this point of view. We now have access to an extensible LLM capable of performing this level of expert analysis with the right prompts and tuning. We’re in uncharted territory, and it’s up to leaders to find the right tasks for AI, tasks to take off the plates of overworked humans.

To us, our own resume screening process was the perfect scenario for delivering value with generative AI. We had a tedious, time-consuming task that also required experts to perform it. The task was a scale inhibitor: not only does it demand more time than an organization like ours can keep up with, it’s also VERY important to get right.

Generative AI has been a revelation for our applicant screening process, and it’s just the beginning.

API Security

API Security Best Practices: Top Defenses to Avoid Critical Security Threats

Most businesses have been hearing it for some time now: APIs are the future, APIs are the way to go, APIs or bust. The main purpose of APIs is to let other technology systems within your business, as well as third-party vendors, access your data and business logic in order to generate revenue, serve customers, and much more.

What many businesses don’t know, though, is that API security is an essential, mandatory part of protecting critical information, whether that’s financials, personal employee data, client data, or customer data. This matters because whenever you open, share, change, or pull sensitive data, you are leaving your business wide open to security breaches.

So whenever your APIs are open and available, there are immediate measures you must take to limit unwanted access to your data. Here are the top three basic API security measures for threat defense.

Top 3 API Security Basics for Threat Defense

1. Two-way Encrypted Communication

To prevent “man-in-the-middle” attacks, communications must be encrypted in both directions. If only one direction is encrypted, it’s easy for attackers to observe data moving back and forth or to compromise routers along the way. The key is to make sure that all communication, even before credentials are passed, happens over a protected channel. That means using TLS (Transport Layer Security), the modern successor to SSL (Secure Sockets Layer), and serving traffic over HTTPS (Hyper Text Transfer Protocol Secure).
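In practice, enforcing this on the client side means refusing plaintext, legacy protocol versions, and unverified certificates before any credentials move. A minimal sketch using Python’s standard library (the host in the usage comment is a placeholder):

```python
# Minimal sketch: a strict TLS context that rejects unverified certificates
# and pre-TLS-1.2 protocol versions, so no credentials are ever sent over an
# unprotected or spoofable channel.
import ssl


def strict_tls_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context()            # verifies server certs by default
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSL and early TLS
    ctx.check_hostname = True                     # defend against man-in-the-middle
    ctx.verify_mode = ssl.CERT_REQUIRED           # no cert, no connection
    return ctx


# Usage (placeholder host):
# urllib.request.urlopen("https://example.com", context=strict_tls_context())
```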

2. Authentication and Authorization

Once your communication is encrypted, you have a safe way to accept and share sensitive data such as usernames, passwords, client IDs, and secret tokens. Authentication is only the first step: it’s where you present proper credentials and a password more complicated than “password.” Authentication is great, but just as important is authorization, which many businesses fail to check.

Many companies know which individuals should have access to certain data, but are their APIs checking for the same? Is the data read-only? Can changes be made? Can data be shared? Are the right people the only ones who can do all of the above? The last thing a company needs is open access for every employee or third party, no matter what level they’re at. To prevent this, there must be proper authorization.
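The questions above (read-only? changes? sharing?) map naturally onto a per-role permission check that runs after authentication succeeds. The roles and actions below are placeholders for illustration:

```python
# Illustrative authorization layer on top of authentication: each role
# (placeholder names) maps to the set of actions it may perform.
PERMISSIONS = {
    "admin":   {"read", "write", "share"},
    "analyst": {"read"},          # read-only: cannot change or share data
}


def authorize(role: str, action: str) -> bool:
    """True only if the authenticated caller's role permits the action.

    Unknown roles get an empty permission set, so the default is deny.
    """
    return action in PERMISSIONS.get(role, set())
```

The important design choice is deny-by-default: an API that forgets to register a role should refuse access rather than grant it.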

3. Denial-of-Service Attack Prevention

A denial-of-service attack occurs when someone sends more requests in a short window than your system can process, causing it to time out and crash. This is where rate-limiting and throttling policies come in. Both are critical for ensuring that your APIs only process so many requests per minute. You always need some kind of rate limit, because that’s what prevents people from hammering your APIs with requests and forcing an outage or a security leak.
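One common way to implement such a limit is a token bucket: each request spends a token, tokens refill at a fixed rate, and a small burst allowance absorbs short spikes. A minimal sketch (the rate and burst numbers are whatever your capacity planning dictates):

```python
# Minimal token-bucket rate limiter: requests spend tokens, tokens refill at
# a steady rate, and requests beyond the budget are rejected instead of
# crashing the system behind the API.
import time


class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec       # sustained requests allowed per second
        self.capacity = burst          # short-burst allowance
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Spend one token if available; False means the caller is throttled."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A gateway would typically keep one bucket per client ID or IP, returning HTTP 429 when `allow()` is False.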

However, the above are table stakes and only the beginning. Businesses should also review the OWASP Top 10 security concerns and make sure their application networks are protected against attacks of all kinds, including data injection, out-of-date components, missing server updates, and more. Once that protection is in place, the next step is a plan for ongoing monitoring.

API Security Monitoring Best Practices

The first step is taking stock of the APIs and applications you have. Companies often don’t realize they have APIs running that nobody intended to run in the first place: servers with exposed APIs, third-party SaaS systems with the same, or even legacy APIs most people forgot existed.

The best way to combat that? Find and test them. Listen to network traffic and sniff out offending systems or APIs. Even if you know an API and its specification (the communication contract), you should generate different permutations of requests to see if there is a way to break it and access data you shouldn’t have.
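Generating those permutations can be as simple as mutating one field of a valid request at a time with boundary and malicious values. A toy sketch of the idea (the field types and probe values are illustrative, not an exhaustive fuzzing corpus):

```python
# Hypothetical permutation generator: given a known API field spec, yield
# one-field-at-a-time mutations of a valid request body to probe for
# validation gaps. Probe values below are a small illustrative sample.


def fuzz_values(field_type: str) -> list:
    """Boundary and malicious probe values for a field of the given type."""
    common = [None, "", "' OR 1=1 --", "A" * 10_000]
    if field_type == "int":
        return common + [-1, 0, 2**31]
    return common


def fuzz_requests(spec: dict):
    """Yield request bodies where exactly one field has been mutated."""
    base = {name: ("1" if t == "int" else "x") for name, t in spec.items()}
    for name, field_type in spec.items():
        for bad in fuzz_values(field_type):
            mutated = dict(base)
            mutated[name] = bad
            yield mutated
```

Each generated body is sent to the API; any response that returns data instead of a validation error is a finding worth investigating.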

In essence, you want a system that performs ongoing monitoring, generates alerts, and reports back to you quickly so you know everything is locked down and secured. Many businesses throw their hands up at this, saying they don’t have time, but API security and ongoing security monitoring are mandatory, not optional.

If your team doesn’t have the expertise or bandwidth to cover all of the above API security elements and perform ongoing monitoring, our partner Noname Security specializes in this. The Noname API Security Platform proactively secures your environment against API security vulnerabilities, misconfigurations, and design flaws while providing API attack protection with automated detection and response.

If you aren’t actively pursuing security measures, it’s only a matter of time until someone finds your company and you have a data breach. Simply put, don’t be that company.

To take the initial steps to secure your business, see our API Security Assessment & Remediation Plan offering. In just a few short weeks we’ll help identify your API security risk and provide a remediation plan to address the greatest risks to your business and technology roadmap.

Platform Migration & Modernization

Solving the Most Critical Data Problem for Insurance Carriers

In our webinar, How to Improve Insurance Risk Calculations with Data Integration, our CEO & Founder, Aaron Shook, and Director of Strategic Services, Ron Reed, review the data interactions required to estimate risk, the pitfalls of a standard approach to calculating policy risk, and the strategic solution to keep your business profitable and competitive within the marketplace.

As carriers continue to pursue their digital transformation, addressing this problem is foundational to the success of those critical investments. Here’s a high-level overview of the scenario and the importance to long-term profitability. 

The Legacy Approach to Accessing Third-Party Data is No Longer Viable

For most carriers, the policy management system is the center of their universe where all third-party data is plugged directly in via point-to-point integrations. However, this legacy integration approach for connecting data from vendors like CoreLogic, TigerRisk, and more directly into your policy management system is not scalable. 

According to Aite-Novarica, more than 75 data providers serve insurers, and that list continues to grow. Yes, you read that right: more than 75.

What are some hidden business costs of the standard approach?

      • Drastically increased project timelines and risk to enhance, modify, and/or swap third-party data services
      • Higher underwriting labor costs
      • Decreased agent satisfaction
      • Increased outage times
      • Higher total cost of ownership

Why make swapping third-party data providers more challenging and time-consuming than it needs to be? Without the ability to quickly and seamlessly access the best data to power your risk calculations, the profitability of every policy is at risk.

How to Maximize Profitability: API-Led Integration Strategy

Operational support and flexibility are core to any successful IT organization. So what’s the solution to help your insurance business maximize profitability and competitiveness in the marketplace? An API-led integration strategy.

By leveraging an API-led strategy:

      • Underwriting can build a more trusted relationship with IT to enable new risk-model capabilities through integrations that are faster and lower-risk for your business
      • You control the access to third-party and internal data
      • You have the ability to scale and consolidate your risk capabilities as you add new products and offerings, enabling the utmost flexibility in all areas of your business
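The structural idea behind these benefits is a layer of indirection: the policy system calls one internal risk-data API, and which third-party provider answers is a configuration detail behind it. A minimal sketch, with placeholder provider and method names (the real providers named above expose their own vendor-specific APIs):

```python
# Sketch of API-led decoupling: the policy platform depends only on an
# internal RiskDataAPI; swapping third-party providers means swapping one
# adapter, not rewiring point-to-point integrations. Names are placeholders.


class RiskDataProvider:
    """Interface every vendor adapter implements."""

    def property_risk(self, address: str) -> float:
        raise NotImplementedError


class VendorAAdapter(RiskDataProvider):
    """Hypothetical adapter; a real one would call the vendor's API here."""

    def property_risk(self, address: str) -> float:
        return 0.42  # stubbed risk score for illustration


class RiskDataAPI:
    """The single system API the policy management system calls."""

    def __init__(self, provider: RiskDataProvider):
        self.provider = provider  # swapping vendors changes only this line

    def score(self, address: str) -> float:
        return self.provider.property_risk(address)
```

Because the policy system only ever sees `RiskDataAPI`, onboarding a new data provider (or A/B-comparing two of them) never touches the policy management system itself.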

Solving the Most Critical Data Problem for Insurance Carriers

Your third-party data and its application must not be locked into your policy management system. Risk assessment requires data, and the ability to take control of that data as new providers become available and customers demand more personalized experiences. Our on-demand webinar, How to Improve Insurance Risk Calculations with Data Integration, dives into the challenges of this scenario and how an API-led data integration strategy can help. If you have questions or want to discuss your integration challenges, contact us today.