Mintz on Air: Practical Policies - Can AI Make Good Decisions in the Workplace?
In the latest episode of the Mintz on Air: Practical Policies podcast, Member Jen Rubin hosts a conversation on the role of AI in human resources decision making in the workplace. This episode is part of a series of conversations designed to help employers navigate workplace changes and understand general legal considerations.
Jen is joined by Member Andrew Matzkin to discuss:
- The Biden and Trump Executive Orders on AI
- AI, Human Decision Making, and Manager Involvement
- Bias and Trust in Workplace Decision Making
- Key Takeaways and Future Considerations
Listen for insights on how AI could impact employment decisions, where human judgment remains critical, and what employers should keep in mind as they consider integrating AI into workplace processes.
Can AI Make Good Decisions in the Workplace? - Transcript
Jen Rubin (JR): Welcome to Mintz On Air, the Practical Policies podcast. Today's topic, Can AI Make Good Decisions in the Workplace? I'm Jen Rubin, a Member of the Mintz Employment Group with the San Diego-based Bicoastal Employment Practice representing management, executives, and corporate boards.
Thank you for joining our Mintz On Air podcast, and I hope you have had the opportunity to tune into our programs previously. If you have, you know that my guests and I have been discussing a variety of employment related topics and developments. If you have not tuned into our previous podcasts and would like to access our content, please visit us at the Insights page at Mintz.com.
Today, I'm happy to be joined by my partner, Andrew Matzkin, who is also an employment lawyer, but who is based in the firm's Boston office. Drew is an employment counselor and litigator and advises his clients on all aspects of federal and state employment matters including discrimination and harassment, workplace health and safety, employment handbooks, employment agreements and separation agreements. Hello, Drew. Thanks for joining Mintz On Air.
Andrew Matzkin (AM): Hi Jen, glad to be here and thanks for that great introduction.
(JR): I would like to chat with you today about a topic that's been on my mind as an employment lawyer, and I'm sure on yours as well: What is the role, if any, of AI in human resources decision making in the workplace? What I'd like to explore with you today are any concerns about legal liability relating to the use of AI in the workplace.
You know everyone is talking about it and many people are as excited about it as they are worried about AI's role in human resources management. Let's be honest, what we're talking about is turning over human thinking or the lack thereof to a machine. Before we dive into the mechanical impact of AI decision making relating to human resources in the workplace, I want to divert briefly to the executive order that the Trump administration issued recently concerning AI in the workplace. Can you tell us what that executive order said?
The Biden and Trump Executive Orders on AI
(AM): Sure, I’m glad to jump in and I think you agree with me, Jen, that we're just waiting for the Trump administration to finally act on something. It's been boring. It's been just so quiet, and the lack of activity has been concerning to both you and me. Hopefully, at some point they do something. It's a step that may have gotten lost among the many steps that the administration's taken recently, as it's just been such rapid fire, both as it relates to the employment context and a number of other areas.
The story starts maybe a couple of years ago with the Biden executive order on AI, which focused on ensuring that as companies continue to work with AI, invest in AI, and improve AI, specific safeguards are implemented. These safeguards might relate to cybersecurity, or to ensuring that the facts used and the output produced are accurate. Or they might relate to ensuring that when AI is used in the employment context, it's done in a way that's consistent with and reflective of our anti-discrimination laws and civil rights protections. These were the key points from the Biden administration.
I suppose not surprisingly, when the Trump administration came in, one of its executive orders immediately rescinded the Biden executive order and essentially embraced the idea, consistent, I would say, with the Trump administration's view of a lot of businesses, of allowing these businesses to run as they want and go about their business in a way that complies with the law, what's in writing, what's in a statute, but with no other requirements applied.
It stripped back all of Biden's executive order requirements on AI and focused on sort of this unfettered ability to invest and conduct that business as you would like, again, provided that you, at a minimum, comply with any laws that are currently on the books.
(JR): They also took down some guidance about AI from the EEOC and DOL websites, correct?
(AM): They did. Those were removed as well, stripped right off the bat. And I will note that when the Biden executive order on AI was issued, you could see, as an impartial observer looking at what's happened since, how supporters of the Trump administration would say, "Well, see, this approach is exactly what we don't like. We've got AI with this great use, and now you have to scroll down and scroll down to get through the Biden EO with all of these new requirements and limitations on it." And you could see a Biden supporter saying, "That's right. There's some great power here, but it could be used in ways that concern us."
In any event, I don't think it came as a surprise, certainly not to you and me, though perhaps to other observers, that the Trump administration would scrap that order along with any related statements that had been at the EEOC or DOL.
AI, Human Decision Making, and Manager Involvement
(JR): Without question. In fact, relating to the EEOC and DOL removal of the guidance issued by the Biden administration, as well as by administrations before President Biden's, I want to turn to a topic that ties into this. Many employers have to make workplace decisions on a daily basis about everything. Those decisions have to be fair and unbiased. We still have Title VII, our anti-discrimination law, in effect.
In fact, you would hope that decision making in the workplace would promote the things many of us believe sustain a successful business run by humans: productivity, retention, and morale. These are qualities that are inherently and uniquely human.
I want to ask you a very unfair question right now: How can a machine learning tool incorporate the nuance involved in human decision making? I know that's super unfair, but I want to hear what you think about how that nuance gets incorporated and used in the workplace.
(AM): Sure, and I don't think it's unfair. I would flip it and say it's a question that we all have to ask as counselors to businesses, and that businesses have to ask themselves, because the fact remains that we're not going back from AI. With the efficiencies and value it can create, there's such potential that its use will be expected, and in many business situations required. There's just too much good that it can do.
It's not unfair, and the question for us and for the businesses that we counsel is, can we find a balance between a way to use it to create that efficiency and create that value, but not do it in a way where we miss the value that can be created by that specific human review, interaction, input, and decision making?
As an example, let's take a normal accommodation request that you might get from an employee. You could say, “Wait a minute, why don't we plug that information, all of that relevant information that we get from that ADA certification that our clients use, into an AI model and let AI tell us what we should do in terms of the type of accommodation that should be granted, how long it should be granted for, and what types of conditions we should apply to it.” That's a fair question to ask. Potentially you could have someone say, “Well, the answer is pretty clear too. Plug it in and we should follow that instruction.”
On paper and maybe as a hypothetical, I don't mind that answer but would say what if that individual had just returned from a previous leave or just gave birth to twins or just had a family member who had passed away or some other piece of the puzzle that would be highly relevant to the decision about what accommodation you might provide, but that an AI model would never catch.
So the answer to your original question is that I don't think an AI tool at this point can incorporate those nuances, or that it would reliably give you accurate, helpful, even lawful advice; it could give you unlawful advice on what to do.
But there is an expectation, and at some point there will be a requirement, that AI handle what you might think of as the initial work a lower-level administrative employee would do. So you're not relying on it as the end-all be-all, and you're not relying on it to give you that specific advice and guidance, for instance, in that accommodation situation, but it can at least give you that initial processing and framework. I've gone on too long here. What do you think?
(JR): It's interesting because you can use it as a tool at the base level and then have it guide you. Here's what AI suggests you should do with an accommodation request, but we need you to speak to the people who know this person. Make sure that, alongside the AI tool, you bring in the human side of the thinking that does capture the nuance.
So I like this idea and it could be workable, but it's important that we not miss the importance of having the on-the-ground manager involved in the decision making, because that person may have that knowledge, that nuance, of those important facts that a machine learning tool is not going to pick up, which also takes you back to why it's so important to have solid manager training and education about how to handle things like accommodations and disability leaves and things like that. Of course, everything else that goes into the workplace doesn't just stop there.
Let's use that as a segue for a moment, because this is something that's very interesting to me and I know to you as well, and that's this concept of bias.
Bias and Trust in Workplace Decision Making
As humans, we know we have built-in biases. Our brains are a complex neural network formed and driven by background, education, and upbringing. All these things influence how we think about things. Of course, all these things also make each of us so unique. How can we trust a computer to eliminate all of these biases in human resources decision making? And frankly, do we even want to do that?
(AM): I've thought about this too, because a good part of what we do when we counsel clients on any type of decision making in the workplace is explaining that if there's ever a dispute about that decision (say it goes from a decision to let somebody go, to negotiation of a package, to the person declining the package, claiming they've been treated unlawfully or unfairly, and proceeding with litigation), you can boil away all the legalese. What I tell clients is that you ultimately have to answer two questions about the decision you made: why, and why now? If you can't provide a legitimate, non-discriminatory, non-retaliatory basis for the why and the why now, you have some difficulty, and as you know, we have some difficulty in defending that case.
Can using AI help you nail down that answer for yourself? You are plugging the facts into a model beforehand, and you are having it explained to you whether or not you have a basis for that decision, or maybe more relevantly, you have to choose between two individuals.
Let's say it's in the context of a layoff. Okay, well, maybe I could just plug in some information on each, and then I can have AI tell me, or have that program help guide me, as to which would look like the more legitimate, non-discriminatory, non-retaliatory selection. It's helping to eliminate any bias that I otherwise might be incorporating into that decision. For example, I find one person annoying, or I've worked directly with that person and they've made some mistakes for me, while the person I'm selecting them over has made way more mistakes, just for someone else.
You're incorporating your bias into that decision, and AI could potentially help eliminate it. However, the downside is twofold. One, this gets back to the first point you and I talked about. AI can miss some other key points in that decision. Two, ultimately AI is making that decision based on how you're inputting the information and what you're inputting. You're incorporating maybe unconsciously some of your bias as you are preparing to use that AI tool to eliminate the bias. I know that sounds convoluted, but that's how I view it. What's your take on it?
(JR): It doesn't sound convoluted at all because it’s about whether you are swapping bias by machine for bias by human. I don't know that there's a way that you can actually eliminate all bias because there's always going to be a differing point of view about whether this person was the right person to be selected or whether the accommodations granted were the ones that were appropriate under the circumstances.
To me, there's always going to be room for dispute because human decision-making leads to differing opinions about whether that decision making is correct or not. I don't see how you can remove that from AI decision making as well.
Key Takeaways and Future Considerations
(AM): I'll note this because that's a great point, and I also don't want to take us too far off-topic, but you raise another good point, which is this: if that's the case, and you're an employee at a company and you are informed, or have an inkling, that some decisions are being made based on AI programming, would you start changing some of your behavior? As a human, instead of focusing on doing your job, or doing it as you think it should correctly be done, you might start thinking about how data about your output might be inputted into that program and how that may impact decisions regarding bonuses, promotions, or separations. It's fascinating but frightening. You can tell me your view on it.
(JR): Yes, because you're turning a human manager into an AI manager and conforming your behavior, conduct, and work product in order to achieve what you think are the goals being sought, whether by machine or by human. Whoa, we're getting very deep now.
(AM): We sure are. It's very philosophical for this podcast, but it's great.
(JR): These things, like I said at the outset, are really interesting. They're frightening at some level, but also, how do we harness this to make businesses more productive and decision making more productive? When you just mentioned having AI make the decision, for example, in the context of a RIF (a reduction in force), I could think of plenty of managers who would prefer not to be put in that position. Most people don't want to make difficult decisions. And there's a reason they don't want to make those difficult decisions.
That's because the employment relationship is really a human relationship at its core. Those of you who have followed my podcast know this is a frequent theme, one that I absolutely think is true, and I keep raising it because, to me, the employment relationship is one of the most important relationships in most people's lives, not everyone's, but most. It's a relationship founded on trust and communication, and, I'm going to throw this in, mutual respect. These are the things that foster an excellent relationship, one that we really ought to strive for in the employment context.
Andrew, how do you put your trust in a machine? You do so every day when you operate your navigation system in your vehicle. You punch in your coordinates and your vehicle tells you where to go, and that's a machine you trust. So how do you do that?
(AM): I will note that it's amazing how, when these programs first come out, whether it's Waze or a rear-view camera, you say, "I'm going to stick with what I do. I'm really comfortable. I like the map. I'm always going to turn around when I back up. I don't need this technology." But then a week, a month, maybe a year later, we are wholly and entirely dependent on that technology, because 99.9% of the time it does make our life a little easier.
It's difficult, especially in the workplace where so much of the value that is created is by humans. So much of that value is facilitated and supported by the human connection and the human interaction. I can see it becoming a difficult proposition when an employee perceives that their productivity at work, their value at work, and their performance are going to be determined by an AI program rather than by humans.
One example that I'll leave you with is that an AI program will just look at the hours that you've worked or the units that you've sold. For instance, if your manager had to go on leave or had to leave work early for a family reason, you would normally cover that for them. It's a nice thing to do. It's the right thing to do. It helps them. It also helps you. And your manager would know that you did that.
If your productivity, value, and performance are only going to be calculated pursuant to an AI program, there won't be any accounting for those circumstances. It will just be the hours worked or the units sold, just that straightforward data. Does that impact, moving forward, the employee's ability and motivation to create value for the company? I've asked you more questions than I've answered, but it's all very interesting.
(JR): Well, this entire topic doesn't lend itself to a brief conversation, but the purpose of our podcast today was to discuss some of the issues to get people thinking about these things as they go about their daily business. Whether they're being managed or whether they're acting as a manager, it really does add value to how we think about our employment relationships and our trust in the workplace. Developing those trusting relationships is a foundation for successful employment relationships.
Once again, I'm Jen Rubin. Thank you, Drew Matzkin. Thank you to those who have tuned into our Practical Policies podcast. Visit us at Mintz.com if you would like to access more of our content and commentary.
