AI & Cybersecurity Best Practices for Non-Profits
- Published: Jan 21, 2026
This webinar, designed specifically for nonprofits, discussed best practices for selecting and using AI tools and approaches to safeguarding data and trust.
Transcript
Candice Meth: Great, thank you so much, Bella. My name is Candice Meth and I'm the national leader for not-for-profit services at EisnerAmper, but I also wear a second hat: I am also the partner in charge of solutions for our clients. What we mean by that is we want to make sure we are always adding value and able to address all of our clients' needs, rather than just working on the traditional attest and tax services. With that in mind, I can't think of a more pressing need than talking about cybersecurity and AI, and so I'm delighted to be joined by Rahul today.
Rahul Mahna: Thanks, Candice. Good morning and good afternoon, everyone. My name is Rahul Mahna and I'm the National Practice Leader for Outsourced IT Services. We're focused on managed IT, cybersecurity, and AI implementations; that's our core focus right now. A lot of our clients in the firm are nonprofits, so when Candice said, "Let's do a webinar together," I got very excited. We have a lot of interesting content to get through, so both of us might turn up our New York speed a little bit here, but we hope you take away a lot of good information.
Candice Meth: Perfect. So let's take a look at what we'll be covering today, which includes the nonprofit risk landscape, best practices, and AI adoption and ethics. We'll also be taking your questions, so please do submit them as we move along. Rahul, my first question to you: why are nonprofits such an attractive target for cybercriminals?
Rahul Mahna: Yeah, it's really interesting, Candice. As you and I have worked on many clients together in the firm, and as we talk to a lot of nonprofits, I also take a step back and look at the bad guys. Traditionally, we thought of hackers as people in hoodies sitting in a basement just trying to do bad things. But now they're much more sophisticated. They've realized, just like every other professional, that their time is worth money. So where do you go to get the best return on your time? As a hacker, you look for data, and you look for people that have a lot of assets and money. And Candice, as you know, nonprofits usually have a lot of both. A quick statistic: about a trillion dollars a year is raised through donations, and that's a really juicy target for the bad guys; they like to go after this type of profile. That's really one of the reasons I think the targeting of nonprofits has dramatically increased in the last five to seven years, from what we are seeing in the practice.
Candice Meth: This is really interesting in terms of the types of cyberattacks being experienced by nonprofits. What do you think is the single biggest cybersecurity risk nonprofits are facing today?
Rahul Mahna: When we talk to our nonprofits, and Candice, you can probably comment on this, the question is priorities, and there are a lot of them; I know you work through this with many of our clients. The one thing I keep seeing is staff. When you look at the areas where organizations are typically penetrated, the first two, phishing and business email compromise, are usually related to human error. So staff, I would say, continue to be the focus of the hackers, the bad guys trying to come up with creative ideas. Actually, Candice, I'm going to ask you: why do you think staff are such a focus of the bad guys, specifically at nonprofits?
Candice Meth: I think it's exactly what you said, which is that the bad actors have gotten a lot better in terms of the messaging. The emails look real. The sender really does look like somebody you work with. We've seen examples where you get a voicemail and the voice has actually been pieced together through technology, so it really does sound like your CFO or your executive director. And so using staff as the entry point to perpetrate a fraud is, unfortunately, an easy way in.
Rahul Mahna:That's exactly right.
Candice Meth:And I want to touch just quickly on ransomware, which is at the bottom of this list, but unhappily, we are seeing more of our clients be targeted for that type of attack. And so what I always say is the best offense is a good defense. And so I think it's important as our clients are thinking through some of these best practices that you have an approach, should you be targeted for ransomware? Are you going to pay the ransom? Is it essentially against your ethos to give in to that and therefore you absolutely are not going to pay the ransom? And if you've made that decision in advance, what's at risk? How many weeks or months of data do you think you might lose? Are you testing your recovery processes, your backup processes so that you know what you might be at risk for? And ultimately how quickly can you get back up and running if you're suddenly locked out of all of your systems tomorrow? That's a scary thought to have and a scary conversation, but I think being prepared for that type of scenario is really important.
Rahul Mahna:Yeah, I agree.
Candice Meth:This is very, very interesting in terms of what's actually being targeted.
Rahul Mahna: It goes back to the mentality of the bad actors. Again, they're not sitting in their mom's basements trying to do nefarious activities. They're sitting, and I've seen this personally, in class A office space, going to work with a briefcase every day, dressed in business attire, and they're out there looking for where they get the best return on their money. If you go back to nonprofits and where the bad actors would get the best return for the time, money, and effort they spend: boy, nonprofits collect a lot of PII, and there's a lot of readily available information with really good financial implications if they could get their hands on it. I believe nonprofits don't realize how enticing their data is to bad actors, and how they keep and collect it is a challenge. Candice, I know we've talked a lot about some of the internal audits and things you do with nonprofits, and a lot of the focus is: where's our security? Where's that data, the financial data, the human data, and how do you protect it?
Candice Meth:And so with that in mind, how should nonprofits prepare for an incident response scenario? What's the best way to be prepared for that?
Rahul Mahna: I think you touched on it a little as well: think about it in advance, and remember it's not if but when you get a cyberattack. So when we get a cyberattack, how do we have our plan in mind? I can't tell you how many times we encounter this: on Friday everybody goes home happy, and on Monday morning the world has collapsed and money is missing. Or there's some other, even worse kind of damage, reputational damage, that has been done over the weekend. That's a whole area you can't quantify: what happens to a nonprofit's reputation. So I think the best thing to do is what we call an incident response plan: prepare for that incident and get all your stakeholders together. It could be a marketing firm that you use, your legal team, your auditors, your accounting team; everybody should be at the table, ready to know what to do when it happens.
Candice Meth:Great. And so we've reached our first poll, I should say our second poll. Excuse me. What area typically poses the greatest cybersecurity risk for nonprofits?
Rahul Mahna:While we're waiting for that, Candice, I have a question for you. As we talked about incident response, do you feel your nonprofits ask you to be involved with planning?
Candice Meth: I would say 50/50. We do have a fair number of clients that, as part of the audit committee meeting, include a standard agenda item on their response policy, the results of testing it, et cetera. And then we have other clients that don't necessarily make it a standing item. Unhappily, given the uptick in breaches, I think we're starting to see it added as a standing agenda item more and more.
Rahul Mahna:Interesting. Very interesting.
Candice Meth: Great. So I'm going to go ahead and move on. You have just a couple more seconds to respond if you haven't already. All right. So it seems like people agree with us that human error is the greatest risk. We're going to move on to some fundamentals. There are a lot of acronyms on here. Rahul, if you don't mind, could you take us through the various frameworks?
Rahul Mahna: It is a little daunting when you look at these, but again, we want to give a lot of information today and hopefully spark a few ideas for our viewers. So when you look at risk: how do you measure and manage it? That's the most important thing we talk about when we work with our nonprofits, and really almost any organization. You asked a good question earlier, Candice, which is how an organization should prepare, and I cannot think of a better way to prepare than an IT risk assessment. Just as you're supposed to go every year for your health checkup, an organization should get an annual IT health checkup, because the IT industry is changing every day. You hear about AI, which we'll talk about later, in the news every day; it's changing.
Everything is evolving; the bad actors are evolving every day, so you cannot check your IT systems once and set it and forget it. You have to have some type of framework that is adjusted and adopted every year to recognize the changes happening in the world of IT. So here are some framework standards. We only put four; there are others, but these are the ones I typically see most often with nonprofits. I'll start at the top with NIST. It's a national standard created by the government, along with some other entities, for how you should have standard IT policies, procedures, and internal controls in place. It's very robust, very wide, very deep, and covers your entire organization, so it is quite an undertaking when a nonprofit goes to do a NIST framework analysis. One step below that is a CIS analysis, which was created by the Center for Internet Security and is organized into three groups.
And I like this one a lot because, while it is fully comprehensive, you can pick and choose the group and start with what is most appropriate. The first group is very focused on cybersecurity, so we like to start there with a lot of our nonprofits: it's finite, it's scoped, and you can grow from there into the subsequent groups that give you a more total look. It's a very comprehensive approach to managing your risk. The next two I listed just because people hear about them a lot. PCI: typically when an organization has a lot of credit card activity, and I know a lot of our nonprofits accept credit cards for donations, memberships, and so forth, they often have to get a PCI analysis done. And what that is, is an assessment of how safe your credit card processing is.
So it's a very focused evaluation, and depending on how much processing you're doing, there are different levels of PCI. And then HIPAA, which you might or might not have heard of, is very focused on healthcare. It has a lot of the standards from NIST and CIS, but it is extremely targeted at PII. A lot of the nonprofits we deal with, such as hospitals, are healthcare oriented, so they focus on the HIPAA standard, but they still do a NIST and possibly a CIS, because they still want that total view of organizational risk, and only a NIST or a CIS really gets you there. HIPAA is very, very focused on data, so it's a great framework, but it's healthcare-focused. If you're not in healthcare, I would say the first two are a great starting point. That was a lot.
Candice Meth:Perfect. So I know it's hard to pick your favorite, but in terms of if a nonprofit has to adopt a particular framework, which direction would you steer them in?
Rahul Mahna: Yeah, I really like the CIS because it's finite. It starts with about 20 controls, you can work through those 20 controls, and it's a great starting point to go forward with.
Candice Meth:Perfect. All right. We're moving into some best practices now.
Rahul Mahna: So it's important to have a plan. And I know right now we're focused on the background in cyber, but we will get to AI, so I don't want anyone to think we're not getting there; we're just setting our basis first. So, cyber: how do you make an organization secure? Candice, we've had this a few times, where organizations have come to us wanting that kind of checkup: where should we be secure, and how do we do it? We listed these in an interesting order, but the first is the IT risk assessment, which we just spoke about extensively. That's the annual health checkup, and I strongly recommend you get a third party to do it. As you know, Candice, the IT department cannot audit themselves for a health checkup.
A third party is a much better way to get that risk assessment done, and it gives a lot of comfort as well. We mentioned that human behavior is the most important thing to train in order to prevent bad actors from coming in. So we really like an extensive, cohesive employee training strategy that happens on an ongoing basis; again, not a one-and-done, because the environment keeps changing. You brought up incident response planning. Boy, I just think this keeps going up the ladder. It used to be something people didn't talk about, but now, as you pointed out, it's really important, especially with the deepfaking that's happening because of AI. For those that don't know what that means: you can impersonate someone's voice, as you mentioned earlier, Candice, or impersonate someone's body by putting your CEO's face, or a board member's face, on a totally different body.
This is becoming so easy to do that you really have to have a plan in place for what you do when it happens and how you protect your reputation. And the interesting one I wanted to add, which people might not think about, is cash disbursement audits. Nonprofits in particular, I've found, have staff who have been there for many years and processes that have been there for many years, and they've kind of forgotten who has access to the bank accounts, the post office boxes, the credit card machines, and so forth. All of that leads to how cash actually goes out of the organization, and Candice, you know this area much better and could comment on it. But I think if you look at how the cash moves, you get a much better sense of how to protect it. And I'm going to let you talk about that, Candice, as well as the 990s; I know we had a good conversation about that.
Candice Meth: With a lot of what we're seeing in terms of fraudulent banking activity, our recommendation is to go back to the tried and true methods. The bad actors are sending emails or leaving voicemails, and the voicemail will sound like the person you work with. So rather than respond in an email, pick up the phone and call your CFO or your executive director in real time and double check that they actually want you to take the action, or get up from your desk, walk over, and have the conversation. We've seen clients actually test their staff with four different sound bites: one clip is the actual executive director and the other three are fakes created with AI, and most employees could not pick the right one. Unfortunately, the schemes are getting that good. So go back to picking up the phone and calling.
And do the same with your bank: make sure the bank is given specific instructions not to process wire requests without verbal confirmation. The 990s, I think, are a very timely conversation. You may recall that some states require a wet signature on tax forms; this was before some of them moved to an electronic submission process. And a lot of states have a free searchable database where you can pull up old 990s. Unfortunately, we saw an instance where, because older 990s were publicly available and searchable on some of these websites and carried someone's actual signature, that real signature was lifted. A bad actor went to a reputable bank, opened a loan using that forged signature, and used the business address and phone number on the loan documents. And the loan was processed. It was a very expensive endeavor to prove to the bank that it was in fact a stolen identity. We always think of PII as birth dates and Social Security numbers, but literally your signature is now something that can be stolen and used in a nefarious way. So again, these schemes are evolving.
Rahul Mahna: They really, really are. And it leads us to the last point, which is AI. It's really AI that's making a lot of this evolve, making things easily searchable, replicated, and duplicated, and there are free tools out there. People think, it's easy, it's free, I'm a nonprofit, I can use this; but in the next handful of slides we're going to talk about the risks.
Candice Meth:Perfect. And so before we move on from here, Rahul, I just have to ask you, in terms of training, how often should nonprofits conduct cybersecurity training?
Rahul Mahna: That's a really good question. I will tell you, when we work with our nonprofits on, let's say, phishing training, they typically ask to do it quarterly. And I think that has changed. Years ago people said once a year is good enough; then it evolved to semi-annually, and now we like to suggest quarterly, and our nonprofits are very open to it, because once a year definitely doesn't give you the latest and greatest on what's happening, and quarterly allows a little more repetition, so people start thinking about it a bit more. It's obviously not the same education or training every quarter; it's different. So we like a little more frequency on the training and the testing. On the training side, we're big believers in microlearning, what I'll call learning snacks. If you ask an employee, "Hey, we need you to watch one hour of training," well, nobody wants to do that; it's so difficult. But if you can get into a program, and they're out there, where you get learning snacks of one, two, or three minutes, those seem to be far more effective in getting your team to digest it, understand it, and appreciate the significance of the organization's IT risk structure. So that would be my thought on that.
Candice Meth: And just one quick, great example. Obviously, when the phishing emails go out, they're made to look attractive; they include things that make you want to click. We know of one example where an email went out saying, "We've got great news. You worked so hard this year, we're going to give you an extra three days of vacation." And most of the employees clicked on it to learn more about their extra three days. While that might seem cruel, and all the staff ended up in remediation courses, it's important, unfortunately, to think it through, because the emails you receive will be equally attractive.
Rahul Mahna:Absolutely.
Candice Meth:Okay, so we're at poll number three. Which frameworks are comprehensive evaluations of IT risk? And I believe here you can select more than one, right, Rahul?
Rahul Mahna: Yep, you should be able to, and hopefully that's the case. Candice, when you're doing your audit committee meetings, you talked earlier about including more cyber in audits, and I know we've worked together on a few clients. At the board level, at the C-suite, are they asking for a framework? Because there's not really a compliance standard they have to meet; in finance you have to meet a standard, and you take your framework to meet the standard. How are your clients thinking about their risk?
Candice Meth: I think it goes to basic blocking and tackling. Our audit committees are asking us: how is this tested? What are our vulnerabilities? How does this interplay with the audit? And on the flip side, they're taking a look at our insurance. Again, unhappily, it may not be an if but a when situation. So if we are breached, when we are breached, do we have the right insurance in place to protect us? We're having both of those conversations.
Rahul Mahna: Yeah, that makes sense. I'm glad to hear they're doing that. Now, as you pointed out really nicely to me, oh, we got some answers already, there's a lot in conflict, a lot of things a nonprofit has to do, and how do you pay attention to all of them at the same time with the same weighting? It's hard. It's very hard. Exactly.
Candice Meth:Yep. Okay. Alright. So it looks like at 81% most people selected NIST.
Rahul Mahna:Yeah.
Candice Meth: All right, so I know we've been talking about a lot of scary scenarios, but I'm really excited to talk about AI and the opportunities it presents to the nonprofit community.
Rahul Mahna: Absolutely. AI is a large bucket, and again, I just want to remind everybody, I'm assuming questions will start from now forward, so we do have that Q&A box if you need it. There's so much we could talk about; Candice and I could probably do this webinar over three hours, but we have to do it in one. So we're just going to hit some high-level points. In the context of AI today, we're going to focus on these core areas: how AI is helping make predictions, how AI is doing repetitive tasks through automation, and how it's recognizing patterns in an organization. So we're going to somewhat limit the scope. We'll also focus a little on education, because we saw a lot of people from the education vertical are here with us today. And maybe, Candice, the next slide will give a little better view of that. Sure.
Okay. So these are the three core areas when we think of AI and nonprofits: generative, predictive, and automation. Below each, you can see examples of how we see our nonprofit clients using them. In a practical sense, you really need to have a plan, and once you have a plan, you need a platform. This is something we'll talk about more as the slides go on, but it's really important, as with anything in life, to make a plan and pick the right partners or software that are secure. Because the last thing we want, back to our earlier slides, is data leakage. We don't want data leaving the organization. That's the biggest concern both Candice and I want to convey: you don't want that to happen, so how do you mitigate it the best you can?
Candice Meth: I always think of this example when it comes to AI, and I just want to share a personal story. I wanted to travel from Port Washington on Long Island to New Rochelle in Westchester, and I thought I'd be smart. Rather than use Waze, which is what everybody uses, I was going to use AI and ask it the best way to get there, the fastest way. And AI gave me the answer, and the answer was basically to take a boat across the water. So rather than get frustrated with the AI, I have to think about my prompts. I didn't say, "What's the fastest way to drive there?" I said, "What's the fastest way to get there?" And unhappily, of course, I am not going to be buying a speedboat at any point, but it answered the question that I asked.
And so as we think about how we might leverage the opportunities of AI, part of what we need to do is avoid quick frustration and actually think it through. In this instance, it didn't give me the answer I wanted, but I do want the AI to give me all possibilities. If I say going forward, "No, only tell me how to drive there," then perhaps when I ask other questions it won't give me the entire realm of possibilities. So when you're interacting with AI, it's really important to think about the prompts you're using and how you want it to respond to you. With that in mind,
Rahul Mahna:Go ahead. I'm sorry, Candice.
Candice Meth:Well, I was just actually going to ask you a quick question in terms of whether or not you think AI can help nonprofits overcome staffing shortages.
Rahul Mahna: I do. And I will add to this chart that when we talk to our nonprofits and the C-suite, there's a mandate coming from the board level that you need to use AI, and the question is how. I think you illustrated really well, Candice, that you have to be thoughtful. But I also want to say a lot of nonprofits should give themselves credit: they've been using a version of AI already. A lot of our nonprofit clients have been using MailChimp, an email automation tool, for years, and MailChimp is essentially a version of AI. You send out your emails that way and communicate with your donors and constituents. Social media: a lot of our nonprofits have been using tools that auto-post for them, so they have regular communication with their constituents.
So some of those ideas are already in process. Now you have to figure out how to continue and enhance them. But again, for those who are a little older: back in 1999 we had the dot-com craze, and I know some people here were probably born around then, but for those of us who actually lived through it, there were search engines. Using your example earlier, Candice, about how you search: there was Ask Jeeves, there was Yahoo, and today people barely remember they existed. The reason I mention this is that AI will evolve the same way. The thing you don't want to do is pick a platform at risk of shutting down, where all your data will be lost.
That's the really important thing I want to mention over and over again: be very thoughtful about the platforms you use, how you use them, and what data you put in them. So to answer your question, Candice, and that was really long and drawn out: how do we get more resumes, how do we use different tools? AI is already hitting the resume market. There are tools out there that will let a job seeker auto-post a resume to multiple sites at one time, and there are tools on the employer side counteracting what the AI bots are doing, trying to pick and choose the right resumes. For example, I have a client in the nonprofit healthcare space that has to hire 4,000 employees a year on average, so they're inundated every day with incoming resumes.
What we are trying to do is build logic that goes through those resumes, picks and chooses, and learns from the client's experience what they find most relevant. It goes back to the candidate and says, "Hey, we saw you did one, two, and three; did you do four and five?", gets the candidate to interact with the AI tool, and comes back with four and five. Then the recruiters can much more succinctly see the core data points they would normally use and either discard or advance candidates on a much more rapid basis. This client has about 25 recruiters, and the goal is to close that gap a little by having AI give the existing recruiters better data to make those selections. So that's just one example of how we're seeing it used.
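[Editor's note: the screening flow Rahul describes can be sketched in a few lines of code. This is a minimal illustration only; the skills, weights, threshold, and function names below are hypothetical assumptions, not the client's actual system, which as described also involves learned relevance and candidate follow-up.]

```python
# Hypothetical first-pass resume screen: sum the weights of required
# skills a resume mentions, then keep candidates above a threshold.
# Skills, weights, and the threshold are illustrative assumptions only.
REQUIRED_SKILLS = {"rn license": 3, "patient care": 2, "epic": 1, "triage": 1}

def score_resume(text: str) -> int:
    """Score a resume by summing weights of the skills it mentions."""
    lower = text.lower()
    return sum(weight for skill, weight in REQUIRED_SKILLS.items() if skill in lower)

def screen(resumes: dict[str, str], threshold: int = 3) -> list[str]:
    """Return candidate names whose resumes meet the threshold, best first."""
    scored = {name: score_resume(text) for name, text in resumes.items()}
    passed = [name for name, score in scored.items() if score >= threshold]
    return sorted(passed, key=lambda name: scored[name], reverse=True)
```

A sketch like this only narrows the pile; as in the example Rahul gives, human recruiters still review everyone who passes.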
Candice Meth:That's excellent. So now I think we should just touch on again, some of the main uses here and how it can help with cybersecurity.
Rahul Mahna: I think we're tying it back together now, right? We spent a half hour talking about cybersecurity for a reason: AI is a new tool in the organizational toolbox. And I want to emphasize that our nonprofits in particular, because they don't have a lot of compliance or governance standards, are more open to using free tools than our corporate clients, and when they use those free tools, they're not thinking about the risk being introduced to the organization. I'll give you an example. We had a nonprofit whose HR department was using AI to redraft emails, but they were using a free AI tool, and they were putting in the employee's name, the action that happened with the employee, the remediation path, and the improvement plan, not realizing they were leaking all of this sensitive information into a free AI tool that will just keep it forever.
So it's really important to think that through: when we put our employee information, donor information, or any kind of healthcare or education information into these tools, unless they're secured, unless they're private to your organization, it is fair game, and these tools can use it to learn. Remember, artificial intelligence is intelligence; how do these tools get intelligent? We, as the world, are feeding them right now, giving them how we think. That's why they're called large language models: they're learning from everything we give them, essentially for free, and that's teaching them to become intelligent. But when you give everything away, you have to make sure it's kept within your organization's data structure, not available to anybody to learn from, listen to, hack, and pull that information out. The bad actors want it. They know people are using free tools; they know some of these tools will fail, some will get hacked, some will not get funded, and they'll go grab all the data that people have been putting in there for years to come. So it's something to think about.
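[Editor's note: one practical mitigation for the data-leakage scenario described above is to scrub obvious identifiers before any text is pasted into a public AI tool. A minimal sketch follows; the regular expressions are illustrative assumptions and nowhere near a complete PII filter.]

```python
import re

# Illustrative patterns only -- a real deployment would need a far
# broader PII filter (names, addresses, donor IDs, health details, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each pattern match with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Even a simple pre-processing step like this keeps the most obviously sensitive strings out of a free tool's training data; the safer option remains a private, organization-controlled deployment.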
Candice Meth: So as we move forward with AI, validating the results you're getting is going to be critically important. I had a client reach out and ask whether something was required under the IRS code. I searched the code and didn't see anything about such a requirement. Just to test the AI, I asked it the question, and it came back with the answer: yes, this is required. I said, that's interesting, let me see the citation. And the citation was someone's personal website, where essentially, in blog format, they stated that the thing was required, even though nothing in the IRS code actually pointed to it being required. So it's really important not only to look at the results but also at the sourcing; the tool does tell you where it's pulling those results from.
Rahul Mahna:That's a great point. And again, AI tools are learning from Google. They are going to Google and reading those documents. So if you have somebody posting false information in a Google listing or on a message board on Reddit, they're learning from that false information. And that's a really good point, Candice: you can try to make it your virtual doctor, but you don't know whether it's real data or bad data that you're pulling in. And it's really important to check with your professionals. And I know people are going to ask: where should policies come from? How do we validate? Do we have to tell people? I think the world is moving so fast, you need to ask your accountant, your auditor, your IT professional or your legal team what benchmark to use for some of these adoption principles, because frankly, nobody really knows yet, and the professionals are the ones doing this every day.
Candice Meth:And you raise a good point. In fact, we have a question related to this as well. When we're talking about ethical considerations, what should nonprofits keep in mind when using AI? So for example, you gave the example about an organization using AI to collect resumes. Is there a responsibility on the part of the nonprofit to disclose the fact that it's leveraging AI to fill a staffing need?
Rahul Mahna:Again, I would defer to legal counsel because I don't have the legal background to know. However, if I just use a general principle of thought, I don't think there's anything wrong with using AI. People are using it, as I said, for everything from simple email systems to review systems to writing letters. And so I like the concept of transparency. It's something that's always been an ethos of nonprofits, and I think transparency with your donors, with your board, with your reviewers is always best. Again, nobody's doing anything wrong if you're using a tool to filter 500 resumes down; there's nothing wrong with that, and it's probably been done in different versions over the years anyway, but now we just call it AI. Think of universities. I don't know the exact numbers, but I remember reading about Harvard and some of the other Ivies. They get thousands of applications a year and they have to whittle them down to 200 people. How do you do that funnel without some type of structured process?
Candice Meth:Perfect. And so let's talk about some best practices.
Rahul Mahna:I feel like I keep repeating myself, Candice, over and over, and I don't want to, but bad actors want data. And so I think about the strategy of how you protect your data and how you build a structure around it. That's usually what I like: strategy before structure. And so collect only the amount of data you need. We used to see our nonprofits try to collect everything, and I'm joking, but even their blood type, and you don't need that for a donor. You need just what you need. Think about what's your strategy, what's your purpose, and only collect the data that is really relevant. And think about that again: when you and I talked about the 990s and those wet signatures, boy, that really had me thinking. This data is out there, my personal signature is out there. Do I want my personal signature out there?
That's something you can control today. And I know, Candice, you and I talked about it: should we do it? How do you do it? Some states are changing their policies, I guess, on the 990s, as you said. And so it's all about data and privacy and how you comply with that. And again, what's the organization's mission and values, and do they believe in transparency? Should they make a statement on their website? Should they make a statement in Teams? If you're doing auto recording and transcription services, I'm seeing this more and more where clients are putting a statement in their Teams meetings that says, hey, you have to ask us for approval before you allow Zoom or Teams to do auto transcription. We as a firm, EisnerAmper, just instituted a policy around that type of AI as well. And so it all comes back to data: where does your data go, and how do you think about that?
Candice Meth:And can you give me an example? What's a quick win that nonprofits can implement today for better cybersecurity?
Rahul Mahna:When I think about our nonprofits, I would say that vertical market, from our experience, has been the slowest to adopt basic cybersecurity principles. I don't know why; it's just been a slow process. And as you said earlier, there are a lot of competing efforts going on, and how do you manage all of those as a nonprofit? But there are very simple tools. One is a password manager. I use one called LastPass personally; there are many others out there, and it's free, or you can use the paid version, which is I think $2 a month. So it's not a big financial exercise, but it allows you to do some really great things. I go to roughly 500 websites, and I now have a unique password for each of those 500 websites. In the old days, like most of us, I had five passwords that I reused across 500 websites; now I have 500 passwords. And then I think multifactor authentication is another one, where we still see nonprofits resisting a multifactor authentication strategy. So what does that mean? It's an automatic text message to the CFO. It is also, in my mind, what you illustrated earlier: voice verification of a wire transfer. That's adding multiple factors to a process before cash can leave the organization, to make it more secure. And so I think those are a couple of examples of very easy, low-hanging fruit that are not very costly and that you can do today.
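For the curious, the six-digit codes behind most authenticator apps and text-message factors follow a public standard, TOTP (RFC 6238): both sides share a secret and hash it together with the current 30-second time window. A minimal sketch of the idea in Python follows; this is illustrative only, not how any specific vendor implements it.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, at=None, step=30, digits=6):
    """Derive a time-based one-time password (RFC 6238 style, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32)
    # Count how many 30-second windows have elapsed since the epoch
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                              # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The server and the phone hold the same secret, so they derive the same
# six-digit code for the same 30-second window; that is the second factor.
shared = "JBSWY3DPEHPK3PXP"  # example base32 secret
print(totp(shared))
```

Because the code changes every 30 seconds and depends on a secret the attacker does not have, a stolen password alone is no longer enough to get in.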
Candice Meth:I'm so glad you brought up multifactor authentication, because earlier I touched upon taking a look at your insurance and making sure that you are adequately covered. A lot of times the policies have certain things embedded that say you must have multifactor authentication, and if you do not, the insurance company is not required to indemnify you for a loss. So I would really encourage everyone today to take a look at your policies and make sure that whatever they list as a bare minimum in terms of security, you have implemented today. And with that, we are going to move to the fourth polling question, which is: how likely are you to invest in AI-driven automation within the next 12 months? And while people are responding to this, a quick question for you, Rahul. What's your advice for nonprofits who are worried about data privacy with AI tools?
Rahul Mahna:My best advice is to be worried about data privacy with AI tools. On the corporate side, I can share that some of our corporate clients are now asking our team to do a risk assessment of a tool before they implement it in the organization. They're worried about their data and about leakage from some of these tools before they get started. Back to the story about AltaVista and Yahoo: they don't want to have their data in something that stops working a year from now, and then how do you get access to your data, and all those issues. So I would say the really thoughtful thing to do, if you're implementing AI, is to find reputable companies. Take Microsoft, for example. If you want to use AI, look at Copilot. It's part of a very large ecosystem that is private, and Microsoft spends a lot of time on that. Talk to your IT professionals about what version of Microsoft you should have to be secure. I would be hesitant as a nonprofit to start using some of the newer tools. There's always the latest and greatest, but your main vendors are all running here too, the Raiser's Edge type people, Blackbaud. They're all trying to figure out how to get heightened donor engagement and how to manage constituents better using AI tools. So work with your established, well-known vendors for right now is my best advice to protect the data.
Candice Meth:Perfect. And a couple of times we've used the term LLM, which is large language model. As we're thinking about that particular type of technology, is there a difference between the free and the paid LLMs in terms of integrity and in terms of security?
Rahul Mahna:I was just working with a nonprofit that had standardized on the Google stack, and through my reading, and again, everyone can interpret it themselves, the free version in Google clearly stated that they are allowed to use the data that you put in their LLM. The paid version, which they call, I believe, Google Workspace, and again, everyone can interpret this differently and I'm not saying I know everything in this regard, but from what I read, the Workspace version said that they would keep your data private. So that was one example we were talking through together with the nonprofit: what is the better route to go, and what is the data you're going to use? Again, what's the strategy? If you're just uploading some banal content, maybe you don't care. If it's sensitive content, maybe we should think about that a little bit more. And so that would be my thought.
Candice Meth:Excellent. Thanks so much. And so we've got the results of poll four here, and it's very interesting to see where people are in their journey toward AI-driven automation. So we're going to spend the last part of our time today talking about AI policy. We've been asked numerous times if we have a sample policy that we can share with our nonprofit clients, and the problem with that is that the sector itself is so diverse that there really is no one-size-fits-all. So what we thought we would do is provide an example of some things to consider, essentially a sample policy, and talk about how this might inform your organization's discussion as you work to design your own.
And so the leader of our education group recently gave a presentation on AI for individuals employed at colleges and universities. The next two slides are just a snippet of that presentation, which was outstanding. I wanted to share this example to really hone in on the fact that you have to craft the policy specifically for your organization. The opportunities and the risks with respect to AI for an educational institution are different from those of, let's say, a social services organization. So it's really important to think through what's applicable for your organization, what type of AI usage is acceptable, what would present a risk, and then of course adopt a policy that responds to all of that. Rahul, anything to add here?
Rahul Mahna:I think this is a great example of a vertical specifically education, and we have some wonderful partners in the group, as you said, that focus in on this. And I think the next slide is really interesting that we'll share to talk a little bit more about this.
Candice Meth:Perfect. Yeah, so just a couple of examples here. Rahul, I'm sorry, I didn't mean to cut you off there.
Rahul Mahna:Not at all. Not at all. Please go ahead if you'd like.
Candice Meth:Yeah, so here we're looking at different things that might be applicable for a college or university. Leveraging AI for scouting, for example, or using predictive analytics in the athletic departments; instituting policies with respect to course materials and grading; making sure that the papers being submitted are authentic; using it for grading and research; or perhaps using it, as Rahul touched on earlier, for the scoring of applicants to make sure that you're getting a really robust class of students who represent the very best in the country and in the world. So there are a lot of different ways that AI presents opportunities, but you also have to think through what some of the risks could be and then work to craft a policy that speaks to that.
Rahul Mahna:Yeah, I agree, Candice. Specifically in education, I have a lot of clients and also friends who are in the education market, whether they're professors or in a different role, and I feel like this market is really trying to evolve right now and figure out how to handle AI and technology. I think there's some good to it, but I also think there's some bad to it, and some shadow use. I was watching TV during the football games over the last week or so, and I saw a commercial for an AI tool that basically said: students, you don't have to read. Just use our AI tool to summarize everything and you're more efficient during your day. And I don't know if it's necessarily a good thing that students stop reading and just read the summaries, like the yellow CliffsNotes books, if you remember those, that summarize an entire book for you.
I mean, that's essentially what they're saying with AI: just read the CliffsNotes and don't read the book. I think professors are challenged with that right now. And of course, using AI to draft your responses to your professor has been going on for a few years now. There are tools out there where professors are uploading student documents and having another AI tool validate whether it was AI that wrote this or the student. So you're seeing the classic situation of good cops and bad cops, and it's happening a bit here as well. I do see a lot of our teachers using creative AI tools, though. One teacher I know is using a tool called Kahoot, which makes really creative experiences for the students, whether it's polls or surveys; they're putting their content up in a much more creative way.
They're using AI to generate a lot more of what I would call differentiated instruction for the classroom. The rote way of learning was typically an assembly line: we're going to teach you A, then B, then C, and by the time you're done, D, but it never acknowledged E, F, and G. With differentiated instruction using AI, they're able to adjust the classroom's learning to say, hey, we did A and B, these students are too advanced, let's go straight to F and G now. And so I think there's a really good way to move our world in that direction. There's a wonderful K-through-8 school, I think it was on 60 Minutes if anyone saw it, that is using AI to teach students for two hours a day to really accelerate their learning. It's pure differentiated instruction, where the AI tool adjusts every math question, for example, to learn where the student's competency is and keep pushing them higher.
So if they're in second grade, why can't they do fifth-grade math? The tool pushes them and adjusts to their skill and their needs. Those are wonderful examples of how AI tools can really help move people's education up to meet their interests and their skill levels. And so I think there's more here than just automation and frameworks; I think there's some real positive learning that can come out of this. In healthcare, the Google folks at DeepMind are really focused on using AI to help society. They just did a whole protein analysis on certain DNA strands, and they're using their AI tools to figure out better medicines based on the proteins in our human bodies. They're very focused on these types of things, using tools that can work 24 hours a day to help society in different areas of healthcare. So I think we're going to see a lot of evolution happen in the next few years for human development as well, not just automation and organizational improvement. I went on a little bit of a tangent there, but with the few minutes we have left, that's what excites me most about AI today as a technologist: really helping others with these tools.
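The adaptive loop described here, where the tool raises or lowers difficulty based on each answer, can be sketched in a few lines. This is a hypothetical illustration of the idea only, not the algorithm any particular product uses, and the thresholds are made up for the example.

```python
def adjust(level, streak, correct, up_after=3, lo=1, hi=12):
    """Return (new_level, new_streak) after one answered question:
    promote after `up_after` correct answers in a row, demote on a miss."""
    if correct:
        streak += 1
        if streak >= up_after:
            return min(level + 1, hi), 0   # mastered this level, move up
        return level, streak
    return max(level - 1, lo), 0           # struggling, step back down

# A second grader who keeps answering correctly climbs past grade level
level, streak = 2, 0
for correct in [True] * 6:
    level, streak = adjust(level, streak, correct)
print(level)  # prints: 4
```

The point of the sketch is the feedback loop: each answer updates the student's level, so strong students are never capped at their grade and struggling students are never left behind.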
Candice Meth:Fantastic. So we took the time to include some resources, and we encourage you to click on these and take a look at the content that helped us put together today's program. While you're perusing that, we're going to move to some of the questions that have been submitted. So Rahul, if AI is told to wipe any history of conversing with you, are those interactions completely erased? For example, if we were to look at Copilot?
Rahul Mahna:Yeah, these are really good questions. I don't know if I have exact answers, but what I would share is: they claim yes. However, in this world I'm often asked, if we have data in our systems, let's just say your files, not even AI, and we delete a file, is the file deleted? And I always say, well, there are backups, aren't there? People have very methodical backup processes, whether it's daily backups, monthly backups, yearly backups. So could there be backups of some of this information you're putting into an LLM and thinking is erased? I don't have a good answer yet. This is moving so quickly, but I think it's possible.
Candice Meth:Great. And we got a great question: for organizations that don't have a chief technology officer, what's the most practical way to establish ownership so that cybersecurity consistently stays on the agenda of leadership? That's a fantastic question, and I'm going to put my auditor hat on for a second. As part of an audit, the auditors will do a risk assessment, and under our standards we're required to communicate to you what we have identified as the top risks. But what I always like to caution our boards about is that we're looking at the risks that would lead to a material misstatement of the financial statements. Our identification of significant risks is never meant to replace the conversation about critical risks that takes place at the board level. And so I encourage all of our clients to make it a standing agenda item to do an annual risk assessment.
Now, there are some lawyers who will tell you not to put that in writing, but whether it's captured in the minutes or not, I think it's a really important exercise. In fact, under some state laws, the audit committee is charged with doing an annual risk assessment. As part of that assessment, you'll identify the organization's critical risks. Certainly reputational risk will be right up there, and part and parcel of that is cybersecurity risk, because obviously a breach is what might harm the reputation of the nonprofit. So doing your annual risk assessment and putting cybersecurity among those top identified risks will help leadership stay extremely focused on it on an annual basis. With that, we are at time. I do want to say that I know we got some additional questions; I promise we will follow up with you offline, so your questions will get answered. And with that, I'm going to turn it over to Bella. Thank you so much.
Transcribed by Rev.com AI