
Cutting-Edge Insights | Exploring Data Privacy's Evolving Landscape

Published
Jun 28, 2023

As our organizations' digital footprints and the volume of data we process grow, the importance of a strong data privacy program has increased. During this webinar you will learn how to stay up to date on the latest trends in data privacy and the impact of emerging technologies like AI.


Transcript

Paul Douglas:

All right, thank you everyone, and welcome to our webinar today. We're excited to have you all with us to discuss the evolving data privacy landscape. This has certainly been a hot topic for a few years now with all the regulatory changes, but also, with the growing digital footprint we have at our organizations, it's becoming more and more difficult to really design and operationalize the data privacy programs that we want to have. And that's really where we come in today. We're hoping to have some great conversation around just that. Our firm has really been in the trenches in recent years helping design and operationalize those privacy programs, and we're hoping to share some of those lessons learned with you all today. To be a part of the conversation we have Molly Grant, who is a privacy specialist here at EisnerAmper, and I'm excited to also have Ray Soriano, who is a leader within our tech risk practice at EA Digital.

So between the three of us, we hope to share a lot of great insights around what some of these data privacy best practices look like and what some of these evolving trends are. Something else I wanted to share for the audience, because I know we have a very broad group of individuals on today's webinar who may also be viewing this in the future: we work with a lot of different groups when it comes to data privacy. So we believe this is a great spot for all three lines of defense, or at least that's how I view it, the traditional three lines.

So this has been a great spot for internal auditors to get involved, really drive that trusted partner role, and weigh in on how we're managing and mitigating those privacy risks. We often contract with compliance officers and directly with chief privacy officers; we've spent a lot of time working with those folks. And we also spend a lot of time with IT. With that growing digital footprint, it's requiring more and more configuration on the IT side of things to really help automate a lot of these processes that we'll be talking about and also operationalize those monitoring capabilities. So that's a little intro for us here today, and I will now go over the agenda.

All right, so we'll start by discussing some of those trends in data privacy. I'll pass it over then to Molly, who will walk us through some of the regulatory updates. We also have some updates on artificial intelligence and what privacy considerations we should be thinking about for these new and emerging technologies that we're using in our organizations. We'll go through some of the top privacy risks that we're seeing today with the clients we're working with, what we're hearing across industries, and what some of those common challenges are. Then we'll bring it all home by talking about best practices and how we can ultimately mitigate data breaches.

All right, so privacy trends. What we're seeing right now is, yes, there's absolutely an increased complexity in regulations. A little bit about my background: I first jumped into data privacy with the HIPAA privacy rule, and I've spent a lot of time within the healthcare sector. At that point in time, when HHS was envisioning what that regulation needed to look like, privacy was very focused on paper, that paper-based protected health information. The privacy laws and regulations that you see coming out now go so much further beyond what some of those original US-based privacy laws were trying to accomplish. And therein lies the complexity.

If you're an organization that's faced with complying with a variety of laws that all have different standards, different requirements based on the data set that is in scope, what do you do? Do you create that one central policy? Do you have multiple policies based on the regulatory requirement or based on the type of data? That's a tough, and important, question you want to answer early on. We see some folks taking what we call a high watermark approach, where they're taking all the various privacy regulations they have to comply with, designing their policies to meet the most stringent requirement, and pushing that out everywhere. In some cases, that works well. In some cases it may not be practical.
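To make the high watermark idea concrete, here is a minimal sketch that collapses several regulations' requirements into one most-stringent policy. The law names and requirement values are made up for illustration; this is not legal guidance.

```python
# Hypothetical per-law requirements: shorter response windows are stricter,
# and boolean obligations apply if any law imposes them.
REQUIREMENTS = {
    "Law A (illustrative)": {"dsar_response_days": 45, "honor_opt_out_of_sale": True},
    "Law B (illustrative)": {"dsar_response_days": 30, "honor_opt_out_of_sale": False},
    "Law C (illustrative)": {"dsar_response_days": 60, "honor_opt_out_of_sale": True},
}

def high_watermark(requirements: dict) -> dict:
    """Collapse per-law requirements into a single most-stringent policy."""
    policy: dict = {}
    for law, controls in requirements.items():
        for control, value in controls.items():
            if isinstance(value, bool):
                # An obligation applies if any applicable law imposes it.
                policy[control] = policy.get(control, False) or value
            else:
                # Numeric deadlines: the shortest window is the strictest.
                policy[control] = min(policy.get(control, value), value)
    return policy

print(high_watermark(REQUIREMENTS))
# {'dsar_response_days': 30, 'honor_opt_out_of_sale': True}
```

The trade-off Paul mentions shows up here: the merged policy may be stricter than any single law requires, which is defensible but not always practical for every data set.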

So this is certainly something that you want to think about when you're designing that program: how centralized it's going to be, and the ability to push out these policies across your entire organization. Next, there's this stronger focus on privacy by design. I don't know if this is a concept that's familiar, but it's a term that you're going to see more and more going forward. It's really baking in these privacy considerations as part of your software development processes, as part of your procurement processes, how you're building out your IT environment, how you're structuring your relationships with third parties, and any type of data sharing that you have. It's baking in and designing those privacy principles at the start.

And from a regulatory perspective, that's going to be really important. Despite our best efforts, things go wrong, and to really put ourselves in that best defensible posture, we want to be thinking about privacy by design throughout everything we're doing. Then, the great debate. I think a lot of us are talking about the generative AI that's now becoming available to us and that we're all starting to become consumers and users of. What does that mean from a privacy perspective? Ray and Molly will have a lot of nice thoughts on how AI is impacting us, not just from a usability perspective, but also from a privacy perspective.

And the last trend here to mention is around data breaches. We've been very focused historically on security incidents, building out our security operations centers and our incident detection and incident response capabilities, because we all know that security incidents are a matter of when, not if. So we know security incidents are happening often; whether or not they flip and turn into a breach is where some of these newer privacy considerations come in. You could have a security incident at your organization that never becomes a breach. Implementing some of these privacy practices, things like data minimization and really controlling where that sensitive information lives, or at least those sensitive identifiers such as a Social Security number, could ultimately reduce the impact of, or prevent, the breach whenever you do have a live security incident.
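As an illustration of that data minimization point, here is a hypothetical sketch of tokenizing Social Security numbers before a record is stored, so a later incident on that data store exposes no raw identifiers. The `toy_vault` function is a stand-in for a real tokenization or vaulting service.

```python
import re

# Matches SSN-shaped strings like 123-45-6789.
SSN_PATTERN = re.compile(r"\b(\d{3})-(\d{2})-(\d{4})\b")

def minimize_record(record: dict, vault_store) -> dict:
    """Replace raw SSNs in string fields with opaque tokens before storage."""
    cleaned = {}
    for field, value in record.items():
        if isinstance(value, str):
            # The token, not the raw SSN, is what lands in this data store.
            value = SSN_PATTERN.sub(lambda m: vault_store(m.group(0)), value)
        cleaned[field] = value
    return cleaned

# Toy in-memory "vault" standing in for a secure tokenization service.
tokens: dict = {}
def toy_vault(ssn: str) -> str:
    tokens.setdefault(ssn, f"tok_{len(tokens):06d}")
    return tokens[ssn]

print(minimize_record({"note": "Patient SSN 123-45-6789 on file."}, toy_vault))
# {'note': 'Patient SSN tok_000000 on file.'}
```

If this store is later compromised, the incident may never rise to a reportable breach, because the sensitive identifier was never there.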

So we're seeing a lot more of a proactive approach there, where your privacy officers, your compliance officers, and whoever's tasked with operationalizing your data privacy program need to have a strong relationship with the security folks, making sure that everything's tight to really limit the impact of those incidents and keep the breach impact as low as we can.

So I think with that, we will transition to our next polling question.

Astrid Garcia:

Polling Question #2.

Paul Douglas:

Yeah, so I was going to say, this is one of those where you may be inclined to say true. It's like, yeah, of course I understand, but this has really become a more complex question. If you're a higher ed institution, for example, you've historically had to deal with FERPA from a privacy perspective, but there are other data sets you have across your campus that may be subject to PCI requirements for cardholder data, GLBA for the financial information that you have from a student financial aid perspective, and international laws such as GDPR can come into play. I think one of the more complex areas here could be these new emerging state laws we have in the US. Understanding the applicability of these new state laws can be challenging. It can be tricky based on how the legislation has written out the applicability standards or criteria. So if your answer's false, you're not alone, I guess is what I'm saying in short.

Astrid Garcia:

Thank you. I will now be closing the polling question. Please make sure you've submitted your answer. Back to you guys.

Paul Douglas:

Okay. All right. So we're there with you. Oftentimes when we're brought in, before we start designing that great privacy program and then later operationalizing it, we have to start with understanding the scope and the applicability. So for the 38% who are in the false category, we're with you. We understand, it's more complicated than ever to understand the various requirements that you have to comply with. And for the 61% who answered true, that's awesome. You're in the operational stage. All right, Molly, I will pass it over to you now to talk a little bit more about the regulatory updates.

Molly Grant:

Thank you, Paul. So to discuss privacy's evolving landscape, we wanted to identify certain changes that we are seeing within the United States. Some of the states are starting to trend towards comprehensive privacy laws and away from the historical HIPAA, FERPA, and GLBA approach, which was very data-set specific. These state laws are trending more towards identifying basic individual rights for people and consumers of organizations. They're identifying key criteria that should be a minimum for organizations to consider when they're collecting data. As of right now, we currently have 11 states with comprehensive privacy laws; Texas Governor Abbott just signed Texas's law, so now we do have the 11 states. One of the issues that we run into with some of these state privacy laws, something that Paul touched on, is that clients of ours are having difficulty with what's referred to as a patchwork: basically trying to identify the similarities among these state law requirements, and identifying what differences some of them introduce as well. It's super complex and kind of difficult to manage in some cases.

What we like to recommend to most of our clients is finding those similarities first. And like Paul mentioned, a high watermark or conservative approach is most often going to put an organization in a defensible position, because they've identified the basic criteria that all of the laws speak to, such as allowing consumers to access their data when requested, allowing consumers to request deletion of their data, or letting them opt out of the sale of their data for certain targeted advertising activities. So like we said, identifying where those laws overlap is super important, and making sure that you can call out the differences is additionally important as well. Something some of these state privacy laws are identifying is automated decision-making requirements: provisions allowing individuals to opt out of any processes that do not directly involve human oversight. An example of this would be when an applicant applies to a business and there are certain criteria that might automatically filter out certain candidates. Virginia specifically has a clause within their state privacy law providing that an individual has the right to request not to be part of that processing.
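As a sketch of what honoring such an opt-out might look like in code, here applicants who exercised the right are routed to human review instead of the automated filter. The `score_application` helper and its threshold are hypothetical placeholders.

```python
def screen_applicant(application: dict, opted_out: bool) -> str:
    """Route an application to automated screening or human review."""
    if opted_out:
        # Respect the opt-out: no fully automated filtering of this person.
        return "human_review"
    score = score_application(application)  # hypothetical scoring logic
    return "advance" if score >= 0.7 else "human_review"

def score_application(application: dict) -> float:
    return 0.5  # placeholder scorer for the sketch

print(screen_applicant({"name": "A. Applicant"}, opted_out=True))   # human_review
print(screen_applicant({"name": "B. Applicant"}, opted_out=False))  # human_review
```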

So that's a very high-level overview of some of the regulatory updates that the US state laws are requiring. As for when the state laws go into effect: California and Virginia are already in effect, Colorado and Connecticut go into effect as of July 1st, so just in a couple of days, Utah by the end of the year, and the rest of these states within the next couple of years. I want to transition into some of the new requirements that these state laws are introducing to organizations. Some of the complexities come around where individual rights need to be respected and responded to, but also some of the internal processes that organizations need to start considering when they are trying to be compliant with some of these privacy laws. PIAs were originally introduced under the GDPR out of Europe. A PIA is a privacy impact assessment: basically an internal process that organizations conduct any time personal information is being processed or collected.

They want to make sure that a PIA is conducted so that the scope is defined on what that information is going to be used for, among other details. So these state requirements are following the GDPR guidance in making sure that organizations are using these PIAs to address the processing of personal information, but also to identify potential risks when processing that information. In order to prepare for a PIA, there are several criteria an organization can work towards to make the process a little bit easier once they have identified that they have triggered the need for a PIA. Again, different states have different stipulations and requirements, so knowing which state would require a PIA, and what type of processing activity would trigger a PIA to be completed, is super important. The first thing an organization would need to do to prepare for completion of a PIA is to identify whether it's required at all.

If personal information is not being processed, a PIA is most likely not needed. That's called a threshold analysis: identifying, okay, is this process triggering a PIA or not? Is it processing personal information of our consumers? Additionally, something that would assist an organization in preparing for a PIA is maintaining records of processing activities. This is another requirement under the GDPR law that applies to any company that is processing European data, but it is something that states are trending towards as well: just making sure that you are keeping track of the intention of why you are processing certain information. An example of this could be: I am processing an employee's personal sensitive information, but it is for the processing activity of submitting payroll. So that's just one example. If you as an organization are maintaining these RPAs, or records of processing activities, it's going to make completing a PIA assessment that much easier.
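A minimal sketch of that threshold analysis follows. The trigger list here is illustrative only; real thresholds vary by state and by regulation.

```python
# Illustrative triggers that commonly warrant a PIA; real lists are law-specific.
TRIGGERS = {
    "processes_personal_information",   # baseline: no PI, likely no PIA
    "sells_or_shares_for_advertising",
    "profiles_for_automated_decisions",
    "processes_sensitive_data",
}

def pia_required(activity_flags: set) -> bool:
    """Return True if a processing activity likely triggers a PIA."""
    if "processes_personal_information" not in activity_flags:
        return False  # no personal information processed, most likely no PIA
    # Any additional risk trigger on top of PI processing warrants a PIA.
    return bool(activity_flags & (TRIGGERS - {"processes_personal_information"}))

print(pia_required({"processes_personal_information",
                    "profiles_for_automated_decisions"}))  # True
print(pia_required({"internal_telemetry_only"}))           # False
```

In practice the same flags can come straight out of a maintained record of processing activities, which is what makes the RPA such useful preparation.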

So now I'm going to pass it off to Ray and he'll be able to go over some of the details on how you would actually work through completing a PIA internally.

Ray Soriano:

Thank you, Molly. So in doing a privacy impact assessment, or PIA, this was actually introduced as the data protection impact assessment from a GDPR perspective. Article 35 actually outlines when there needs to be one: when processing is likely to result in a high risk to the rights and freedoms of natural persons. So in that PIA assessment, as it's outlined here, you're trying to understand a little bit about the scope and the purpose of the assessment. GDPR actually outlined a few criteria as part of its articles to help with determining the purpose for performing the PIA. Some of those criteria included things like profiling, or decisions that potentially could lead to legal effects; systematic monitoring, if that's part of the processing; and large-scale processing of large volumes of data related to consumers or other individuals. Merging and combining of different data elements and sources could also be considered part of the scope and why there would need to be a PIA.

And so there are other criteria as well that are outlined in GDPR, but essentially, defining the scope and the purpose of the assessment up front is one of the primary steps necessary with the PIA. Also know, as Molly had mentioned, that there are very specific state privacy requirements, and there could be regulatory requirements from an industry perspective, HIPAA being a notable one, so put that in perspective as part of the overall assessment. The next phase is then getting into a data flow analysis: really understanding and appreciating how information is being stored and managed, how it's being processed or transferred in flight, how it's being used and shared, and then also how it's being removed or disposed of. All of those data flows are part of the overall analysis that needs to take place.

Typically, this could be done in a variety of formats. It could be done in logical diagrams and views of how the data life cycle is performed; it could also be commentary and narratives. But certainly there are descriptors that can provide an understanding of how the data flows. Then, as part of a third stage or step of the impact analysis, you would need to understand the needs and purpose of those data elements: how the data is being processed, and particularly if it's consumer oriented, whether it's used for marketing purposes or customer service purposes, really outlining the needs and the drivers for that.

There are also risk criteria associated with the data, whether that's high, medium, low, or other factors that are customized for the impact assessment; those need to be outlined. Then, how the data is actually collected, purposed, and shared with the various parties really needs to come into play to understand the overall impact. And finally, there are the detailed plans on how you mitigate the risks identified from the analysis: basically understanding what safeguards are not being applied and need to be addressed.
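One possible way to capture the stages Ray outlines (scope and purpose, data flows, risk rating, mitigation plan) is as a single structured record. The field names below are illustrative, not a prescribed template.

```python
from dataclasses import dataclass, field

@dataclass
class PrivacyImpactAssessment:
    activity: str                  # what processing is being assessed
    purpose: str                   # why the data is processed (the driver)
    data_elements: list            # e.g., ["name", "email", "ssn"]
    data_flows: list               # storage, transfer, sharing, disposal
    risk_level: str                # "high" | "medium" | "low" (or custom)
    mitigations: list = field(default_factory=list)  # planned safeguards

pia = PrivacyImpactAssessment(
    activity="Marketing email campaign",
    purpose="Customer outreach",
    data_elements=["name", "email"],
    data_flows=["stored in CRM", "shared with email vendor",
                "purged after 12 months"],
    risk_level="medium",
    mitigations=["vendor contract with SCCs", "suppression list for opt-outs"],
)
print(pia.risk_level)  # medium
```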

So I'm also going to move into a little bit of the regulatory updates from a global perspective, now that we understand a general process for a PIA. As we mentioned, GDPR was really the stimulus for the emphasis on doing a PIA. One of the notable issues related to GDPR is ensuring that there is proper handling and transfer of personal information passed between the EU and the US. In this particular instance, there was a notable event that occurred with Meta, who essentially didn't properly handle EU personal data as far as the transfer from Europe to US servers. One of the data protection agencies, out of Ireland, identified the issue and fined Meta 1.2 billion euros, or about 1.3 billion US dollars, for violating their data transfer rules specifically.

So the governing rules here were the standard contractual clauses that are part of the EU privacy framework; those really came into play. And from a US perspective, there is definitely a new framework that is taking over and overshadowing the prior Privacy Shield framework that was in place. One of the challenges with the prior framework was that it didn't really foster the transatlantic data flow or address the concerns that the Court of Justice of the EU had identified with data transfers between EU exporters and US importers.

Finally, there is the novel and new EU Artificial Intelligence Act, which was recently passed by the European Parliament. This is probably going to bring a much more focused view on the different types of technology out there related to the handling and use of information in artificial intelligence systems. This is the first AI-specific privacy requirement out there, the world's first comprehensive AI law, if you will, and it is essentially focused on ensuring that AI systems handle EU data in a safe, transparent, traceable, and non-discriminatory way. It also has provisions focused on vendor-neutral, technology-agnostic definitions, to provide uniformity in how AI systems that handle personal data are governed. So with that, I'm going to move to the next polling question.

Astrid Garcia:

Polling question #3.

Paul Douglas:

And while folks are responding, I'll add a little bit more to what Ray was discussing around Meta and this cross-border transfer complexity. A lot of organizations are doing the exact same thing that Meta was doing. There have been a lot of changes over the years in terms of what the legal basis is, or what a compliant way is, to have a cross-border transfer between the EU and the US, and it keeps changing. At one point in time, the Privacy Shield framework was a way to do that, and it was ultimately determined to be non-compliant. Standard contractual clauses became the next path, but there have been various cases since then that made that non-compliant as well. So certainly the regulators are starting with some of the larger folks initially, but I think it's important to note that a lot of organizations are doing the exact same thing and trying to figure out a compliant way to do cross-border sharing between the EU and the US. We'll see how this continues to evolve and where the guidance eventually lands.

Astrid Garcia:

Thank you, Paul. I will now be closing the polling question. Please make sure you've submitted your answer. Back to you guys.

Ray Soriano:

Awesome. Yeah, so it's great to see that many organizations are looking at artificial intelligence. I know recently there has been a plethora of interest in artificial intelligence in a variety of forms, often from a consumer or customer relationship management perspective. Essentially, chatbots and generative AI have come into the mix and are becoming more prevalent. A lot of organizations are seeking and utilizing that type of capability in some of the interactions they have with consumers, so it's remarkable to see it starting to come up. While the poll here indicates that many aren't presently utilizing artificial intelligence, I would also challenge that there are some capabilities people aren't aware are using artificial intelligence. A lot of chatbots, and the research information that's provided for companies to give insights about policies or general information about the organization, sometimes have artificial intelligence baked into them.

So while we see here in the poll half say yes and half say no, I think it's a growing area of focus. With that, I'm going to give a little bit of a broad definition of artificial intelligence and provide some perspective on why AI is actually becoming more of a focus topic for privacy considerations. In its simplest form, artificial intelligence is a field which combines computer science and robust information or data sets to provide the ability to do certain things such as problem solving, research, training, learning, and other various types of outcomes for an organization or an entity. Machine learning is part of AI and is usually mentioned in conjunction with it; this is essentially the use of algorithms to provide those human-like capabilities or characteristics of learning and improving based on experience and the results obtained from the artificial intelligence.

We talked a little bit about the generative AI models that are out there. ChatGPT is a common or well-known one, but there are other forms of generative AI out there that have the ability to generate large volumes of information, unstructured data, if you will, whether images or other content. That can then be built on and used to create training capabilities, curriculum, or even things like art forms. DALL-E is another generative AI type solution that's out there. It's very popular because of its multimodal capabilities, pulling media and concepts of text and audio into the mix as well.

And then the final point I'll bring up about artificial intelligence systems: in a framework, it doesn't necessarily have to be one single solution. It could be learning that is conducted through different models, and interaction between artificial intelligence and non-artificial-intelligence technologies. So inputs that feed into artificial intelligence, sources of truth, and other data sources could all be part of what's considered an artificial intelligence solution or system. And I'm going to pass it back over to Molly to highlight some of the data privacy concerns.

Molly Grant:

Thank you, Ray. So I like to consider myself a glass-half-full type of person. As a privacy professional, a lot of our clients are reaching out to see what benefits they could realize with some of these artificial intelligence technologies; they're really interested in the capabilities it could provide to their organizations. So I do want to speak to some of the ways that AI can be used to automate some of the processes involved with privacy compliance in general. Advanced data discovery is something that many organizations have to address with some of the access requests that different privacy regulations require, when an individual or a consumer, or now even an employee, reaches out to obtain certain pieces of information. Organizations need to be able to quickly go through large unstructured data sets to identify certain data fields, and this is an area where automated and artificial intelligence technologies can be implemented.

Additionally, you could automate privacy processes in general, and that could be as simple as the responses once someone submits what's called a data subject access request, or data subject request. Those processes and technologies can be implemented at an organization to be able to respond quickly and efficiently to individuals accessing or requesting the deletion of their data. AI can also be used to identify gaps in compliance; there are many ways that different technologies are able to map compliance and regulatory guidance against the criteria that an organization prioritizes. Additionally, there are security and threat detection elements where different machine learning technologies can really be used to protect the data organizations collect and store. So these are just some ways that artificial intelligence, when involved with privacy processes within an organization, can really support and benefit those processes.
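As a simplified illustration of the data discovery idea, here is a rule-based sketch that scans free text for likely personal identifiers, the kind of lookup a DSAR response needs. Production tools typically use ML-based entity recognition rather than these regexes.

```python
import re

# Illustrative patterns for common identifiers; real discovery tools go
# far beyond regexes (names, addresses, context-aware entity recognition).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def discover_identifiers(text: str) -> dict:
    """Return candidate personal identifiers found in unstructured text."""
    found = {}
    for kind, pattern in PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            found[kind] = matches
    return found

doc = "Contact Jane at jane.doe@example.com or 555-867-5309. SSN 123-45-6789."
print(discover_identifiers(doc))
# {'email': ['jane.doe@example.com'], 'ssn': ['123-45-6789'],
#  'phone': ['555-867-5309']}
```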

Paul Douglas:

You could also argue, Molly, that from a security perspective it would be very difficult to survive without AI when you think about just the velocity of an attack. Threat actors are using AI themselves to orchestrate attacks, so you need those detection capabilities through usage of AI. And sometimes we have AI in our organizations today without realizing it. Think about the way our teams are operating, those security incident management tools; there are AI capabilities baked into a lot of these to be able to quickly detect incidents on our network and at our organization. So AI is important.

Ray Soriano:

Yeah, AI is definitely important. We're starting to see a lot more harmonization of how organizations are using AI on the cyber front to confront and battle just those types of issues, because you're right, the adversaries are relentless. They're going to continue to utilize whatever techniques and tactics they can to try to infiltrate systems. And so as part of the defense measures, security operations centers are utilizing orchestration technology, in combination with managing and watching their endpoints and other mechanisms within their environment, to see if they can thwart the issues coming from the adversaries or attackers out there. So yes, you're seeing a lot more harmonization and understanding of how to utilize automation and artificial intelligence to help combat that.

Paul Douglas:

All right, Molly, tell us about the concerns now.

Molly Grant:

Awesome. So like I said, I am glass half full, but I'm also a realist. There are tons of ways that privacy risks can be introduced by using these artificial intelligence tools, and organizations being aware of those risks is the first step. One challenge artificial intelligence introduces is being able to say that the AI was created with privacy rights in mind. This is really where privacy by design comes into play. Paul mentioned briefly that privacy by design is basically a privacy principle where the privacy rights of individuals are baked into the process whenever a system, or the processing of personal information, is being created.

This really speaks to harmonizing different teams within an organization, specifically tech teams, security teams, and privacy individuals, as well as compliance or even legal: making sure everyone is involved in the earliest stages of these products, as opposed to being an afterthought. A privacy professional would have a difficult time trying to go back and make sure that AI that was already created is hitting all of the best practices, like data minimization and making sure that the information collected is only used for specific purposes.

Additionally, there are some potential risks with AI in general. You'll hear it referred to as the black box problem: basically, not understanding every part of why an output was generated. Any kind of issue with that is something a privacy professional, or an organization, really wants to pay attention to, because transparency is very important to respecting privacy rights. Additionally, there's proper notification. A lot of these AI models require extremely large sums of training data in order to teach themselves and provide valuable feedback and outputs. However, for a lot of the people putting their information into AI models, specifically generative AI models like ChatGPT, notification is not really thoroughly conveyed a lot of the time. So there's the possibility that without proper training, or without proper disclosure or notice, a lot of individuals could be feeding proprietary information into some of these AI models.

It could be sensitive information, or it could be organizational information that's being fed into them. Additionally, there are risks to the information that you are feeding into an AI model. With generative AI, if you are inputting sensitive, personal, or otherwise proprietary information into an AI model, there's a chance that the model is reading that information, collecting it, understanding it, and there's then potential for improper disclosure of that information as a response or an output in some cases. So if you are looking into these kinds of tools, these are the questions organizations should be asking themselves: where does the responsibility lie if we are using AI? Who's making sure that privacy rights are top of mind here?
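One pattern organizations use against this risk is a pre-submission guardrail that screens prompts for likely sensitive content before they reach an external generative AI service. The sketch below is illustrative only; real data-loss-prevention tooling goes well beyond a couple of regexes.

```python
import re

# Illustrative blocklist: SSN-shaped strings and documents labeled confidential.
BLOCKLIST = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # SSN-shaped strings
    re.compile(r"(?i)\bconfidential\b"),     # labeled documents
]

def safe_to_send(prompt: str) -> bool:
    """Return True only if the prompt trips none of the sensitive patterns."""
    return not any(pattern.search(prompt) for pattern in BLOCKLIST)

prompt = "Summarize this CONFIDENTIAL memo about employee 123-45-6789"
if not safe_to_send(prompt):
    print("Blocked: prompt appears to contain sensitive information")
```

A filter like this pairs naturally with the policy and training measures discussed below: the policy tells people what not to paste in, and the guardrail catches what slips through.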

Paul Douglas:

Yeah, I'll give an example on the data bias side. We've seen a number of organizations start to use AI as part of their screening process for applicants. So when you're looking at the recruiting efforts you have, and using AI to do that initial applicant review, depending on what the AI is recommending and how heavily you're leveraging it, I could certainly see some future litigation in that area: how do we know that the way that AI algorithm was configured was compliant with equal opportunity and other labor law considerations? So again, if you're an organization that's dealing with thousands of applicants, this could be a helpful way to efficiently work through those. It could also be a way to introduce some risk. So just something to be mindful of.

Molly Grant:

Definitely. And I saw an interesting question come through, so I wanted to take a moment to answer that. Someone asked: how are organizations addressing user accountability when bots or AI tools produce undesirable results? I'm interested in your thoughts on that, Paul, but I know that a lot of our organizations, regarding user accountability, are putting forth documentation and internal policies on what is acceptable if you are using something like generative AI and what information may be shared.

Paul Douglas:

Yeah, I mean, I think you certainly have to have a strong policy around it, along with some type of monitoring capabilities, because it's like anything else: sometimes things go wrong despite our best efforts, and what we did on the front end to help mitigate that risk will perhaps go a long way later. I guess it also depends on what the undesirable result is we're talking about. The question looks like it came in from a healthcare provider; say you have a decision support AI right now. What if that undesirable outcome was a bad treatment plan or a bad clinical decision that was made within your hospital? Now we're looking at CMS quality issues. So I think it depends on what that undesirable outcome is, but certainly the front-end work and the testing, like Molly was saying, making sure that you're testing these technologies, helps manage the risk to the best of your ability.

Molly Grant:

All right. So now I want to touch on some of the top privacy risks that we are seeing in 2023. A lot of these are related to privacy risk criteria that are coming up in regulation. One specific item that I want to call out: some organizations might be familiar with cookies. Cookie compliance on your website is basically about the user data you collect when people visit your site. Replacing cookies now is something referred to as tracking pixels. A tracking pixel is slightly different from a cookie, which just collects user data within the web browser. A tracking pixel is a little bit more interesting, and introduces a little bit more risk, because it can follow you from device to device; it is not restricted to the browser that you're on. So I'm going to advance to some of the risks that we are seeing with some of these tracking technologies, where organizations are collecting data and experiencing enforcement around respecting the privacy rights of individuals.

Certain hospitals or medical institutions are facing class action lawsuits for using the Meta Pixel, a tracking tool that was collecting user data when people were attempting to schedule a doctor's appointment, or even just visiting the medical site, and passing that information on to Facebook, or Meta. So that is something organizations should be aware of. With something like that, you would have to decide if it is truly a HIPAA violation, depending on the information that was shared. It also gets down to: is this technically a sale of data, or is this just sharing data? That's something organizations should really pay attention to.

Another enforcement action regarding tracking information, or just improper handling of individuals' information, involved the cosmetics company Sephora. They were recently fined under the California privacy law for selling information such as location data, or the items that you had added to your cart, to targeted advertisers without properly disclosing that. Additionally, something I wanted to briefly touch on is the Global Privacy Control. That's basically a web browser setting that you can select right now to say, I'm not interested in any collection of my data for advertising purposes.
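For reference, the Global Privacy Control signal reaches a website server-side as the `Sec-GPC: 1` request header (and as `navigator.globalPrivacyControl` in browser JavaScript). Here is a minimal sketch of honoring it; `opt_out_of_sale` is a hypothetical hook standing in for whatever a real consent-management system does.

```python
def honors_gpc(headers: dict) -> bool:
    """Return True if the request carries a Global Privacy Control signal."""
    return headers.get("Sec-GPC", "").strip() == "1"

def opt_out_of_sale(user_id: str) -> None:
    # Hypothetical hook into a consent-management system.
    print(f"User {user_id}: opted out of sale/sharing for targeted advertising")

def handle_request(headers: dict, user_id: str) -> None:
    """Treat a GPC signal as an opt-out of sale/sharing, as California
    enforcement expects, before any ad-tech pixels or trackers fire."""
    if honors_gpc(headers):
        opt_out_of_sale(user_id)

handle_request({"Sec-GPC": "1"}, user_id="u-42")
```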

So those were just some examples of why organizations should pay attention to how they're tracking individuals.

Ray Soriano:

So Molly, I think you've touched upon many challenges that organizations face, just based on those examples and throughout the conversation that we've been having today. Certainly there are some common themes we're seeing related to privacy concerns. Some of that is notably just overall awareness: what is acceptable when providing information, and knowing the appropriate safeguards and protection measures associated with the handling of sensitive private information. The other thing that we brought up is the compliance requirements and measures that need to be factored in. We talked about GDPR, we talked about state law, and how there's this overwhelming complexity of all these different types of privacy laws coming into play. So, not to jump around on this slide, but those are key elements that have already been highlighted here.

And then other things that are inherently part of the challenge of providing appropriate privacy safeguards tie in with the level of security we're utilizing. I see that there have been some chat questions related to the level of encryption: providing an appropriate level of basic cyber security hygiene in the environment, managing weak passwords, and trying to minimize unfettered access, maybe even access controls. But encryption seems to resonate. As far as what secure forms of encryption are: for example, SFTP is not by itself a complete solution. There have to be stronger encryption algorithms utilized along with it; AES-256 is notably something that could be used as part of that.
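As a concrete example of the stronger algorithms Ray mentions, here is a minimal AES-256 sketch in GCM mode using the third-party Python `cryptography` package (`pip install cryptography`). A real deployment would add key management (KMS/HSM), rotation, and secure key storage; this only shows the primitive.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# A 256-bit (32-byte) key gives AES-256; GCM provides authenticated encryption.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

# The nonce must be unique per message under a given key; never reuse it.
nonce = os.urandom(12)

ciphertext = aesgcm.encrypt(nonce, b"SSN 123-45-6789", None)  # no associated data
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
print(plaintext)  # b'SSN 123-45-6789'
```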

The other element here that is a challenge companies have to deal with is the overall governance of how you handle and manage data: how you classify that data, what monitoring mechanisms are in place to routinely oversee this information, and also monitoring it against all the privacy laws that are out there, because they're evolving as well. A lot of these things are common across many organizations, not limited to one particular segment or another.

And then the final thing I would note here, again, sorry to jump around, is third-party risks. The handling of information isn't necessarily all within the organization; there are trusted third parties that are providers and custodians handling the data itself.

Paul Douglas:

All right. Now we're going to talk privacy best practices in building a program, and then we'll transition to our last polling question. I have very big feelings about this. Those who have worked with me know I have very big feelings about this one. I've been in this privacy space around 15 years now, and today I truly do feel that the underpinning of all of this is data governance. When we're talking about how we not only implement some of these new processes for certain areas, but scale them across all departments and all data sets, I think the answer, and it's not an easy-button answer, but I do believe it's the answer, is data governance. I think data governance is such an important part of scaling your data privacy program across your organization and across all data sets.

There are some principles that exist within data governance that are very tried and true within the security world. If you're a security framework nerd like I am, I'm just a framework nerd in general, you will notice all these frameworks start with taking an inventory of your assets. You can only protect what you know about; you can only monitor what you're managing. That inventory is so key. That's why I love the RPA. For those of you who have been hanging on since the beginning of this webinar, Molly was talking about a record of processing activities, and I love it. I think the RPA is the privacy equivalent of an IT asset inventory: you're taking inventory of all the ways by which you're processing data. That is very, very key for then applying the appropriate data privacy practices. I would equate this to PCI. Most of us who have to comply with the PCI standards would never be able to achieve that level of control across our entire network.

Typically we segment that PCI portion of our network away, to then be able to apply the level of stringency that's needed to process that cardholder data. The same thing can be said about some of these other data sets and some of these other laws that are so stringent: you may not be able to apply them to every single data set, but how do you ensure that you're applying the right requirements, the right processes, the right controls, to the right data? Data governance. And I think the RPA, the record of processing activities, is very key. For the framework nerds on the webinar right now, I really do believe that it is the equivalent of building out your IT asset inventory, and it's so foundational to making sure that you're doing the right things in the right places.
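A sketch of what one RPA entry might look like when treated as an inventory record, the way Paul describes. The fields loosely follow GDPR Article 30, and the names and values are illustrative.

```python
from dataclasses import dataclass

@dataclass
class ProcessingActivity:
    name: str               # e.g., "Payroll processing"
    purpose: str            # why the data is processed
    data_categories: list   # e.g., ["name", "SSN", "bank account"]
    data_subjects: list     # e.g., ["employees"]
    recipients: list        # internal teams, vendors, regulators
    retention: str          # how long, and the disposal trigger
    safeguards: list        # encryption, access controls, etc.
    applicable_laws: list   # which regulations attach to this data set

ropa = [
    ProcessingActivity(
        name="Payroll processing",
        purpose="Pay employees and remit taxes",
        data_categories=["name", "SSN", "bank account"],
        data_subjects=["employees"],
        recipients=["payroll vendor", "tax authorities"],
        retention="7 years after employment ends",
        safeguards=["AES-256 at rest", "role-based access"],
        applicable_laws=["state payroll laws"],
    ),
]

# With the inventory in place, scoping questions become queries,
# e.g., "which activities touch SSNs?"
print([a.name for a in ropa if "SSN" in a.data_categories])
```

That queryability is the point of the inventory analogy: the right controls can be targeted at exactly the activities that carry the sensitive data.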

So when you go forth and start building this privacy program, a lot of things will look familiar, but it's about applying the privacy lens, the privacy view of things. I liked what Ray was saying about encryption. Encryption, I think, is a great marriage between security and privacy. It's one of the oldest and most important security safeguards, and it's also your safe harbor from a privacy perspective if implemented properly. I mentioned PCI: if you're using point-to-point encryption technologies, that's a security feature giving you relief from a PCI perspective. If you're using encryption on other data sets, those privacy laws will recognize that as well.

So just that trend of finding the overlap between security and privacy in your organization, and coordinating those efforts centrally or orchestrating them together, is very, very key. When you set forth to build this program, there's a lot to consider, but there are also a lot of existing processes you already have; it's just applying that privacy view to them and having the data governance underneath to scale it across your organization. So it's not an easy button, but I think it's a button you can press. And with that we'll go to our last polling question.

Astrid Garcia:

Polling question #4.

Paul Douglas:

You have to love these subjective questions. There's no right or wrong answer, and I think it also changes over time a little bit. So it'll be interesting to see what the folks online today have to say.

Astrid Garcia:

I will now be closing the polling question. Please make sure you've submitted your answer.

Paul Douglas:

Yeah, and that top challenge, I think we would all agree, is what we've been saying from a security perspective all along: it's securing us humans.

Ray Soriano:

The human factor is always the issue, really, when it comes to this. It's the biggest challenge: user adoption, if you will, and understanding.

Paul Douglas:

Yeah, I would challenge folks, though, not to overlook the vendor risk. Vendors are certainly a very common area for privacy violations and concerns, because you may be implementing the best practices on your side, but how do you know they're being good custodians of that data? So, we're reaching the end of our presentation here. This is what I would like to say in closing, and certainly, for our group of speakers here, weigh in: privacy has become a really complex matter that requires a variety of resources and good collaboration across your organization. It's no longer a matter that's buried in compliance. It's no longer an issue that your privacy officer is handling in isolation. It's a topic that requires collaboration across the organization, making sure that you have the right stakeholders involved and really finding the right partners, whether it be external consultants, law firms, or others, to help shepherd this process and put you in the best position possible.

Ray Soriano:

Yeah, I totally agree, Paul. A multidisciplinary approach is best when it comes to looking at this, because like you said, it's not limited to just compliance. You do need to look at other stakeholders that have involvement with the storing, handling, and use of data.

Paul Douglas:

All right, excellent. Well, I think that brings us to the end here. We are at the end of our presentation and the end of our time. Astrid, we'll pass it back to you to close us out.

Transcribed by Rev.com

 
