While these aren’t the only challenges, here are five areas of concern the technology industry is currently facing. Steps are being taken, but are they enough?

Data usage: According to the UN, 128 of 194 countries have enacted some form of data protection and privacy legislation.1 Even more regulation and increased enforcement are being considered.2 This attention stems from multiple industry problems, including abuse of consumer data and massive data breaches. Until clear and universal standards emerge, the industry continues to work toward addressing this dilemma. Some companies are making data privacy a core tenet and competitive differentiator, as Apple did with its recent app tracking transparency feature.3 We’re also seeing greater market demand, evident in the significant growth of the privacy tech industry.4 Will companies simply do the minimum required to comply with data-related regulations, or will they go above and beyond to collect, use, and protect data in a more equitable way for everyone?

Environmental sustainability: There’s a push for technology companies to go beyond what’s required by law on environmental sustainability. Critics challenge the industry over its energy use, inefficient supply chains, manufacturing waste, and water use in semiconductor fabrication. The good news is that technology companies have the market power to create significant change. Tech companies are some of the largest buyers of renewable energy in the world and are working to run their massive data centers on that energy.5 Some focus on zero-waste initiatives, improving recycling and promoting circular-economy principles; Cisco’s Takeback and Reuse program and Microsoft’s 2030 zero waste goal are examples.6 Others work toward net-zero carbon through The Climate Pledge, spearheaded by Amazon, or individual efforts, such as Apple’s pledge to become carbon-neutral across its businesses by 2030.7

Trustworthy AI: The rapid deployment of AI into societal decision-making—from health care recommendations to hiring decisions and autonomous driving—has catalyzed an ongoing ethics conversation. It’s increasingly important that AI-powered systems operate under principles that benefit society and avoid issues with bias, fairness, transparency, and explainability. To address these issues, tech industry players have established advisory panels, published guiding principles, and sponsored academic programs.8 We’ve also seen action beyond statements of principle. Some larger tech players decided in 2020 to stop providing AI-powered facial recognition systems to police departments until clear guidelines or legislation are in place.9 This is a solid foundation to build on, but faith in the industry is low.10 As a consequence, we see a growing potential for government action and regulation, such as the EU’s proposed Artificial Intelligence Act and recent statements from the Federal Trade Commission in the United States.11

Threats to truth: People and groups are using disinformation, misinformation, deepfakes, and weaponized data to attack, manipulate, and influence for personal gain, or simply to sow chaos. To help address this intractable issue, technology companies have asked governments to pass regulations clearly outlining responsibilities and standards.12 They’re also cooperating more with law enforcement and intelligence agencies, publishing public reports of their findings, and increasing overall vigilance and action.13 In addition, many companies have signed up for the EU’s voluntary Code of Practice on Disinformation, which is currently being strengthened.14 Is this all happening fast and comprehensively enough, and with enough forethought?

Physical and mental health: The technology industry can affect physical and mental well-being not only through customers’ use and overuse of its products and services, but also through its direct involvement in health care, which the pandemic has accelerated.15 We’re still working to better understand the impacts of technology on health, and much research and debate are ongoing.16 Although measuring these impacts is difficult and complex, the technology industry has shown it can improve health-related areas: wearables and telehealth expand access to providers; sensors, devices, and apps support chronic disease monitoring; and advanced analytics and AI improve diagnoses.

Addressing these dilemmas is critically important, but what concerns technology industry leaders the most at the moment? In a Deloitte survey of technology industry professionals, the vast majority found all the dilemmas critical, but data privacy was seen as the most critical (figure 1).17 This focus could reflect the current regulatory landscape: the issue is more immediate for leaders and can affect their day-to-day operations. The other dilemmas may be seen as affecting their organizations further in the future, or are simply more nebulous.

Data can be used to drive decisions and make an impact at scale. Yet, this powerful resource comes with challenges. How can organizations ethically collect, store, and use data? What rights must be upheld? The field of data ethics explores these questions and offers five guiding principles for business professionals who handle data.

What Is Data Ethics?

Data ethics encompasses the moral obligations of gathering, protecting, and using personally identifiable information and how it affects individuals.

“Data ethics asks, ‘Is this the right thing to do?’ and ‘Can we do better?’” Harvard Professor Dustin Tingley explains in the Harvard Online course Data Science Principles.

Data ethics are of the utmost concern to analysts, data scientists, and information technology professionals. Anyone who handles data, however, must be well-versed in its basic principles.

For instance, your company may collect and store data about customers’ journeys from the first time they submit their email address on your website to the fifth time they purchase your product. If you’re a digital marketer, you likely interact with this data daily.

While you may not be the person responsible for implementing tracking code, managing a database, or writing and training a machine-learning algorithm, understanding data ethics can allow you to catch any instances of unethical data collection, storage, or use. By doing so, you can protect your customers' safety and save your organization from legal issues.

Here are five principles of data ethics to apply at your organization.

5 Principles of Data Ethics for Business Professionals

1. Ownership

The first principle of data ethics is that an individual has ownership over their personal information. Just as it’s considered stealing to take an item that doesn’t belong to you, it’s unlawful and unethical to collect someone’s personal data without their consent.

Some common ways you can obtain consent are through signed written agreements, digital privacy policies that ask users to agree to a company’s terms and conditions, and pop-ups with checkboxes that permit websites to track users’ online behavior with cookies. Never assume a customer is OK with you collecting their data; always ask for permission to avoid ethical and legal dilemmas.
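
As a minimal sketch of what consent-gated data collection might look like in code, the snippet below refuses to store tracking data unless an explicit opt-in is on file. All names here (ConsentRecord, store_event, the purpose labels) are hypothetical illustrations, not a specific library’s API:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    user_id: str
    purpose: str          # e.g., "analytics", "marketing"
    granted_at: datetime  # when the user agreed

# Hypothetical store of consents gathered from signed agreements,
# privacy-policy acceptances, or cookie-banner checkboxes.
consents: dict[tuple[str, str], ConsentRecord] = {}

def record_consent(user_id: str, purpose: str) -> None:
    """Store a user's explicit opt-in for one specific purpose."""
    consents[(user_id, purpose)] = ConsentRecord(
        user_id, purpose, datetime.now(timezone.utc)
    )

def store_event(user_id: str, purpose: str, event: dict) -> None:
    """Persist tracking data only if consent exists; never assume it."""
    if (user_id, purpose) not in consents:
        raise PermissionError(f"No {purpose} consent on file for {user_id}")
    # ... write the event to the database here ...
```

Making the check a hard failure, rather than a warning, encodes the "always ask for permission" rule directly into the pipeline.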

2. Transparency

In addition to owning their personal information, data subjects have a right to know how you plan to collect, store, and use it. When gathering data, exercise transparency.

For instance, imagine your company has decided to implement an algorithm to personalize the website experience based on individuals’ buying habits and site behavior. You should write a policy explaining that cookies are used to track users’ behavior and that the data collected will be stored in a secure database and used to train an algorithm that provides a personalized website experience. It’s a user’s right to have access to this information so they can decide whether to accept your site’s cookies or decline them.

Withholding or lying about your company’s methods or intentions is deception; it’s both unlawful and unfair to your data subjects.

3. Privacy

Another ethical responsibility that comes with handling data is ensuring data subjects’ privacy. Even if a customer gives your company consent to collect, store, and analyze their personally identifiable information (PII), that doesn’t mean they want it publicly available.

PII is any information linked to an individual’s identity. Some examples of PII include:

  • Full name
  • Birthdate
  • Street address
  • Phone number
  • Social Security number
  • Credit card information
  • Bank account number
  • Passport number

To protect individuals’ privacy, ensure you’re storing data in a secure database so it doesn’t end up in the wrong hands. Data security methods that help protect privacy include dual-authentication password protection and file encryption.
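
For instance, file and record encryption might look like the following sketch, which assumes the widely used Python cryptography package; your stack may use a different tool, and the sample record is invented:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Generate the key once and keep it in a secrets manager,
# never stored alongside the encrypted data.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a record containing PII before writing it to shared storage.
record = b"Ada Lovelace,555-0100,123 Main St"
token = fernet.encrypt(record)

# Only services holding the key can recover the plaintext.
assert fernet.decrypt(token) == record
```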

Even professionals who regularly handle and analyze sensitive data can make mistakes. One way to prevent slip-ups is by de-identifying a dataset. A dataset is de-identified when all pieces of PII are removed, leaving only anonymous data. This enables analysts to find relationships between variables of interest without attaching specific data points to individual identities.
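
A simple version of de-identification, sketched here with pandas (the column names and values are hypothetical):

```python
import pandas as pd

df = pd.DataFrame({
    "full_name": ["Ada Lovelace", "Alan Turing"],
    "phone_number": ["555-0100", "555-0101"],
    "age": [36, 41],
    "purchases": [12, 7],
})

PII_COLUMNS = ["full_name", "phone_number"]

# Drop direct identifiers, leaving only the variables of interest.
deidentified = df.drop(columns=PII_COLUMNS)
print(deidentified)  # analysts can study age vs. purchases anonymously
```

Note that dropping direct identifiers is only a first step: combinations of the remaining fields (such as birthdate plus street address) can sometimes re-identify individuals, so review what’s left in the dataset as well.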

Related: Data Privacy: 4 Things Every Business Professional Should Know

4. Intention

When discussing any branch of ethics, intentions matter. Before collecting data, ask yourself why you need it, what you’ll gain from it, and what changes you’ll be able to make after analysis. If your intention is to hurt others, profit from your subjects’ weaknesses, or any other malicious goal, it’s not ethical to collect their data.

When your intentions are good—for instance, collecting data to gain an understanding of women’s healthcare experiences so you can create an app to address a pressing need—you should still assess your intention behind the collection of each piece of data.

Are there certain data points that don’t apply to the problem at hand? For instance, is it necessary to ask if the participants struggle with their mental health? This data could be sensitive, so collecting it when it’s unnecessary isn’t ethical. Strive to collect the minimum viable amount of data, so you’re taking as little as possible from your subjects while making a difference.
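
One way to operationalize this "minimum viable data" idea is an allow-list: only fields with a documented justification are kept, and everything else is discarded at intake. A minimal sketch, with hypothetical field names:

```python
# Fields with a documented purpose for this study; everything else is rejected.
APPROVED_FIELDS = {"age_range", "care_experience_rating", "app_feature_requests"}

def minimize(submission: dict) -> dict:
    """Keep only pre-approved fields from a survey submission."""
    dropped = set(submission) - APPROVED_FIELDS
    if dropped:
        print(f"Discarding unjustified fields: {sorted(dropped)}")
    return {k: v for k, v in submission.items() if k in APPROVED_FIELDS}

raw = {
    "age_range": "25-34",
    "care_experience_rating": 4,
    "mental_health_history": "...",  # sensitive and not needed: discarded
}
print(minimize(raw))
```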

Related: 5 Applications of Data Analytics in Health Care

5. Outcomes

Even when intentions are good, the outcome of data analysis can cause inadvertent harm to individuals or groups of people. This is called disparate impact, which is unlawful under the Civil Rights Act.

In Data Science Principles, Harvard Professor Latanya Sweeney provides an example of disparate impact. When Sweeney searched for her name online, an advertisement came up that read, “Latanya Sweeney, Arrested?” She had not been arrested, so this was strange.

“What names, if you search them, come up with arrest ads?” Sweeney asks in the course. “What I found was that if your name was given more often to a Black baby than to a white baby, your name was 80 percent more likely to get an ad saying you had been arrested.”

It’s not clear from this example whether the disparate impact was intentional or a result of unintentional bias in an algorithm. Either way, it has the potential to do real damage that disproportionately impacts a specific group of people.

Unfortunately, you can’t know for certain the impact your data analysis will have until it’s complete. But by considering possible outcomes beforehand, you can catch potential occurrences of disparate impact before they cause harm.
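
One common heuristic for spotting disparate impact, borrowed from US employment guidelines rather than from the course itself, is the "four-fifths rule": if one group’s selection rate falls below 80 percent of the highest group’s rate, treat it as a red flag. A sketch with invented numbers:

```python
def adverse_impact_ratios(selection_rates: dict[str, float]) -> dict[str, float]:
    """Compare each group's selection rate to the most-selected group's."""
    top = max(selection_rates.values())
    return {group: rate / top for group, rate in selection_rates.items()}

# Hypothetical outcomes of a model that screens applicants.
rates = {"group_a": 0.60, "group_b": 0.42}
for group, ratio in adverse_impact_ratios(rates).items():
    flag = "OK" if ratio >= 0.8 else "potential disparate impact"
    print(f"{group}: ratio={ratio:.2f} ({flag})")
```

Here group_b’s ratio is 0.70, below the 0.8 threshold, so the analysis warrants a closer look before its results are acted on.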

Ethical Use of Algorithms

If your role includes writing, training, or handling machine-learning algorithms, consider how they could potentially violate any of the five key data ethics principles.

Because algorithms are written by humans, bias may be intentionally or unintentionally present. Biased algorithms can cause serious harm to people. In Data Science Principles, Sweeney outlines the following ways bias can creep into your algorithms:

  • Training: Because machine-learning algorithms learn based on the data they’re trained with, an unrepresentative dataset can cause your algorithm to favor some outcomes over others (see the sketch after this list).
  • Code: Although any bias present in your algorithm is hopefully unintentional, don’t rule out the possibility that it was written specifically to produce biased results.
  • Feedback: Algorithms also learn from users’ feedback. As such, they can be influenced by biased feedback. For instance, a job search platform may use an algorithm to recommend roles to candidates. If hiring managers consistently select white male candidates for specific roles, the algorithm will learn and adjust and only provide job listings to white male candidates in the future. The algorithm learns that when it provides the listing to people with certain attributes, it’s “correct” more often, which leads to an increase in that behavior.
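
As a rough sketch of the training-data check mentioned above, you might compare each group’s share of the training set against its share of the population the algorithm will affect. The numbers and tolerance below are invented for illustration:

```python
# Share of each group in the training data vs. the affected population.
training = {"group_a": 0.85, "group_b": 0.15}
population = {"group_a": 0.60, "group_b": 0.40}

TOLERANCE = 0.10  # allowable absolute gap before flagging

for group in population:
    gap = training.get(group, 0.0) - population[group]
    if abs(gap) > TOLERANCE:
        print(f"{group} is {'over' if gap > 0 else 'under'}-represented "
              f"by {abs(gap):.0%} in the training data")
```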

“No algorithm or team is perfect, but it’s important to strive for the best,” Tingley says in Data Science Principles. “Using human evaluators at every step of the data science process, making sure training data is truly representative of the populations who will be affected by the algorithm, and engaging stakeholders and other data scientists with diverse backgrounds can help make better algorithms for a brighter future.”

Using Data for Good

While the ethical use of data is an everyday effort, knowing that your data subjects’ safety and rights are intact is worth the work. When handled ethically, data can enable you to make decisions and drive meaningful change at your organization and in the world.

Are you interested in furthering your data literacy? Download our Beginner’s Guide to Data & Analytics to learn how you can leverage the power of data for professional and organizational success.