
Wednesday, April 8, 2020

Using Artificial Intelligence and Algorithms

 


 

By: Andrew Smith, Director, FTC Bureau of Consumer Protection

April 8, 2020

Republication

 

AI, Algorithms, Consumer, Privacy, Data, Bogus “Likes”, Chatbot, Facial Recognition, Voice-cloning Technologies, Title VII, Disparate Impact

 

How companies can manage the consumer protection risks of AI and algorithms.

 

 

 

BE TRANSPARENT.

 

Don’t deceive consumers about how you use automated tools. Oftentimes, AI operates in the background, somewhat removed from the consumer experience. But, when using AI tools to interact with customers (think chatbots), be careful not to mislead consumers about the nature of the interaction. The Ashley Madison complaint alleged that the adultery-oriented dating website deceived consumers by using fake “engager profiles” of attractive mates to induce potential customers to sign up for the dating service. And the Devumi complaint alleged that the company sold fake followers, phony subscribers, and bogus “likes” to companies and individuals that wanted to boost their social media presence. The upshot? If a company’s use of doppelgängers – whether a fake dating profile, phony follower, deepfakes, or an AI chatbot – misleads consumers, that company could face an FTC enforcement action.

 

Be transparent when collecting sensitive data. The bigger the data set, the better the algorithm, and the better the product for consumers, end of story…right? Not so fast. Be careful about how you get that data set. Secretly collecting audio or visual data – or any sensitive data – to feed an algorithm could also give rise to an FTC action. Just last year, the FTC alleged that Facebook misled consumers when it told them they could opt in to facial recognition – even though the setting was on by default. As the Facebook case shows, how you get the data may matter a great deal.

 

If you make automated decisions based on information from a third-party vendor, you may be required to provide the consumer with an “adverse action” notice. Under the FCRA, a vendor that assembles consumer information to automate decision-making about eligibility for credit, employment, insurance, housing, or similar benefits and transactions, may be a “consumer reporting agency.” That triggers duties for you, as the user of that information. Specifically, you must provide consumers with certain notices under the FCRA. Say you purchase a report or score from a background check company that uses AI tools to generate a score predicting whether a consumer will be a good tenant. The AI model uses a broad range of inputs about consumers, including public record information, criminal records, credit history, and maybe even data about social media usage, shopping history, or publicly-available photos and videos. If you use the report or score as a basis to deny someone an apartment, or charge them higher rent, you must provide that consumer with an adverse action notice. The adverse action notice tells the consumer about their right to see the information reported about them and to correct inaccurate information.

 

EXPLAIN YOUR DECISION TO THE CONSUMER.

 

If you deny consumers something of value based on algorithmic decision-making, explain why. Some might say that it’s too difficult to explain the multitude of factors that might affect algorithmic decision-making. But, in the credit-granting world, companies are required to disclose to the consumer the principal reasons why they were denied credit, and it’s not good enough simply to say “your score was too low” or “you don’t meet our criteria.” You need to be specific (e.g., “you’ve been delinquent on your credit obligations” or “you have an insufficient number of credit references”). This means that you must know what data is used in your model and how that data is used to arrive at a decision. And you must be able to explain that to the consumer. If you are using AI to make decisions about consumers in any context, consider how you would explain your decision to your customer if asked.

 

If you use algorithms to assign risk scores to consumers, also disclose the key factors that affected the score, rank ordered for importance. Similar to other algorithmic decision-making, scores are based on myriad factors, some of which may be difficult to explain to consumers. For example, if a credit score is used to deny someone credit, or offer them less favorable terms, the law requires that consumers be given notice, a description of the score (its source, the range of scores under that credit model), and at least four key factors that adversely affected the credit score, listed in the order of their importance based on their effect on the credit score.
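
To make the idea of rank-ordered key factors concrete, here is a minimal Python sketch. It assumes a simple linear scoring model; the weights, baseline values, and reason-code wording below are hypothetical illustrations, not the factors or language any actual scoring model uses.

  # Minimal sketch: rank-ordering the factors that most hurt a consumer's score.
  # Assumes a simple linear scoring model; weights, baselines, and reason-code
  # text are hypothetical illustrations only.

  WEIGHTS = {              # points added per unit of each input
      "on_time_payment_rate": 300.0,
      "credit_utilization": -150.0,
      "years_of_history": 8.0,
      "recent_inquiries": -12.0,
  }

  BASELINE = {             # values for a typical approved applicant
      "on_time_payment_rate": 0.98,
      "credit_utilization": 0.30,
      "years_of_history": 10.0,
      "recent_inquiries": 1.0,
  }

  REASON_CODES = {
      "on_time_payment_rate": "Delinquency on past or present credit obligations",
      "credit_utilization": "Proportion of balances to credit limits is too high",
      "years_of_history": "Length of credit history is insufficient",
      "recent_inquiries": "Too many recent inquiries for credit",
  }


  def adverse_action_reasons(applicant: dict, top_n: int = 4) -> list[str]:
      """Return up to top_n reason codes, ordered by how much each factor
      pulled this applicant's score below the baseline applicant's score."""
      impacts = {
          feature: WEIGHTS[feature] * (applicant[feature] - BASELINE[feature])
          for feature in WEIGHTS
      }
      # Keep only factors that lowered the score, most damaging first.
      adverse = sorted(
          (f for f, delta in impacts.items() if delta < 0),
          key=lambda f: impacts[f],
      )
      return [REASON_CODES[f] for f in adverse[:top_n]]


  if __name__ == "__main__":
      applicant = {
          "on_time_payment_rate": 0.80,
          "credit_utilization": 0.85,
          "years_of_history": 2.0,
          "recent_inquiries": 6.0,
      }
      for reason in adverse_action_reasons(applicant):
          print(reason)
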

 

If you might change the terms of a deal based on automated tools, make sure to tell consumers. More than a decade ago, the FTC alleged that subprime credit marketer CompuCredit violated the FTC Act by deceptively failing to disclose that it used a behavioral scoring model to reduce consumers’ credit limits. For example, if cardholders used their credit cards for cash advances or to make payments at certain venues, such as bars, nightclubs, and massage parlors, they might have their credit limit reduced. The company never told consumers that these purchases could reduce their credit limit – neither when consumers signed up nor when the company lowered the limit. That decade-old matter is just as important today. If you’re going to use an algorithm to change the terms of the deal, tell consumers.

 

ENSURE THAT YOUR DECISIONS ARE FAIR.

 

Don’t discriminate based on protected classes. Cavalier use of AI could result in discrimination against a protected class. A number of federal equal opportunity laws, such as ECOA and Title VII of the Civil Rights Act of 1964, may be relevant to such conduct. The FTC enforces ECOA, which prohibits credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance. If, for example, a company made credit decisions based on consumers’ Zip Codes, resulting in a “disparate impact” on particular ethnic groups, the FTC could challenge that practice under ECOA. You can save yourself a lot of problems by rigorously testing your algorithm, both before you use it and periodically afterwards, to make sure it doesn’t create a disparate impact on a protected class.
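
One common way to run such a test is to compare outcome rates across groups. The sketch below applies the "four-fifths" rule of thumb to approval decisions; the group labels, sample data, and 0.8 threshold are illustrative assumptions, and passing this check alone does not establish compliance with ECOA or Title VII.

  # Minimal sketch of a disparate-impact check: compare each group's approval
  # rate to the most-favored group's and flag ratios below a chosen threshold.
  # Group labels, data, and the 0.8 threshold are illustrative assumptions.

  from collections import defaultdict


  def approval_rates(decisions):
      """decisions: iterable of (group_label, approved_bool) pairs."""
      totals, approved = defaultdict(int), defaultdict(int)
      for group, ok in decisions:
          totals[group] += 1
          approved[group] += int(ok)
      return {g: approved[g] / totals[g] for g in totals}


  def adverse_impact_ratios(decisions, threshold=0.8):
      rates = approval_rates(decisions)
      best = max(rates.values())
      return {
          group: {"approval_rate": rate,
                  "ratio_to_best": rate / best,
                  "flag": rate / best < threshold}
          for group, rate in rates.items()
      }


  if __name__ == "__main__":
      sample = ([("A", True)] * 80 + [("A", False)] * 20
                + [("B", True)] * 55 + [("B", False)] * 45)
      for group, stats in adverse_impact_ratios(sample).items():
          print(group, stats)
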

 

Give consumers access and an opportunity to correct information used to make decisions about them. The FCRA regulates data used to make decisions about consumers – such as whether they get a job, get credit, get insurance, or can rent an apartment. Under the FCRA, consumers are entitled to obtain the information on file about them and dispute that information if they believe it to be inaccurate. Moreover, adverse action notices are required to be given to consumers when that information is used to make a decision adverse to the consumer’s interests. That notice must include the source of the information that was used to make the decision and must notify consumers of their access and dispute rights. If you are using data obtained from others – or even obtained directly from the consumer – to make important decisions about the consumer, you should consider providing a copy of that information to the consumer and allowing the consumer to dispute the accuracy of that information.

 

ENSURE THAT YOUR DATA AND MODELS ARE ROBUST AND EMPIRICALLY SOUND.

 

If you provide data about consumers to others to make decisions about consumer access to credit, employment, insurance, housing, government benefits, check-cashing or similar transactions, you may be a consumer reporting agency that must comply with the FCRA, including ensuring that the data is accurate and up to date. You may be thinking: We do AI, not consumer reports, so the FCRA doesn’t apply to us. Well, think again. If you compile and sell consumer information that is used or expected to be used for credit, employment, insurance, housing, or other similar decisions about consumers’ eligibility for certain benefits and transactions, you may indeed be subject to the FCRA. What does that mean? Among other things, you have an obligation to implement reasonable procedures to ensure maximum possible accuracy of consumer reports and provide consumers with access to their own information, along with the ability to correct any errors. RealPage, Inc., a company that deployed software tools to match housing applicants to criminal records in real time or near real time, learned this the hard way. The company ended up paying a $3 million penalty for violating the FCRA by failing to take reasonable steps to ensure the accuracy of the information it provided to landlords and property managers.

 

If you provide data about your customers to others for use in automated decision-making, you may have obligations to ensure that the data is accurate, even if you are not a consumer reporting agency. Companies that provide data about their customers to consumer reporting agencies are referred to as “furnishers” under the FCRA. They may not furnish data that they have reasonable cause to believe may not be accurate. In addition, they must have in place written policies and procedures to ensure that the data they furnish is accurate and has integrity. Furnishers also must investigate disputes from consumers, as well as disputes received from the consumer reporting agency. These requirements are important to ensure that the information used in AI models is as accurate and up to date as it can possibly be. And the FTC has brought actions, and obtained substantial fines, against companies that furnished information to consumer reporting agencies but failed to maintain the required written policies and procedures to ensure that the information they reported was accurate.

 

Make sure that your AI models are validated and revalidated to ensure that they work as intended, and do not illegally discriminate. Again, more lessons from the world of consumer lending, where credit-grantors have been using data and algorithms for decades to automate the credit underwriting process. The lending laws encourage the use of AI tools that are “empirically derived, demonstrably and statistically sound.” This means, among other things, that they are based on data derived from an empirical comparison of sample groups, or the population of creditworthy and noncreditworthy applicants who applied for credit within a reasonable preceding period of time; that they are developed and validated using accepted statistical principles and methodology; and that they are periodically revalidated by the use of appropriate statistical principles and methodology, and adjusted as necessary to maintain predictive ability.
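
As a rough illustration of periodic revalidation, the sketch below recomputes a model’s discriminatory power (AUC) on recently observed outcomes and flags the model for review if performance has drifted from the level measured at development time. The thresholds and sample data are assumptions chosen for illustration, not regulatory standards.

  # Minimal sketch of periodic revalidation: measure current predictive ability
  # and flag the model if it has dropped materially below the development-time
  # benchmark. Thresholds and sample data are illustrative assumptions.


  def auc(scores, outcomes):
      """Probability that a randomly chosen 'good' outcome outranks a randomly
      chosen 'bad' one (ties count half). outcomes: 1 = good, 0 = bad."""
      pos = [s for s, y in zip(scores, outcomes) if y == 1]
      neg = [s for s, y in zip(scores, outcomes) if y == 0]
      wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
      return wins / (len(pos) * len(neg))


  def needs_revalidation(dev_auc, recent_scores, recent_outcomes, max_drop=0.05):
      """Return current AUC and whether it has fallen more than max_drop."""
      current = auc(recent_scores, recent_outcomes)
      return current, (dev_auc - current) > max_drop


  if __name__ == "__main__":
      dev_auc = 0.78                       # measured when the model was built
      scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
      outcomes = [1, 1, 0, 1, 0, 1, 0, 0]  # outcomes observed after the fact
      current, flag = needs_revalidation(dev_auc, scores, outcomes)
      print(f"current AUC={current:.2f}, revalidate={flag}")
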

 

HOLD YOURSELF ACCOUNTABLE FOR COMPLIANCE, ETHICS, FAIRNESS, AND NONDISCRIMINATION.

 

Ask questions before you use the algorithm. Going back to the 2016 Big Data report, the Commission warned companies that big data analytics could result in bias or other harm to consumers. To avoid that outcome, any operator of an algorithm should ask four key questions (a rough check for the first question is sketched after the list):

  • How representative is your data set?
  • Does your data model account for biases?
  • How accurate are your predictions based on big data?
  • Does your reliance on big data raise ethical or fairness concerns?
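
As a rough illustration of the first question, the sketch below compares the make-up of a training sample with the population the model will actually score. The group labels, population shares, and tolerance are hypothetical values chosen for illustration.

  # Minimal sketch: flag groups whose share of the training sample differs
  # materially from their share of the target population. Labels, shares, and
  # the tolerance are illustrative assumptions.

  from collections import Counter


  def representation_gaps(sample_groups, population_shares, tolerance=0.05):
      """Return groups whose sample share differs from their population share
      by more than `tolerance` (expressed as a proportion)."""
      counts = Counter(sample_groups)
      total = sum(counts.values())
      gaps = {}
      for group, expected in population_shares.items():
          observed = counts.get(group, 0) / total
          if abs(observed - expected) > tolerance:
              gaps[group] = {"sample_share": observed, "population_share": expected}
      return gaps


  if __name__ == "__main__":
      training_sample = ["urban"] * 900 + ["rural"] * 100
      population = {"urban": 0.70, "rural": 0.30}
      print(representation_gaps(training_sample, population))
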

 

Protect your algorithm from unauthorized use. If you’re in the business of developing AI to sell to other businesses, think about how these tools could be abused and whether access controls and other technologies can prevent the abuse. For instance, just last month, the FTC hosted a workshop on voice-cloning technologies. Thanks to machine learning, these technologies enable companies to use a five-second clip of a person’s actual voice to generate a realistic audio of the voice saying anything. This technology promises to help people who have lost the ability to speak, among other things, but could be easily abused if it falls into the hands of people engaged in imposter schemes. One company that is introducing this cloning technology is vetting users and running the technology on its own servers so that it can stop any abuse that it learns about.

 

Consider your accountability mechanism. Consider how you hold yourself accountable, and whether it would make sense to use independent standards or independent expertise to step back and take stock of your AI. Take, for example, the healthcare algorithm that ended up discriminating against Black patients: well-intentioned employees were trying to use the algorithm to target medical interventions to the sickest patients, and it was outside, objective observers who independently tested the algorithm who discovered the problem. Such outside tools and services are increasingly available as AI is used more frequently, and companies may want to consider using them.

 

Monday, February 4, 2013

Apple v. Super. Ct.



Internet: online privacy in California: finally, the California Online Privacy Protection Act of 2003 (COPPA) shows that the Legislature knows how to make clear that it is regulating online privacy and that it does so by carefully balancing concerns unique to online commerce.  COPPA provides that “an operator of a commercial Web site or online service that collects personally identifiable information through the Internet about individual consumers residing in California who use or visit its commercial Web site or online service shall conspicuously post its privacy policy on its Web site . . . .”  (Bus. & Prof. Code, § 22575, subd. (a).)  The privacy policy must: “(1) Identify the categories of personally identifiable information that the operator collects through the Web site or online service about individual consumers who use or visit its commercial Web site or online service and the categories of third-party persons or entities with whom the operator may share that personally identifiable information. (2) If the operator maintains a process for an individual consumer who uses or visits its commercial Web site or online service to review and request changes to any of his or her personally identifiable information that is collected through the Web site or online service, provide a description of that process.  (3) Describe the process by which the operator notifies consumers who use or visit its commercial Web site or online service of material changes to the operator’s privacy policy for that Web site or online service. (4) Identify its effective date.”  (Bus. & Prof. Code, § 22575, subd. (b).)
COPPA requires online retailers to “conspicuously post” their privacy policies, to disclose “the categories of personally identifiable information” they collect, and to identify “the categories of third-party persons or entities with whom they may share that personally identifiable information.”  (Bus. & Prof. Code, § 22575, subds. (a), (b).)  If a consumer is not satisfied with the policy of a particular retailer, he or she may decline to purchase a product from that retailer.  The Legislature could have reasonably believed that its disclosure regime creates significant incentives, in light of consumer preferences, for online retailers to limit their collection and use of personally identifiable information.
Federal law also provides some degree of protection against the use of personal identification information for unwanted commercial solicitation.  The Telephone Consumer Protection Act of 1991 (TCPA; Pub.L. No. 102–243 (Dec. 20, 1991) 105 Stat. 2394) was enacted “to protect the privacy interests of residential telephone subscribers by placing restrictions on unsolicited, automated telephone calls to the home and to facilitate interstate commerce by restricting certain uses of facsimile . . . machines and automatic dialers.”  (Sen.Rep. No. 102-178, 1st Sess., p. 1, reprinted in 1991 U.S. Code Cong. & Admin. News, p. 1968; see 47 U.S.C. § 227.)  “The TCPA instructs the Federal Communications Commission to issue regulations ‘concerning the need to protect residential telephone subscribers’ privacy rights to avoid receiving telephone solicitations to which they object.”  (Charvat v. NMP, LLC (6th Cir. 2011) 656 F.3d 440, quoting 47 U.S.C. § 227(c)(1).)  “In 2003, two federal agencies — the Federal Trade Commission (FTC) and the Federal Communications Commission (FCC) — promulgated rules that together created the national do-not-call registry.  [Citations.]  The national do-not-call registry is a list containing the personal telephone numbers of telephone subscribers who have voluntarily indicated that they do not wish to receive unsolicited calls from commercial telemarketers.  Commercial telemarketers are generally prohibited from calling phone numbers that have been placed on the do-not-call registry, and they must pay an annual fee to access the numbers on the registry so that they can delete those numbers from their telephone solicitation lists.”  (Mainstream Mktg. Servs. v. FTC (10th Cir. 2004) 358 F.3d 1228, 1233–1234, fns. omitted.)  Thus, federal legislation limits the commercial use of customer telephone numbers (Cal. S. Ct., S199384, 04.02.2013, Apple v. Super. Ct.).


Internet: data protection and the Internet under California law: the 2003 statute on this subject provides that the operator of a commercial website that collects personally identifiable information through the Internet about individual consumers residing in California who use or visit its commercial website must conspicuously post its privacy policy on that website. That privacy policy must: (1) identify the categories of personally identifiable information that the operator collects through its website about individual consumers, and identify the categories of third-party entities with which that information is shared; (2) state whether the operator maintains a process by which an individual consumer may review that personally identifiable information and request changes to it and, if such a process is in place, describe how it works; (3) describe the process by which the operator notifies consumers of material changes to its privacy policy; (4) identify the effective date of the policy.
Federal law also provides some degree of protection against the use of personal identification information for unwanted commercial solicitation. A 1991 statute protects against unsolicited, automated telephone calls. A “do-not-call” registry was established to protect consumers against telemarketing.