Facebook to pay record US$5bn data privacy penalty
Facebook has agreed to pay a record-breaking US$5 billion ($7.17 billion) penalty to settle charges that the company deceived users over the ability to control the privacy of their personal information.
The settlement with the US Department of Justice and the Federal Trade Commission will be the largest fine ever imposed on any company for violating consumers’ privacy, and 20 times larger than the largest data security penalty previously imposed worldwide.
Under the settlement, Facebook has also committed to implementing a range of reforms to its operational and management structure.
These include establishing an independent privacy committee within Facebook’s board of directors, appointing dedicated compliance officers for the company’s privacy program and submitting quarterly certifications that the company is in compliance with the privacy requirements of the agreement.
Facebook must also conduct a privacy review of every new or modified product, service or practice for Facebook, WhatsApp and Instagram before it is implemented.
Following a joint FTC and Department of Justice investigation into Facebook, the regulators have alleged that Facebook repeatedly used deceptive disclosures and settings to undermine users’ privacy preferences.
These tactics allowed the company to share users’ personal information with third-party apps that were downloaded by the user’s Facebook friends, as was the case in last year’s high-profile Cambridge Analytica data harvesting scandal.
This put Facebook in violation of a 2012 FTC order prohibiting the company from making misrepresentations about the privacy or security of consumers’ personal information, and the extent to which it shares personal information, such as names and dates of birth, with third parties.
For example, despite introducing multiple services to ostensibly give users control over their private data to comply with the order, Facebook allegedly failed to disclose to users that even under the most restrictive privacy settings Facebook was entitled to share user information with apps of a user’s Facebook friends unless they specifically opted out.
Despite publicly announcing that it would stop allowing third-party developers to collect data about the friends of app users, Facebook, the FTC has alleged, waited until at least June 2018 — after the Cambridge Analytica scandal broke — to end the practice.
Facebook has also been accused of improperly policing app developers on its platform by failing to adequately screen developers or their apps before granting them access to vast amounts of data, misrepresenting users’ ability to control the use of face recognition technology with their accounts, and failing to disclose that it was collecting users’ phone numbers for advertising purposes as well as to enable two-factor authentication (2FA).
Under the order, Facebook will be required to exercise greater oversight over third-party apps, refrain from using phone numbers obtained for 2FA for advertising, encrypt user passwords and regularly scan to detect any passwords stored in plaintext on its servers, and implement other related reforms.