Mark Zuckerberg Apologizes For the Cambridge Analytica Scandal, Says He Isn't Opposed To Regulation (theverge.com) 42

An anonymous reader quotes a report from The Verge: Mark Zuckerberg apologized on Wednesday evening for his company's handling of the Cambridge Analytica privacy scandal. "This was a major breach of trust and I'm really sorry this happened," he said in an interview on CNN. "Our responsibility now is to make sure this doesn't happen again." The comments marked Zuckerberg's first apology since the uproar began over how Facebook allowed third-party developers to access user data. Earlier in the day, Zuckerberg wrote a Facebook post in which he said the company had made mistakes in its handling of the Cambridge Analytica data revelations. The company laid out a multipart plan designed to reduce the amount of data users share with outside developers, and said it would audit some developers who had access to large troves of data before earlier restrictions were implemented in 2014. Zuckerberg also told CNN that he is not totally opposed to regulation. "I'm not sure we shouldn't be regulated," he said. "There are things like ad transparency regulation that I would love to see."

Other highlights of Zuckerberg's interviews:
-He told multiple outlets that he would be willing to testify before Congress.
-He said the company would notify everyone whose data was improperly used.
-He told the New York Times that Facebook would double its security force this year, adding: "We'll have more than 20,000 people working on security and community operations by the end of the year, I think we have about 15,000 now."
-He told the Times that Facebook would investigate "thousands" of apps to determine whether they had abused their access to user data.

Regarding moderation, Zuckerberg told Recode: "[The] thing is like, 'Where's the line on hate speech?' I mean, who chose me to be the person that did that? I guess I have to, because of where we are now, but I'd rather not."

Ask Slashdot: Were Developments In Technology More Exciting 30 Years Ago? 180

dryriver writes: We live in a time where mainstream media, websites, blogs, social media accounts, your barely computer-literate next-door neighbor and so forth frequently rave about the "innovation" that is happening everywhere. But as someone who experienced developments in technology back in the 1980s and 1990s, in computing in particular, I cannot shake the feeling that, somehow, the "deep nerds" who were innovating back then did it better and with more heartfelt passion than I can feel today. Of course, tech from 30 years ago seems a bit primitive compared to today -- computer gear is faster and sleeker nowadays. But it seems that the core techniques and core concepts used in much of what is called "innovation" today were invented for the first time, one after the other, back then, perhaps going back as far as the 1950s. I get the impression that much of what makes billions in profits today and wows everyone is mere improvement on what was actually invented and trailblazed for the first time two, three, four, five or more decades ago. Is there much genuine "inventing" and "innovating" going on today, or are tech companies essentially repackaging the R&D and know-how that was brought into the world decades ago by long-forgotten deep nerds into sleeker, sexier 21st-century tech gadgets? Are Alexa, Siri, the Xbox, the Oculus Rift and the iPhone truly what could be considered "amazing technology," or should we have bigger and badder tech and innovation in the year 2018?

Mark Zuckerberg Addresses the Cambridge Analytica Scandal, Says Facebook 'Made Mistakes' in Protecting Data (buzzfeed.com) 117

Facebook CEO Mark Zuckerberg on Wednesday commented on the massive, deepening data harvesting scandal his company has been embroiled in since last Friday. From a report: "We have a responsibility to protect your data, and if we can't then we don't deserve to serve you. I've been working to understand exactly what happened and how to make sure this doesn't happen again," he said. The scandal -- involving the illicit collection of data from 50 million Facebook users, and its later use by Trump campaign analytics vendor Cambridge Analytica -- has helped chop nearly $50 billion off Facebook's market cap since last Friday, led to calls from U.S. lawmakers for Zuckerberg to testify before Congress, and raised eyebrows at the U.S. Federal Trade Commission, which is now probing the company. Outlining the steps Facebook plans to take to ensure this mess doesn't repeat itself, Zuckerberg added, "First, we will investigate all apps that had access to large amounts of information before we changed our platform to dramatically reduce data access in 2014, and we will conduct a full audit of any app with suspicious activity. We will ban any developer from our platform that does not agree to a thorough audit. And if we find developers that misused personally identifiable information, we will ban them and tell everyone affected by those apps. That includes people whose data Kogan misused here as well.

"Second, we will restrict developers' data access even further to prevent other kinds of abuse. For example, we will remove developers' access to your data if you haven't used their app in 3 months. We will reduce the data you give an app when you sign in -- to only your name, profile photo, and email address. We'll require developers to not only get approval but also sign a contract in order to ask anyone for access to their posts or other private data. And we'll have more changes to share in the next few days."

There is no explicit apology in Zuckerberg's comment today.
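The second step Zuckerberg describes amounts to a concrete access-control rule: revoke an app's access to a user's data once that user has not used the app for three months, and shrink the default sign-in scope to name, profile photo, and email address. The sketch below is a minimal illustration of that stated rule in Python; the scope names, data shapes, and 90-day interpretation of "3 months" are assumptions, not Facebook's actual implementation.

```python
from datetime import datetime, timedelta

INACTIVITY_CUTOFF = timedelta(days=90)   # "3 months", interpreted here as 90 days
DEFAULT_SIGNIN_SCOPES = {"name", "profile_photo", "email"}   # hypothetical scope names

def scopes_to_grant(requested_scopes):
    """At sign-in, grant only the reduced default data: name, photo, email."""
    return set(requested_scopes) & DEFAULT_SIGNIN_SCOPES

def revoke_stale_grants(grants, now=None):
    """Given grants of the form {"user_id": ..., "app_id": ..., "last_used": datetime},
    return those that should be revoked under the inactivity rule."""
    now = now or datetime.utcnow()
    return [g for g in grants if now - g["last_used"] > INACTIVITY_CUTOFF]

# Example: a hypothetical nightly job checking two grants for the same app.
grants = [
    {"user_id": 1, "app_id": "quiz_app", "last_used": datetime(2017, 11, 1)},
    {"user_id": 2, "app_id": "quiz_app", "last_used": datetime(2018, 3, 20)},
]
for grant in revoke_stale_grants(grants, now=datetime(2018, 3, 22)):
    print("revoking", grant["app_id"], "for user", grant["user_id"])
```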

Mozilla Launches a Petition Asking Facebook To Do More For User Privacy (betanews.com) 52

An anonymous reader shares a report: After it was revealed that the personal data of 50 million Facebook users was shared without consent, Mozilla is calling on the social network to ensure that user privacy is protected by default, particularly when it comes to apps.

Ashley Boyd, Mozilla's vice president of advocacy, says that billions of Facebook users are unknowingly at risk of having their data passed on to third parties: "If you play games, read news or take quizzes on Facebook, chances are you are doing those activities through third-party apps and not through Facebook itself. The default permissions that Facebook gives to those third parties currently include data from your education and work, current city and posts on your timeline."


WhatsApp Co-Founder Tells Everyone To Delete Facebook, Further Fueling the #DeleteFacebook Movement (theverge.com) 287

"In 2014, Facebook bought WhatsApp for $16 billion, making its co-founders -- Jan Koum and Brian Acton -- very wealthy men," reports The Verge. "Koum continues to lead the company, but Acton quit earlier this year to start his own foundation." Today, Acton told his followers on Twitter to delete Facebook. From the report: "It is time," Acton wrote, adding the hashtag #deletefacebook. Acton, who is worth $6.5 billion, did not immediately respond to a request for comment. Nor did Facebook and WhatsApp. It was unclear whether Acton's feelings about Facebook extend to his own app. But last month, Acton invested $50 million into Signal, an independent alternative to WhatsApp. The tweet came after a bruising five-day period for Facebook that has seen regulators swarm and its stock price plunge following concerns over data privacy in the wake of revelations about Cambridge Analytica's misuse of user data. Acton isn't the only one taking to Twitter to announce their breakup with Facebook. The #DeleteFacebook movement is gaining steam following the New York Times' report about how the data of 50 million users had been unknowingly leaked and purchased to aid President Trump's successful 2016 bid for the presidency. For many users, the news "highlighted the danger of Facebook housing the personal information of billions of users," reports SFGate. "And even before the Cambridge Analytica news, Facebook has been grappling with its waning popularity in the U.S. The company lost 1 million domestic users last quarter -- its first quarterly drop in daily users."

Mark Zuckerberg AWOL From Facebook's Data Leak Damage Control Session (thedailybeast.com) 164

An anonymous reader writes: It's not just that he's silent in public. Facebook CEO and co-founder Mark Zuckerberg declined to face his employees on Tuesday to explain the company's role in a widening international scandal over the 2016 election. Facebook employees on Tuesday got the opportunity for an internal briefing and question-and-answer session about Facebook's role with the Trump-aligned data firm Cambridge Analytica. It was the first session the company held to brief and reassure employees after Facebook, ahead of damaging news reports, abruptly suspended Cambridge Analytica. But Zuckerberg himself wasn't there, The Daily Beast has learned. Instead, the session was conducted by a Facebook attorney, Paul Grewal, according to a source familiar with the meeting. That was the same approach the company used on Capitol Hill this past fall, when it sent its top attorney, Colin Stretch, to brief Congress about the prevalence of Russian propaganda, including paid ads and inauthentic accounts, on its platform. Further reading: Where in the world is Mark Zuckerberg? Frustrated Facebook execs are asking.

FTC Probing Facebook For Use of Personal Data: Bloomberg (bloomberg.com) 78

An anonymous reader shares a report: Facebook is under investigation by a U.S. privacy watchdog over the use of personal data of 50 million users by a data analytics firm to help elect President Donald Trump. The U.S. Federal Trade Commission is probing whether Facebook violated terms of a 2011 consent decree over its handling of user data that was transferred to Cambridge Analytica without users' knowledge, according to a person familiar with the matter. Under the 2011 settlement, Facebook agreed to get user consent for certain changes to privacy settings as part of a settlement of federal charges that it deceived consumers and forced them to share more personal information than they intended. That complaint arose after the company changed some user settings without notifying its customers, according to an FTC statement at the time. If the FTC finds Facebook violated terms of the consent decree, it has the power to fine the company thousands of dollars a day per violation.

China Approves Giant Propaganda Machine To Improve Global Image (bloomberg.com) 154

China has approved the creation of one of the world's largest propaganda machines as it looks to improve its global image, Bloomberg reported on Tuesday, citing a person familiar with the matter. From the report: The new broadcaster will be called "Voice of China," the person said, mimicking the U.S. government-funded Voice of America that started up during World War II to advance American interests. Bloomberg News had previously reported the new entity would be created through merging China Central Television, China Radio International and China National Radio. The combined group was designed to strengthen the party's ability to shape public opinion and would serve as a key vehicle for China to project its image to the world.

Facebook Security Chief Said To Leave After Clashes Over Disinformation (theverge.com) 45

Facebook's chief security officer, Alex Stamos, will leave the company after internal disagreements over how the social network should deal with its role in spreading disinformation. The New York Times reports (Warning: source may be paywalled; alternative source): Mr. Stamos had been a strong advocate inside the company for investigating and disclosing Russian activity on Facebook, often to the consternation of other top executives, including Sheryl Sandberg, the social network's chief operating officer, according to the current and former employees, who asked not to be identified discussing internal matters. After his day-to-day responsibilities were reassigned to others in December, Mr. Stamos said he would leave the company. He was persuaded to stay through August to oversee the transition of his duties because executives thought his departure would look bad, the current and former employees said. He has been overseeing the transfer of his security team to Facebook's product and infrastructure divisions. His group, which once had 120 people, now has three, the current and former employees said. Mr. Stamos would be the first high-ranking employee to leave Facebook since controversy erupted over disinformation on its site. His departure is a sign of heightened leadership tensions at the company.

Facebook Hires Firm To Conduct Forensic Audit of Cambridge Analytica Data (cbsnews.com) 135

After it was revealed that the political data analytics firm Cambridge Analytica harvested personal data from more than 50 million Facebook users, the social media company has been scrutinized for not doing more to protect its users. Today, CBS News reports that Facebook has recently hired Stroz Friedberg, a digital forensics firm, to conduct an audit of Cambridge Analytica. According to a press release issued by Facebook on Monday, Cambridge Analytica has agreed to "comply and afford the firm complete access to their servers and systems." From the report: The social network said it asked Christopher Wylie and University of Cambridge professor Aleksandr Kogan to submit to an audit. Facebook says Kogan has verbally agreed to participate, but Wylie has declined. Wylie is a former employee of Cambridge Analytica who described the company's use of illicit data in interviews late last week. Cambridge Analytica, Kogan and Wylie were banned from Facebook on Friday. Cambridge Analytica did not immediately confirm that it had agreed to comply with the audit. The firm has denied the allegations that it improperly collected and used the data. A spokeswoman for Stroz Friedberg declined to comment on the firm's involvement with an audit.

"We are moving aggressively to determine the accuracy of these claims," Facebook officials said in a statement. "We remain committed to vigorously enforcing our policies to protect people's information. We also want to be clear that today when developers create apps that ask for certain information from people, we conduct a robust review to identify potential policy violations and to assess whether the app has a legitimate use for the data. We actually reject a significant number of apps through this process. This is part of a comprehensive internal and external review that we are conducting to determine the accuracy of the claims that the Facebook data in question still exists. If this data still exists, it would be a grave violation of Facebook's policies and an unacceptable violation of trust and the commitments these groups made."


Hackers Are So Fed Up With Twitter Bots They're Hunting Them Down Themselves (theintercept.com) 45

An anonymous reader writes: Even if Twitter hasn't invested much in anti-bot software, some of its most technically proficient users have. They're writing and refining code that can use Twitter's public application programming interface, or API, as well as Google and other online interfaces, to ferret out fake accounts and bad actors. The effort, at least among the researchers I spoke with, has begun with hunting bots designed to promote pornographic material -- a type of fake account that is particularly easy to spot -- but the plan is to eventually broaden the hunt to other types of bots. The bot-hunting programming and research has been a strictly volunteer, part-time endeavor, but the efforts have collectively identified tens of thousands of fake accounts, underlining just how much low-hanging fruit remains for Twitter to prune.

Among the part-time bot-hunters is French security researcher and freelance Android developer Baptiste Robert, who in February of this year noticed that Twitter accounts with profile photos of scantily clad women were liking his tweets or following him on Twitter. Aside from the sexually suggestive images, the bots had similarities. Not only did these Twitter accounts typically include profile photos of adult actresses, but they also had similar bios, followed similar accounts, liked more tweets than they retweeted, had fewer than 1,000 followers, and directed readers to click the link in their bios.
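Those heuristics translate naturally into code. Below is a minimal sketch in Python, assuming user objects in the shape returned by Twitter's v1.1 REST API (fields such as followers_count, favourites_count, statuses_count, description and url); the thresholds and the scoring scheme are illustrative guesses, not the researchers' actual criteria.

```python
import re

# Illustrative pattern and thresholds only -- the researchers' real criteria are not public.
BIO_TEMPLATE = re.compile(r"(click|check)\s+(the\s+)?link", re.IGNORECASE)

def bot_suspicion_score(user: dict) -> int:
    """Score a Twitter user object (as returned by v1.1 endpoints such as
    users/show or followers/list) against the heuristics described above.
    Higher scores mean more bot-like."""
    score = 0
    # Fewer than 1,000 followers
    if user.get("followers_count", 0) < 1000:
        score += 1
    # Likes far more than it posts -- a rough proxy for "likes more than it retweets"
    if user.get("favourites_count", 0) > 2 * max(user.get("statuses_count", 0), 1):
        score += 1
    # Bio urges readers to click a link, or a URL is set in the profile
    bio = user.get("description") or ""
    if BIO_TEMPLATE.search(bio) or user.get("url"):
        score += 1
    # Still using the default profile image is another weak signal
    if user.get("default_profile_image", False):
        score += 1
    return score

def flag_suspects(users: list, threshold: int = 3) -> list:
    """Return screen names whose heuristic score meets the threshold."""
    return [u["screen_name"] for u in users if bot_suspicion_score(u) >= threshold]
```

A real hunt would also page through endpoints such as followers/list to cluster accounts that share bios or follow the same accounts, and would have to stay within Twitter's API rate limits.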


Say Goodbye To the Information Age: It's All About Reputation Now (aeon.co) 193

An anonymous reader shares an essay from Aeon magazine by Gloria Origgi, an Italian philosopher and a tenured senior researcher at CNRS: We are experiencing a fundamental paradigm shift in our relationship to knowledge. From the 'information age', we are moving towards the 'reputation age', in which information will have value only if it is already filtered, evaluated and commented upon by others. Seen in this light, reputation has become a central pillar of collective intelligence today. It is the gatekeeper to knowledge, and the keys to the gate are held by others. The way in which the authority of knowledge is now constructed makes us reliant on the inevitably biased judgments of other people, most of whom we do not know.

[...] The paradigm shift from the age of information to the age of reputation must be taken into account when we try to defend ourselves from 'fake news' and other misinformation and disinformation techniques that are proliferating through contemporary societies. What a mature citizen of the digital age should be competent at is not spotting and confirming the veracity of the news. Rather, she should be competent at reconstructing the reputational path of the piece of information in question, evaluating the intentions of those who circulated it, and figuring out the agendas of those authorities that lent it credibility.


Ghana's Windows Blackboard Teacher And His Students Have a Rewarding Outcome (qz.com) 81

Quartz: A lot has changed in the life of Richard Appiah Akoto in the fortnight since he posted photos of himself on Facebook drawing a Microsoft Word window on a blackboard with multi-colored chalk to teach his students about computers -- which the school did not have. The photos went viral on social media, and media stories like Quartz's went all around the world. Akoto, 33, is the information and communication technology (ICT) teacher at Betenase M/A Junior High School in the town of Sekyedomase, about a two-and-a-half-hour drive north of Ghana's second city, Kumasi. The school had no computers, even though since 2011 Ghana has expected 14- and 15-year-olds to write and pass a national exam that includes ICT among its subjects; students cannot progress to high school without passing it.

The story of the school and Twitter pressure from prominent players in the African tech space drew a promise from Microsoft to "equip [Akoto] with a device from one of our partners, and access to our MCE program & free professional development resources on." To fulfill this promise, the technology giant flew Akoto to Singapore this week where he is participating in the annual Microsoft Education Exchange.


Facebook and Its Executives Are Getting Destroyed After Botching the Handling of a Massive Data Breach (businessinsider.com) 187

The way Facebook has disclosed the abuse of its system by Cambridge Analytica, which has been reported this week, speaks volumes about Facebook's core beliefs. Sample this excerpt from Business Insider: Facebook executives waded into a firestorm of criticism on Saturday, after news reports revealed that a data firm with ties to the Trump campaign harvested private information from millions of Facebook users. Several executives took to Twitter to insist that the data leak was not technically a "breach." But critics were outraged by the response and accused the company of playing semantics and missing the point.

Washington Post reporter Hamza Shaban: Facebook insists that the Cambridge Analytica debacle wasn't a data breach, but a "violation" by a third party app that abused user data. This offloading of responsibility says a lot about Facebook's approach to our privacy.

Observer reporter Carole Cadwalladr, who broke the news about Cambridge Analytica: Yesterday Facebook threatened to sue us. Today we publish this. Meet the whistleblower blowing the lid off Facebook and Cambridge Analytica. [...] Facebook's chief strategy officer wading in. So, tell us @alexstamos (who expressed his displeasure with the use of "breach" in media reports) why didn't you inform users of this "non-breach" after The Guardian first reported the story in December 2015?

Zeynep Tufekci: If your business is building a massive surveillance machinery, the data will eventually be used and misused. Hacked, breached, leaked, pilfered, conned, "targeted", "engaged", "profiled", sold.. There is no informed consent because it's not possible to reasonably inform or consent. [...] Facebook's defense that Cambridge Analytica harvesting of FB user data from millions is not technically a "breach" is a more profound and damning statement of what's wrong with Facebook's business model than a "breach."

MIT Professor Dean Eckles: Definitely fascinating that Joseph Chancellor, who contributed to collection and contract-violating retention (?) of Facebook user data, now works for Facebook.

Amir Efrati, a reporter at the Information: May seem like a small thing to non-reporters but Facebook loses credibility by issuing a Friday night press release to "front-run" publications that were set to publish negative articles about its platform. If you want us to become more suspicious, mission accomplished.

Further reading: Facebook's latest privacy debacle stirs up more regulatory interest from lawmakers (TechCrunch).

Are Google and Facebook Surveilling Their Own Employees? (theguardian.com) 106

The Guardian just ran an article titled " 'They'll squash you like a bug': how Silicon Valley keeps a lid on leakers," which begins with the story of an employee confronted by Facebook's secretive "rat-catching" team: They had records of a screenshot he'd taken, links he had clicked or hovered over, and they strongly indicated they had accessed chats between him and the journalist, dating back to before he joined the company. "It's horrifying how much they know," he told the Guardian, on the condition of anonymity... "You get on their bad side and all of a sudden you are face to face with Mark Zuckerberg's secret police"... One European Facebook content moderator signed a contract, seen by the Guardian, which granted the company the right to monitor and record his social media activities, including his personal Facebook account, as well as emails, phone calls and internet use. He also agreed to random personal searches of his belongings including bags, briefcases and car while on company premises. Refusal to allow such searches would be treated as gross misconduct...

Some employees switch their phones off or hide them out of fear that their location is being tracked. One current Facebook employee who recently spoke to Wired asked the reporter to turn off his phone so the company would have a harder time determining whether it had been near the phones of anyone from Facebook. Two security researchers confirmed that this would be technically simple for Facebook to do if both people had the Facebook app on their phone and location services switched on. Even if location services aren't switched on, Facebook can infer someone's location from wifi access points.
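As a rough illustration of that last point: wifi-based positioning works because an app that can see the identifiers (BSSIDs) of nearby access points, and that already knows where those access points are from other devices' GPS-tagged reports, can estimate a location with no GPS at all. The sketch below is a generic weighted-centroid estimate with made-up data; it is not a description of Facebook's systems.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class AccessPoint:
    bssid: str   # MAC address of the wifi access point
    lat: float
    lon: float

# Hypothetical database of access points whose positions are already known,
# e.g. learned from other devices that reported the same BSSIDs alongside GPS fixes.
KNOWN_APS = {
    "aa:bb:cc:00:00:01": AccessPoint("aa:bb:cc:00:00:01", 37.4850, -122.1490),
    "aa:bb:cc:00:00:02": AccessPoint("aa:bb:cc:00:00:02", 37.4853, -122.1484),
}

def estimate_location(scan: dict[str, int]) -> tuple[float, float] | None:
    """Estimate a device's position from a wifi scan (visible BSSID -> signal
    strength in dBm). Stronger signals, i.e. values closer to 0, get more weight."""
    lat = lon = total_weight = 0.0
    for bssid, rssi in scan.items():
        ap = KNOWN_APS.get(bssid)
        if ap is None:
            continue                          # unknown access point: no position information
        weight = 1.0 / max(1.0, abs(rssi))    # crude weighting: -45 dBm outweighs -80 dBm
        lat += ap.lat * weight
        lon += ap.lon * weight
        total_weight += weight
    if total_weight == 0:
        return None
    return lat / total_weight, lon / total_weight

# A phone that sees both known access points lands between them, nearer the stronger one.
print(estimate_location({"aa:bb:cc:00:00:01": -45, "aa:bb:cc:00:00:02": -80}))
```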

The article cites a 2012 report that Microsoft read a French blogger's Hotmail account to identify a former employee who had leaked trade secrets. And it also reports that tech companies hire external agencies to surveil their employees. "One such firm, Pinkerton, counts Google and Facebook among its clients." Though Facebook and Google both deny this, "Among other services, Pinkerton offers to send investigators to coffee shops or restaurants near a company's campus to eavesdrop on employees' conversations..."

Al Gidari, consulting director of privacy at the Stanford Center for Internet and Society, says that these tools "are common, widespread, intrusive and legal."

How An Open Source Plugin Tamed a Chaotic Comments Section With A Simple Quiz (arstechnica.com) 154

Long-time Slashdot reader jebrick quotes an article from Ars Technica about how Norway's government-owned public broadcasting company "employs open source tactics to fight trolling": The five-person team behind a simple WordPress plugin, which took three hours to code, never expected to receive worldwide attention as a result. But NRKbeta, the tech-testing group at Norway's largest national media organization, tapped into a meaty vein with the unveiling of last February's Know2Comment, an open source plugin that can attach to any WordPress site's comment section. "It was a basic idea," NRKbeta developer Ståle Grut told a South By Southwest crowd on Tuesday. "Readers had to prove they read a story before they were able to comment on it"... He and fellow staffers spent three hours building the plugin, which Grut reminded the crowd is wholly open source... "[W]e realized not every article is in need of this. We are a tech site; we don't have a lot of controversy, so there's not a big need for it. We use it now on stories where we anticipate there'll be uninformed debate to add this speed bump."
What do you think? And would a quiz-for-commenting-privileges be a good addition to Slashdot?
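The plugin itself is written for WordPress (PHP); the sketch below re-creates just the gating idea in Python using Flask, purely as an illustration. The route, quiz data and field names are invented, not taken from Know2Comment: a comment is accepted only if the reader first answers a factual question drawn from the article.

```python
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Hypothetical per-article quiz data; the real plugin lets editors attach
# questions to individual WordPress posts.
QUIZZES = {
    "know2comment-launch": {
        "question": "How long did NRKbeta say the plugin took to build?",
        "answer": "three hours",
    },
}
COMMENTS = {}   # slug -> list of accepted comment bodies

@app.route("/articles/<slug>/comments", methods=["POST"])
def post_comment(slug):
    quiz = QUIZZES.get(slug)
    if quiz is None:
        abort(404)
    payload = request.get_json(force=True) or {}
    # The "speed bump": no correct answer, no comment.
    if payload.get("quiz_answer", "").strip().lower() != quiz["answer"]:
        return jsonify({"error": "Answer the quiz to show you read the story.",
                        "question": quiz["question"]}), 403
    COMMENTS.setdefault(slug, []).append(payload.get("body", ""))
    return jsonify({"status": "comment accepted"}), 201
```

In the WordPress version the same check sits in front of the comment form rather than a separate endpoint, but the effect is the same: readers answer a question about the story before their comment goes through.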

Facebook Suspends Donald Trump's Data Operations Team For Misusing People's Personal Information (theverge.com) 195

An anonymous reader quotes a report from The Verge: Facebook said late Friday that it had suspended Strategic Communication Laboratories (SCL), along with its political data analytics firm, Cambridge Analytica, for violating its policies around data collection and retention. The companies, which ran data operations for Donald Trump's 2016 presidential election campaign, are widely credited with helping Trump target voters on Facebook more effectively than his rival, Hillary Clinton. While the exact nature of their role remains somewhat mysterious, Facebook's disclosure suggests that Cambridge Analytica improperly obtained user data that could have given it an unfair advantage in reaching voters. Facebook said it cannot determine whether or how the data in question could have been used in conjunction with election ad campaigns.

In a blog post, Facebook deputy general counsel Paul Grewal laid out how SCL came into possession of the user data. In 2015, Aleksandr Kogan, a psychology professor at the University of Cambridge, created an app named "thisisyourdigitallife" that promised to predict aspects of users' personalities. About 270,000 people downloaded it and logged in through Facebook, giving Kogan access to information about their city of residence, Facebook content they had liked, and information about their friends. Kogan passed the data to SCL and a man named Christopher Wylie from a data harvesting firm known as Eunoia Technologies, in violation of Facebook rules that prevent app developers from giving away or selling users' personal information. Facebook learned of the violation that year and removed his app from Facebook. It also asked Kogan and his associates to certify that they had destroyed the improperly collected data. Everyone said that they did. The suspension is not permanent, a Facebook spokesman said. But the suspended users would need to take unspecified steps to certify that they would comply with Facebook's terms of service.


Facebook Says It is Sorry For Suggesting Child Sex Videos in Search (cnet.com) 47

Facebook issued an apology on Friday after offensive terms appeared in the social network's search predictions late Thursday. From a report: When users typed "videos of" into the search bar, Facebook prompted them to search phrases including "videos of sexuals," "videos of girl sucking dick under water" and, perhaps most disturbingly, "video of little girl giving oral." Shocked users reported the problem on Twitter, posting screenshots of the search terms, which also included multiple suggestions relating to the school shooting in Florida last month. The social network appeared to have fixed the problem by Friday morning.

China To Bar People With Bad 'Social Credit' From Planes, Trains (reuters.com) 170

China says it will begin applying its so-called social credit system to flights and trains and stop people who have committed misdeeds from taking such transport for up to a year. From a report: People who would be put on the restricted lists included those found to have committed acts like spreading false information about terrorism and causing trouble on flights, as well as those who used expired tickets or smoked on trains, according to two statements issued on the National Development and Reform Commission's website on Friday. Those found to have committed financial wrongdoings, such as employers who failed to pay social insurance or people who have failed to pay fines, would also face these restrictions, said the statements which were dated March 2. It added that the rules would come into effect on May 1.

EU Wants To Require Platforms To Filter Uploaded Content (Including Code) (github.com) 110

A new copyright proposal in the EU would require code-sharing platforms like GitHub and SourceForge to monitor all content that users upload for potential copyright infringement. "The proposal is aimed at music and videos on streaming platforms, based on a theory of a 'value gap' between the profits those platforms make from uploaded works and what copyright holders of some uploaded works receive," reports The GitHub Blog. "However, the way it's written captures many other types of content, including code."

Upload filters, also known as "censorship machines," are some of the most controversial elements of the copyright proposal, raising a number of concerns including:
-Privacy: Upload filters are a form of surveillance, effectively a "general monitoring obligation" prohibited by EU law
-Free speech: Requiring platforms to monitor content contradicts intermediary liability protections in EU law and creates incentives to remove content
-Ineffectiveness: Content detection tools are flawed (generate false positives, don't fit all kinds of content) and overly burdensome, especially for small and medium-sized businesses that might not be able to afford them or the resulting litigation

Upload filters are especially concerning for software developers given that:
-Software developers create copyrightable works -- their code -- and those who choose an open source license want to allow that code to be shared
-False positives (and negatives) are especially likely for software code because code often has many contributors and layers, often with different licensing for different components (a toy example of this follows the list)
-Requiring code-hosting platforms to scan and automatically remove content could drastically impact software developers when their dependencies are removed due to false positives
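To make the false-positive point concrete, here is a toy sketch of the kind of fingerprint matching an upload filter might perform; all names, data and thresholds are invented, and no real platform's filter is being described. It hashes normalized lines of an uploaded file and blocks the upload if too many hashes overlap a reference set of "protected" works. Because open source projects legitimately share boilerplate and vendored dependencies, a perfectly lawful upload can trip it.

```python
import hashlib

def fingerprints(source: str) -> set:
    """Hash each non-trivial, whitespace-normalized line of a source file."""
    lines = (line.strip() for line in source.splitlines())
    return {hashlib.sha1(line.encode()).hexdigest() for line in lines if len(line) > 10}

# Reference set built from works a rightsholder has registered with the platform.
# Here it happens to contain a widely copied helper -- the sort of permissively
# licensed utility that is vendored into thousands of unrelated projects.
PROTECTED = fingerprints("""
def clamp(value, low, high):
    return max(low, min(high, value))
""")

def blocked(upload: str, threshold: float = 0.5) -> bool:
    """Reject the upload if too many of its fingerprints match protected works."""
    fp = fingerprints(upload)
    if not fp:
        return False
    return len(fp & PROTECTED) / len(fp) > threshold

# A legitimate project that vendors the same helper is flagged even though
# nothing about it infringes: a false positive.
legit_upload = """
def clamp(value, low, high):
    return max(low, min(high, value))
"""
print(blocked(legit_upload))  # True -- the filter matches text, not licenses or context
```

Real filters use more sophisticated fingerprinting than this, but the underlying problem stands: hashes match content, not the licensing or context that determines whether sharing it is lawful.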
The EU Parliament continues to introduce new proposals for Article 13, but these issues remain. MEP Julia Reda explains further in her analysis of a recent proposal from Parliament.
