AI

New Internal Documents Contradict Facebook's Claims that AI Can Enforce Its Rules (livemint.com) 34

Today in the Wall Street Journal, Facebook's head of integrity, Guy Rosen, admitted that from April to June of this year, one in every 2,000 content views on Facebook still contained hate speech.

Rosen called that figure an improvement over mid-2020, when one in every 1,000 content views on Facebook was hate speech. Yet at that same time, Mark Zuckerberg was telling the U.S. Congress that "In terms of fighting hate, we've built really sophisticated systems!" "Facebook Inc. executives have long said that artificial intelligence would address the company's chronic problems keeping what it deems hate speech and excessive violence as well as underage users off its platforms," reports the Wall Street Journal.

"That future is farther away than those executives suggest, according to internal documents reviewed by The Wall Street Journal. Facebook's AI can't consistently identify first-person shooting videos, racist rants and even, in one notable episode that puzzled internal researchers for weeks, the difference between cockfighting and car crashes." On hate speech, the documents show, Facebook employees have estimated the company removes only a sliver of the posts that violate its rules — a low-single-digit percent, they say. When Facebook's algorithms aren't certain enough that content violates the rules to delete it, the platform shows that material to users less often — but the accounts that posted the material go unpunished.

The employees were analyzing Facebook's success at enforcing its own rules on content that it spells out in detail internally and in public documents like its community standards. The documents reviewed by the Journal also show that Facebook two years ago cut the time human reviewers focused on hate-speech complaints from users and made other tweaks that reduced the overall number of complaints. That made the company more dependent on AI enforcement of its rules and inflated the apparent success of the technology in its public statistics.

According to the documents, those responsible for keeping the platform free from content Facebook deems offensive or dangerous acknowledge that the company is nowhere close to being able to reliably screen it. "The problem is that we do not and possibly never will have a model that captures even a majority of integrity harms, particularly in sensitive areas," wrote a senior engineer and research scientist in a mid-2019 note. He estimated the company's automated systems removed posts that generated just 2% of the views of hate speech on the platform that violated its rules. "Recent estimates suggest that unless there is a major change in strategy, it will be very difficult to improve this beyond 10-20% in the short-medium term," he wrote.

This March, another team of Facebook employees drew a similar conclusion, estimating that those systems were removing posts that generated 3% to 5% of the views of hate speech on the platform, and 0.6% of all content that violated Facebook's policies against violence and incitement.
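The two kinds of numbers here are easy to conflate, so a small sketch may help: prevalence (the share of views users actually see that are hate speech) versus the share of would-be hate-speech views that automated removal prevents. The figures are from the article; the helper function and framing are ours.

```python
# The headline figures above, expressed as simple ratios. All numbers come
# from the article; the helper function is ours.

def prevalence(hate_views, total_views):
    """Share of all content views that contained hate speech."""
    return hate_views / total_views

mid_2020 = prevalence(1, 1_000)   # one in every 1,000 views
mid_2021 = prevalence(1, 2_000)   # one in every 2,000 views

print(f"mid-2020 prevalence: {mid_2020:.2%}")   # mid-2020 prevalence: 0.10%
print(f"mid-2021 prevalence: {mid_2021:.2%}")   # mid-2021 prevalence: 0.05%

# The internal estimates are a different metric entirely: the share of the
# views hate speech would have received that automated removal prevented --
# roughly 2% in the mid-2019 note, and 3-5% in the March 2021 estimate.
```

Prevalence measures what slips through; the internal 2-5% figures measure how little the AI catches. Both can be quoted truthfully while painting very different pictures.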

Facebook told the Journal that it also takes additional steps beyond AI screening to reduce views of hate speech, and argued that the internal documents the Journal had reviewed were outdated. But one of those documents showed that in 2019 Facebook was spending $104 million a year to review suspected hate speech, with a Facebook manager noting that "adds up to real money" and proposing "hate speech cost controls."

Facebook told the Journal that the saved money went toward improving its algorithms. But the Journal reports that Facebook "also introduced 'friction' to the content reporting process, adding hoops for aggrieved users to jump through that sharply reduced how many complaints about content were made, according to the documents."

Facebook told the Journal that "some" of that friction has since been rolled back.
Facebook

Researchers Show Facebook's Ad Tools Can Target a Single User (techcrunch.com) 21

A new research paper written by a team of academics and computer scientists from Spain and Austria has demonstrated that it's possible to use Facebook's targeting tools to deliver an ad exclusively to a single individual if you know enough about the interests Facebook's platform assigns them. TechCrunch reports: The paper -- entitled "Unique on Facebook: Formulation and Evidence of (Nano)targeting Individual Users with non-PII Data" -- describes a "data-driven model" that defines a metric showing the probability a Facebook user can be uniquely identified based on interests attached to them by the ad platform. The researchers demonstrate that they were able to use Facebook's Custom Audience tool to target a number of ads in such a way that each ad only reached a single, intended Facebook user.
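The paper's core idea, that a handful of rare interests can single a user out of an enormous population, can be illustrated with a toy simulation. This is our own sketch, not the paper's model or data: the population size, interest pool, and Zipf-like popularity skew below are all invented assumptions.

```python
# Toy simulation of interest-based uniqueness (our illustration, not the
# paper's actual model or data). Population size, interest pool, and the
# Zipf-like popularity skew are all assumptions.
import random
from collections import Counter
from itertools import accumulate

random.seed(0)

N_USERS = 20_000
N_INTERESTS = 2_000
INTERESTS_PER_USER = 12

# Skewed popularity: low-ranked interests are far more common.
weights = [1.0 / (rank + 1) for rank in range(N_INTERESTS)]
cum_weights = list(accumulate(weights))

users = [
    frozenset(random.choices(range(N_INTERESTS), cum_weights=cum_weights,
                             k=INTERESTS_PER_USER))
    for _ in range(N_USERS)
]

# Global frequency of each interest, used to rank a user's interests by rarity.
freq = Counter(i for user in users for i in user)

def rarest(user, k):
    """The k globally rarest interests attached to this user."""
    return frozenset(sorted(user, key=lambda i: freq[i])[:k])

for k in (1, 2, 4, 6):
    combos = Counter(rarest(u, k) for u in users)
    unique = sum(1 for u in users if combos[rarest(u, k)] == 1)
    print(f"{k} rarest interests -> {unique / N_USERS:.1%} of users unique")
```

Even in this toy population, combining a few rare interests quickly singles out a large share of users; the paper measures the real-platform analogue of this effect against Facebook's actual interest taxonomy and its billions of accounts.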

The research raises fresh questions about potentially harmful uses of Facebook's ad targeting tools, and -- more broadly -- questions about the legality of the tech giant's personal data processing empire given that the information it collects on people can be used to uniquely identify individuals, picking them out of the crowd of others on its platform even purely based on their interests. The findings could increase pressure on lawmakers to ban or phase out behavioral advertising -- which has been under attack for years, over concerns it poses a smorgasbord of individual and societal harms. And, at the least, the paper seems likely to drive calls for robust checks and balances on how such invasive tools can be used. The findings also underscore the importance of independent research being able to interrogate algorithmic adtech -- and should increase pressure on platforms not to close down researchers' access.

Bitcoin

SEC Said To Allow Bitcoin Futures ETFs As Deadline Looms (bloomberg.com) 27

The Securities and Exchange Commission is poised to allow the first U.S. Bitcoin futures exchange-traded fund to begin trading in a watershed moment for the cryptocurrency industry, according to people familiar with the matter. Bloomberg reports: The regulator isn't likely to block the products from starting to trade next week, said the people, who asked not to be named while discussing the decision. Unlike Bitcoin ETF applications that the regulator has previously rejected, the proposals by ProShares and Invesco Ltd. are based on futures contracts and were filed under mutual fund rules that SEC Chairman Gary Gensler has said provide "significant investor protections." Barring a last-minute reversal, the fund launch will be the culmination of a nearly decade-long campaign by the $6.7 trillion ETF industry. Advocates have sought approval as a confirmation of mainstream acceptance of cryptocurrencies since Cameron and Tyler Winklevoss, the twins best known for their part in the history of Facebook Inc., filed the first application for a Bitcoin ETF in 2013.

Approval has for years been out of the grasp of issuers who, amid myriad false signs of progress and outright rejections, have tried to get a variety of different structures cleared for trading. Over the years, there have been plans for funds that proposed to hold Bitcoin via a digital vault or that could use leverage to juice returns. Others sought to mitigate Bitcoin's famous volatility, a key point of contention for the SEC. [...] Four futures-backed Bitcoin ETFs could begin trading on U.S. exchanges this month, with deadlines for applications from VanEck and Valkyrie also approaching. Meanwhile, dozens of cryptocurrency exchange-traded products have launched in Canada and across Europe.

Facebook

One of Facebook's Earliest Investors Says People Have Lost Trust in Company (bloomberg.com) 55

Facebook has lost people's trust "for good reasons" and isn't responding well to whistle-blower claims that the social-media giant prioritizes profit over user safety, according to one of its earliest investors, Reid Hoffman. Bloomberg: "I'm disappointed," Hoffman said Wednesday in an interview. Facebook should have been more proactive in response to troubling signs revealed in its own research, he said. "Good for Facebook for doing the research," Hoffman said. "You discovered some things that are harmful -- what are you doing about it?" [...] Hoffman, who is a partner at venture capital firm Greylock Partners and a co-founder of LinkedIn, said he hasn't yet spoken to Zuckerberg but has offered his help with the crisis that the company is facing. To regain trust, Facebook has to be "extra transparent," he said. "They have to come forward and say, 'Look, here's our dashboards, here's our metrics, here is the ways we are trying to work on this and do things.'"
Businesses

Amazon, Facebook Among Companies Facing FTC Warning Over Reviews (bloomberglaw.com) 14

An anonymous reader quotes a report from Bloomberg Law: Companies including Amazon and Facebook could face fines over fake reviews or other misleading endorsements online, according to a warning from the Federal Trade Commission. The warning comes as social media has blurred the line between authentic content and advertising, according to the FTC's Wednesday announcement. Practices such as influencer marketing leave some consumers confused about when posters are paid to endorse a product, if their connection to the brand isn't clearly disclosed.

The agency sent more than 700 companies a notice that they could incur penalties of up to $43,792 per violation if they use endorsements in ways that run counter to past FTC enforcement cases. The notices demonstrate FTC chair Lina Khan's efforts to ramp up enforcement under the commission's existing authorities, following a recent U.S. Supreme Court ruling that limited the agency's ability to seek monetary awards in court. The commission's move on endorsements relies on an agency authority that allows for civil penalties against a company that engages in conduct that it knows has been found unlawful in a previous FTC administrative order, other than a consent order.

Facebook

Groups Launch 'How To Stop Facebook' Effort (axios.com) 52

A coalition of nonprofits on Wednesday debuted HowToStopFacebook.org, a fresh push to encourage greater government regulation of the social networking giant aimed at forcing the company to change its business model. From a report: The campaign hopes to take the outrage expressed by legislators over the revelations of whistleblower Frances Haugen and translate it into action. The campaign is pushing for two goals: A Congressional investigation with subpoena power into harms caused by Facebook; and a strong federal data privacy law that makes it illegal for companies like Facebook and YouTube to collect the vast amounts of data they use to personalize recommendations. The more than 30 groups involved include Accountable Tech, Article 19, Center for Digital Democracy, Fairplay, Global Voices, Media Justice, National Hispanic Media Coalition, Presente, Public Knowledge, United We Dream, Ranking Digital Rights, SumOfUs, Win Without War, and the Sex Workers Project of the Urban Justice Center.
Android

Study Reveals Android Phones Constantly Snoop On Their Users (bleepingcomputer.com) 112

A new study (PDF) by a team of university researchers in the UK has unveiled a host of privacy issues that arise from using Android smartphones. BleepingComputer reports: The researchers have focused on Samsung, Xiaomi, Realme, and Huawei Android devices, and LineageOS and /e/OS, two forks of Android that aim to offer long-term support and a de-Googled experience. The conclusion of the study is worrying for the vast majority of Android users: "With the notable exception of /e/OS, even when minimally configured and the handset is idle these vendor-customized Android variants transmit substantial amounts of information to the OS developer and also to third parties (Google, Microsoft, LinkedIn, Facebook, etc.) that have pre-installed system apps." As the summary table indicates, sensitive user data like persistent identifiers, app usage details, and telemetry information are not only shared with the device vendors, but also go to various third parties, such as Microsoft, LinkedIn, and Facebook. And to make matters worse, Google appears at the receiving end of all collected data almost across the entire table.

It is important to note that this concerns the collection of data for which there's no option to opt-out, so Android users are powerless against this type of telemetry. This is particularly concerning when smartphone vendors include third-party apps that are silently collecting data even if they're not used by the device owner, and which cannot be uninstalled. For some of the built-in system apps like miui.analytics (Xiaomi), Heytap (Realme), and Hicloud (Huawei), the researchers found that the encrypted data can sometimes be decoded, exposing the data to man-in-the-middle (MitM) attacks. As the study points out, even if the user resets the advertising identifiers for their Google Account on Android, the data-collection system can trivially re-link the new ID back to the same device and append it to the original tracking history. The deanonymization of users takes place using various methods, such as looking at the SIM, IMEI, location data history, IP address, network SSID, or a combination of these.
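The re-linking described above needs nothing sophisticated: any stable hardware identifier transmitted alongside the advertising ID acts as a join key. A toy sketch of the idea (our illustration, not code from the study; all identifiers and function names are invented):

```python
# Toy sketch of advertising-ID re-linking (our illustration, not code from
# the study). Any stable identifier sent alongside the ad ID is a join key.
import hashlib
import uuid

def device_fingerprint(imei, ssid, sim_serial):
    """Stable key derived from identifiers that survive an ad-ID reset."""
    raw = f"{imei}|{ssid}|{sim_serial}".encode()
    return hashlib.sha256(raw).hexdigest()

history = {}  # fingerprint -> every advertising ID ever seen for that device

def record(imei, ssid, sim_serial, ad_id):
    fp = device_fingerprint(imei, ssid, sim_serial)
    history.setdefault(fp, []).append(ad_id)
    return fp

old_id = str(uuid.uuid4())
new_id = str(uuid.uuid4())  # the user "resets" their advertising ID

fp_before = record("356938035643809", "HomeWiFi", "sim-001", old_id)
fp_after = record("356938035643809", "HomeWiFi", "sim-001", new_id)

assert fp_before == fp_after   # same hardware, same fingerprint
print(history[fp_before])      # both IDs now linked to one device
```

Resetting the advertising ID changes only one column of the join; as long as the IMEI, network SSID, or SIM serial keeps flowing, the old and new IDs collapse back into a single tracking history.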
In response to the report, a Google spokesperson said: "While we appreciate the work of the researchers, we disagree that this behavior is unexpected -- this is how modern smartphones work. As explained in our Google Play Services Help Center article, this data is essential for core device services such as push notifications and software updates across a diverse ecosystem of devices and software builds. For example, Google Play services uses data on certified Android devices to support core device features. Collection of limited basic information, such as a device's IMEI, is necessary to deliver critical updates reliably across Android devices and apps."
Facebook

The Intercept Reveals Facebook's Secret Blacklist of 'Dangerous Individuals and Organizations' (theintercept.com) 69

Sam Biddle writes via The Intercept: To ward off accusations that it helps terrorists spread propaganda, Facebook has for many years barred users from speaking freely about people and groups it says promote violence. The restrictions appear to trace back to 2012, when in the face of growing alarm in Congress and the United Nations (PDF) about online terrorist recruiting, Facebook added to its Community Standards a ban on "organizations with a record of terrorist or violent criminal activity." This modest rule has since ballooned into what's known as the Dangerous Individuals and Organizations policy, a sweeping set of restrictions on what Facebook's nearly 3 billion users can say about an enormous and ever-growing roster of entities deemed beyond the pale. [...] The Intercept has reviewed a snapshot of the full DIO list and is today publishing a reproduction of the material in its entirety, with only minor redactions and edits to improve clarity. It is also publishing an associated policy document, created to help moderators decide what posts to delete and what users to punish.

The list and associated rules appear to be a clear embodiment of American anxieties, political concerns, and foreign policy values since 9/11, experts said, even though the DIO policy is meant to protect all Facebook users and applies to those who reside outside of the United States (the vast majority). Nearly everyone and everything on the list is considered a foe or threat by America or its allies: Over half of it consists of alleged foreign terrorists, free discussion of which is subject to Facebook's harshest censorship. The DIO policy and blacklist also place far looser prohibitions on commentary about predominantly white anti-government militias than on groups and individuals listed as terrorists, who are predominantly Middle Eastern, South Asian, and Muslim, or those said to be part of violent criminal enterprises, who are predominantly Black and Latino, the experts said.

The materials show Facebook offers "an iron fist for some communities and more of a measured hand for others," said Angel Diaz, a lecturer at the UCLA School of Law who has researched and written on the impact of Facebook's moderation policies on marginalized communities. Facebook's policy director for counterterrorism and dangerous organizations, Brian Fishman, said in a written statement that the company keeps the list secret because "[t]his is an adversarial space, so we try to be as transparent as possible, while also prioritizing security, limiting legal risks and preventing opportunities for groups to get around our rules." He added, "We don't want terrorists, hate groups or criminal organizations on our platform, which is why we ban them and remove content that praises, represents or supports them. A team of more than 350 specialists at Facebook is focused on stopping these organizations and assessing emerging threats. We currently ban thousands of organizations, including over 250 white supremacist groups at the highest tiers of our policies, and we regularly update our policies and organizations who qualify to be banned."

Facebook

Facebook To Act on Illegal Sale of Amazon Rainforest (bbc.com) 22

Facebook says it will begin clamping down on the illegal sale of protected areas of the Amazon rainforest on its site. From a report: The social media giant changed its policy following a BBC investigation into the practice. The new measures will apply only to conservation areas and not to publicly owned forest. And the move will be limited to the Amazon, not other rainforests and wildlife habitats across the world. According to a recent study from the think tank Ipam (Instituto de Pesquisa Ambiental da Amazônia), a third of all deforestation happens in publicly-owned forests in the Amazon. Facebook said it would not reveal how it planned to find the illegal ads but said it would "seek to identify and block new listings" in protected areas of the Amazon rainforest. In February, the BBC Our World documentary Selling the Amazon revealed that plots of rainforest as large as 1,000 football pitches were being listed on Facebook's classified ads service.
United Kingdom

Ex-minister Predicts 'Huge Battleground' Over UK's Plan To Set Internet Content Rules (techcrunch.com) 45

The former UK minister of state for what is now the digital and culture department, DCMS, has warned of the looming battle in parliament over the exact shape of incoming online safety legislation. From a report: In an interview with TechCrunch, Ed Vaizey -- a former Conservative Party MP, now Lord Vaizey of Didcot, who was head of the culture, comms and creative industries department, as it was then, between 2010 and 2016 -- predicted a huge tug-of-war to influence the scope of the Online Safety Bill, warning that parliamentarians everywhere will try to hang their own "hobby horse" on it. The risk of over-regulation or creating a disproportionate burden for startups vs tech giants is also real, Vaizey suggested, setting out several areas that he said would require a cautious approach.

"In theory it's just going to be the big platforms that will be regulated," he said of the scope of the Online Safety Bill, which was published in draft form back in May -- and which critics are warning will be catastrophic for free speech. "Some platforms that should be regulated could potentially not be regulated. But you're right that people are concerned that, in effect, there's a paradox -- that it could help the Facebooks of this world because the regulatory hurdles that get going might be too big. And if anyone is capable of being regulated it's Facebook, as opposed to a startup. So I think that's something we have to be very careful of.

"Secondly, although I support the principle of legal but harmful content being regulated I have no doubt at all that that is going to be the big battle in parliament. The balance between legal but harmful free speech is going to be a huge battleground. And it will be interesting to see in what form it survives.

"And thirdly -- I think, paradoxically -- everyone is going to try and hang their own particular hobby horse on this piece of legislation."

Facebook

Facebook's Success Was Built on Algorithms. Can They Also Fix It? (cnn.com) 70

Experts tell CNN that Facebook's algorithms could be improved. "It will, however, require something Facebook has so far appeared reluctant to offer (despite executive talking points): more transparency and control for users." Margaret Mitchell, who leads artificial intelligence ethics for AI model builder Hugging Face and formerly co-led Google's ethical AI team, thinks this could be done by allowing you to view details about why you're seeing what you're seeing on a social network, such as in response to the posts, ads, and other things you look at and interact with. "You can even imagine having some say in it. You might be able to select preferences for the kinds of things you want to be optimized for you," she said, such as how often you want to see content from your immediate family, high school friends, or baby pictures. All of those things may change over time. Why not let users control them? Transparency is key, she said, because it incentivizes good behavior from the social networks.

Another way social networks could be pushed in the direction of increased transparency is by increasing independent auditing of their algorithmic practices, according to Sasha Costanza-Chock, director of research and design at the Algorithmic Justice League. They envision this as including fully independent researchers, investigative journalists, or people inside regulatory bodies — not social media companies themselves, or companies they hire — who have the knowledge, skills, and legal authority to demand access to algorithmic systems in order to ensure laws aren't violated and best practices are followed.

James Mickens, a computer science professor at Harvard and co-director of the Berkman Klein Center's Institute for Rebooting Social Media, suggests looking to the ways elections can be audited without revealing private information about voters (such as who each person voted for) for insights about how algorithms may be audited and reformed. He thinks that could give some insights for building an audit system that would allow people outside of Facebook to provide oversight while protecting sensitive data. A big hurdle, experts say, to making meaningful improvements is social networks' current focus on the importance of engagement, or the amount of time users spend scrolling, clicking, and otherwise interacting with social media posts and ads... Changing this is tricky, experts said, though several agreed that it may involve considering the feelings users have when using social media and not just the amount of time they spend using it.

"Engagement is not a synonym for good mental health," said Mickens.

Facebook

Facebook VP Suggests a Fix: a Prompt Urging Teen Instagram Users to 'Take a Break' (engadget.com) 40

"Facebook is trying to mend its reputation in the wake of whistleblower Frances Haugen's testimony," reports Engadget, "and that includes promises of features lessening the potential harm for teens." CNN and Reuters report that Facebook Global Affairs VP Nick Clegg promised Instagram would introduce a "take a break" feature that encouraged teens to simply stop using the social network for a while.

Clegg didn't say when it would be ready, but this was clearly meant to reduce addiction and other unhealthy behavior.

The social media exec also said Facebook would "nudge" teens away from material in its apps that "may not be conducive to their well-being." He didn't provide specifics for this new approach. He did, however, suggest that Facebook's algorithms should be "held to account," including by regulation if needed, to be sure real-world results matched intentions... Breaks and nudges may reduce exposure to harmful content, but they won't remove the content in question. Clegg's statements also reflect a familiar strategy at Facebook. It likes to invite regulation, but only the regulation it's comfortable with. While the proposed changes could help, politicians may demand more — in part to prevent Facebook from dictating its own regulation.

According to Reuters, Clegg also "said he could not answer the question whether its algorithms amplified the voices of people who had attacked the U.S. Capitol on January 6th."
Facebook

Former Facebook Staffers React to Company's Unapologetic Response to Whistleblower (protocol.com) 70

"Facebook's efforts to undermine the testimony of whistleblower Frances Haugen began before she even left the Senate Commerce Committee hearing room Tuesday," reports Protocol.com: "Just pointing out the fact that @FrancesHaugen did not work on child safety or Instagram or research these issues and has no direct knowledge of the topic from her work at Facebook," spokesperson Andy Stone said in a tweet that ended up being read aloud by Republican Sen. Marsha Blackburn, during the hearing. Another statement from Policy Communications Director Lena Pietsch referred to Haugen dismissively as someone who "worked for the company for less than two years, had no direct reports" and "never attended a decision-point meeting with C-level executives."

For Nu Wexler, a former Facebook policy communications staffer, the anti-Haugen spin was overkill. "The statement they put out about Frances Haugen was beyond the pale," said Wexler, who also worked in policy communications at Google and Twitter. "As a former employee, I disagreed with what they said, and as a communications professional, I think it was really bad PR." The counterattack strategy has differed dramatically from the regretful responses Facebook has offered in past episodes, like the Cambridge Analytica scandal. In those cases, the company often responded with an apology and a plan. This time around, from Mark Zuckerberg on down, the company has been decidedly less apologetic, with Haugen as a case study for the new approach.

For some former Facebook employees watching from home, the experiment in public aggression is backfiring. From Wexler's point of view, Haugen demonstrated clear facility with the facts and familiarity with the industry. "They're going to have a hard time convincing people that she doesn't know what she's talking about," he said. Katie Harbath, a public policy director at Facebook for 10 years who left the company in March, said, "All these folks, whether they had direct reports or not, they all have perspective and expertise that should be heard...." Another former Facebook communications staffer called the company's response "a mistake." "It's not about her. The whole dialogue that's happening is not about whether she's a credible messenger or not," the former staffer said, before adding, "She is a pretty credible messenger...."

The remarks from Stone and Pietsch have also prompted former employees, some of whom held more senior roles during their time at Facebook, to publicly rally to Haugen's defense. "Well I was there for over 6 years, had numerous direct reports, and led many decision meetings with C-level execs, and I find the perspectives shared on the need for algorithmic regulation, research transparency, and independent oversight to be entirely valid for debate," tweeted Samidh Chakrabarti, who founded the civic integrity team Haugen worked on, and whose breakup she noted in her Senate testimony....

Facebook didn't respond to a question about why it's taking such an unapologetic approach toward Haugen's disclosures.

Harbath has a theory: "The other one wasn't working."

Python

Beating C and Java, Python Becomes the #1 Most Popular Programming Language, Says TIOBE (zdnet.com) 115

ZDNet reports that Python "is now the most popular language, according to one popularity ranking."

"For the first time in more than 20 years we have a new leader of the pack..." the TIOBE Index announced this month. "The long-standing hegemony of Java and C is over."

When Slashdot reached out to Guido van Rossum for a comment, he replied "I honestly don't know what the appropriate response is...! I am honored, and I want to thank the entire Python community for making Python so successful."

ZDNet reports: [I]t seems that Python is winning these days, in part because of the rise of data science and its ecosystem of machine-learning software libraries like NumPy, Pandas, Google's TensorFlow, and Facebook's PyTorch. Python is also an easy-to-learn language that has found a niche in high-end hardware, although less so mobile devices and the web — an issue that Python creator Guido van Rossum hopes to address through performance upgrades he's working on at Microsoft.

Tiobe, a Dutch software quality assurance company, has been tracking the popularity of programming languages for the past 20 years. Its rankings are based on search terms related to programming and are one measure of languages that developers should consider learning, along with IEEE Spectrum's list and a ranking produced by developer analyst RedMonk. JavaScript, the default for front-end web development, is always at the top of RedMonk's list. Tiobe's enterprise focus has seen Java and C dominate in recent years, but Python has been snapping at the heels of Java and has now overtaken it...

Python's move to top spot on the Tiobe index was a result of other languages falling in searches rather than Python rising. With an 11.27% share of searches, it was flat, while second-place language C fell 5.79 percentage points compared to October last year, down to 11.16%. Java made way for Python with a 2.11 percentage-point drop to 10.46%.
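The year-over-year movement is easy to reconstruct from the shares quoted above. The October 2021 shares and point changes are from the article; the implied October 2020 shares are derived by us.

```python
# Reconstructing the implied October 2020 TIOBE shares from the October 2021
# figures and the year-over-year percentage-point changes quoted above.
oct_2021 = {"Python": 11.27, "C": 11.16, "Java": 10.46}
change = {"Python": 0.00, "C": -5.79, "Java": -2.11}  # percentage points

oct_2020 = {lang: round(oct_2021[lang] - change[lang], 2) for lang in oct_2021}
print(oct_2020)  # {'Python': 11.27, 'C': 16.95, 'Java': 12.57}
```

A year earlier, C and Java both sat comfortably above Python; Python reached the top by standing still while the leaders fell.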

Other languages that made the top 10 in Tiobe's October 2021 index: C++, C#, Visual Basic, JavaScript, SQL, PHP, and Assembly Language. Also rising on a year-on-year basis and in the top 20 were Google-designed Go, number-crunching favorite MATLAB, and Fortran.

"Python, which started as a simple scripting language, as an alternative to Perl, has become mature," TIOBE says in announcing its new rankings.

"Its ease of learning, its huge amount of libraries, and its widespread use in all kinds of domains, has made it the most popular programming language of today. Congratulations Guido van Rossum!"
The Almighty Buck

136 Countries Agree To Minimum Corporate Tax Rate (cnn.com) 76

A group of 136 countries have agreed to a global treaty that would tax large multinationals at a minimum rate of 15% and require companies to pay taxes in the countries where they do business. CNN reports: Estonia, Hungary and -- most notably -- Ireland joined the agreement Thursday. It is now supported by all nations in the Organization for Economic Cooperation and Development and the G20. The countries that signed on to the international treaty represent more than 90% of global GDP. Four countries that participated in the talks -- Kenya, Nigeria, Pakistan and Sri Lanka -- have not yet joined the agreement. The Biden administration breathed new life into the global initiative earlier this year and secured the support of the G7 countries in June, paving the way for a preliminary deal in July. Ireland, which had declined to join the initial agreement in July, has a corporate tax rate of 12.5% -- a major factor in persuading companies such as Facebook, Apple and Google to locate their European headquarters in the country. Ireland signed up after the preliminary agreement was revised to remove a stipulation that rates should be set at a minimum of "at least 15%."

The new rate would apply to 1,556 multinationals based in Ireland, employing about 400,000 people. More than 160,000 businesses making less than $867 million in annual revenue and employing about 1.8 million people would still be taxed at 12.5%. Alongside a minimum corporate tax rate, the pact includes provisions to ensure that multinational companies pay tax where they generate sales and profits, and not just where they have a physical presence. That could have major ramifications for tech companies such as Google and Amazon, which have amassed vast profits in countries where they pay relatively little tax. The OECD expects implementation of the agreement to begin in 2023. But even with Ireland and other previous holdouts now on board, the deal still requires countries to pass domestic legislation.

Privacy

iPhone Apps No Better For Privacy Than Android, Oxford Study Finds (tomsguide.com) 22

An anonymous reader quotes a report from Tom's Guide: A new survey has reached a startling conclusion: iPhone apps tend to violate your privacy just as often as Android apps do. "Overall, we find that neither platform is clearly better than the other for privacy across the dimensions we studied," says the academic paper entitled "Are iPhones Really Better for Privacy?" and presented by researchers from the University of Oxford. "While it has been argued that the choice of smartphone architecture might protect user privacy, no clear winner between iOS and Android emerges from our analysis," the paper adds. "Data sharing for tracking purposes was common on both platforms." There's one big caveat regarding the new study: It was conducted before the introduction of iOS 14.5 in April 2021, which made opt-in to tracking and app privacy labels mandatory on iPhones.

The researchers analyzed the code, permissions and network traffic of 12,000 randomly selected free apps from each platform that had been updated or released in 2018 or later. Each app was run on a real device, either a first-generation iPhone SE running iOS 14.2 or a Google Nexus 5 running Android 7 Nougat. They found that nearly all (89%) of the Android apps contained at least one tracking library, which was almost always Google Play Services. The numbers weren't much lower on iOS, where 79% of apps had at least one tracking library, most likely Apple's own SKADNetwork, which tracks which ads a user clicks on. However, 62% of iOS apps also ran Google's AdMob ad tracking library, followed by 54% of iOS apps (and 58% of Android apps) running Google Firebase. Facebook trackers were in 28% of Android apps and 26% of iOS ones. In fact, most apps on either platform -- 90% of Android apps and more than 60% of iOS -- shared data with tracking companies owned by Google. Almost all tracking companies observed were based in the U.S. About 9.5% of iOS apps and 5% of Android ones used Chinese-based trackers; 7.5% of iOS apps and 2% of Android ones used Indian trackers.
The team commended Apple for making it possible for iPhone users to block the temporary advertising IDs that flag your phone to advertisers, but the team also saw an ulterior motive on Apple's part. "Apple's crackdown on Ad ID use could be interpreted as an attempt to divert revenue from Google and other advertising providers, and motivate the use of alternative monetization models -- which are more lucrative for Apple," the Oxford research paper states. "Apple has arguably placed a larger emphasis on privacy, seeking to gain a competitive advantage by appealing to privacy-concerned consumers."
Facebook

Facebook Says Some of Its Services Are Having Issues Again (theverge.com) 32

Instagram has been experiencing issues for many of us here at The Verge, but it turns out that the problem might be broader than that, according to a statement from Facebook. From a report: "We're aware that some people are having trouble accessing our apps and products," Facebook said in a tweet. "We're working to get things back to normal as quickly as possible and we apologize for any inconvenience."
Facebook

Facebook Bans Developer Behind Unfollow Everything Tool (theverge.com) 84

A developer who made a tool that let people automatically unfollow friends and groups on Facebook says he's been banned permanently from the social networking site. From a report: Louis Barclay was the creator of "Unfollow Everything," a browser extension that allowed Facebook users to essentially delete their News Feed by unfollowing all their connections at once. Facebook allows users to individually unfollow friends, groups, and pages, which removes their content from the News Feed, the algorithmically-controlled heart of Facebook. Barclay's tool automated this process, instantly wiping users' News Feeds.

[...] In response, Facebook sent Barclay a cease-and-desist letter earlier this year, saying he'd violated the site's terms of service by creating software that automated user interactions. Barclay says the company then "permanently disabled my Facebook and Instagram accounts" and "demanded that I agree to never again create tools that interact with Facebook or its other services."

Crime

Zodiac Expert Calls 'Bullshit' On Possible ID of Zodiac Killer (rollingstone.com) 30

"Tom Voigt, a Zodiac Killer expert and author who runs ZodiacKiller.com, pulls no punches when commenting on the story picked up by FoxNews that is now being posted at various news outlets including Slashdot," writes Slashdot reader ISayWeOnlyToBePolite. Rolling Stone spoke to Voigt on Wednesday about the bombshell report and why, in his opinion, it's "bullshit." From the article: By now obviously you've seen the news about the Zodiac Killer's identification. What's your take on it? Yeah, I've got about a million people on my website right now. It's all bullshit, by the way, just to get that out of the way. This is hot garbage. I don't know why it got any coverage at all. It was basically a press release.

Are you familiar with the Case Breakers? First of all, the funny thing is, I've never heard of any of these people that are these so-called experts. I have been doing this for 25 years and I've never heard of any of them. So there are some red flags right off the bat. And then the funny thing is, they're matching up lines on foreheads. No witness ever described lines on Zodiac's forehead. Those lines were simply added by the sketch artist to fill in the sketch. The amended sketch, which is supposed to look more like Zodiac, according to witnesses, doesn't really even have any lines. So they got rid of them. Because the witnesses were like, "We're not really happy with that sketch that we gave you a few days ago," it got changed. The lines went away. No witness ever described that.

What about their claim that Poste's name unlocks one of the Zodiac's ciphers? A lot of what they're typing and talking about is nonsense. These people, from what I've seen, don't really have any kind of a command of the basics of the Zodiac case. From what I've read, they've gotten their Zodiac information from the comments section at Facebook. They skipped the main article and went right to the comments, and they think they know everything about this. Maybe they saw the Fincher movie, but probably not. Or they turned it off after the two-hour mark or so.

If you had to put your money on one suspect, who would it be? Richard Gaikowski is my best bet. If I was an employer looking to hire the Zodiac, he'd probably have the most impressive resume in my eyes. But the reality is that Allen is the suspect you just can't quit. I just can't quit "Big Al," especially now that I'm going over all these old emails and tips and leads going back 25 years. And some of the stuff that was said to me is just mind-boggling. Yeah. If he wasn't the Zodiac, he might be responsible for some other murders.

Security

Navy Facebook Account Hacked To Stream 'Age of Empires' (vice.com) 37

An anonymous reader quotes a report from Motherboard: The U.S. Navy has lost control of the official Facebook page for its destroyer-class warship, the USS Kidd. Someone has hacked the page and, for the past two days, done nothing but stream Age of Empires. The first stream went on for four hours. As first reported by Task & Purpose, the USS Kidd lost control of its Facebook account at 10:26 p.m. on October 3. The destroyer-class warship then streamed Age of Empires for four hours under the headline "Hahahahaha." It's since streamed Age of Empires five more times, each time for at least an hour. Whoever is playing sucks, because they never make it past the Stone Age. As of this writing, the six videos are still up and watchable. The Navy confirmed to Task & Purpose that it had been hacked, adding: "We are currently working with Facebook technical support to resolve the issue."
