YouTube

YouTube Blocks 31st Ig Nobel Awards Ceremony, Citing Copyright on a Recording from 1914 (improbable.com) 130

The 31st annual Ig Nobel Prizes were awarded at a special ceremony on September 9th, announced the magazine responsible for the event, the Annals of Improbable Research.

But this week they made another announcement: "YouTube's notorious takedown algorithms are blocking the video of the 2021 Ig Nobel Prize ceremony. We have so far been unable to find a human at YouTube who can fix that. We recommend that you watch the identical recording on Vimeo."

Here's what triggered this: The ceremony includes bits of a recording (of tenor John McCormack singing "Funiculi, Funicula") made in the year 1914.

YouTube's takedown algorithm claims that the following corporations all own the copyright to that audio recording that was MADE IN THE YEAR 1914: "SME, INgrooves (on behalf of Emerald); Wise Music Group, BMG Rights Management (US), LLC, UMPG Publishing, PEDL, Kobalt Music Publishing, Warner Chappell, Sony ATV Publishing, and 1 Music Rights Societies."

Businesses

Apple Risks Losing Billions of Dollars Annually From Ruling (bloomberg.com) 61

Mark Gurman, reporting on Friday's ruling in the Epic v. Apple lawsuit: So how much does Apple stand to lose? That all comes down to how many developers try to bypass its payment system. Loup Ventures' Gene Munster, a longtime Apple watcher, put the range at $1 billion to $4 billion, depending on how many developers take advantage of the new policy. Apple depicted the ruling as a victory, signaling that it's not too worried about the financial impact. "The court has affirmed what we've known all along: The App Store is not in violation of antitrust law" and "success is not illegal," Apple said in a statement. Kate Adams, the iPhone maker's general counsel, called the ruling a "resounding victory" that "underscores the merit" of its business.

Apple's adversary in the trial -- Epic Games, the maker of Fortnite -- also contended that the judge sided with Apple. This "isn't a win for developers or for consumers," Epic Chief Executive Officer Tim Sweeney said on Twitter. [...] Apple made about $3.8 billion in U.S. revenue from games in 2020, most of which came from in-app purchases, according to estimates from Sensor Tower. But even if the ruling ends up costing Apple a few billion dollars a year, that's still a small fraction of its total revenue. In fiscal 2021 alone, the company is estimated to bring in more than $360 billion, meaning the change won't make or break its overall financial performance. And many developers may choose to stick to Apple's payment system so they don't have to build their own web payment platform.

More concerns were shared by the EFF in a thread on Twitter. "Disappointingly, a court found that Apple is not a monopolist in mobile gaming or in-app transactions, so its App Store commissions don't violate antitrust law. One bright spot: the court found Apple's gag rules on app developers violate California law...

"The court's opinion spells out many serious problems with today's mobile app ecosystem, such as false tensions between user choice and user privacy. Congress can help with real antitrust reform and new legal tools, and shouldn't let Apple's privacywashing derail that work."
Privacy

'Apple's Device Surveillance Plan Is a Threat To User Privacy -- And Press Freedom' (freedom.press) 213

The Freedom of the Press Foundation is calling Apple's plan to scan photos on user devices to detect known child sexual abuse material (CSAM) a "dangerous precedent" that "could be misused when Apple and its partners come under outside pressure from governments or other powerful actors." They join the EFF, whistleblower Edward Snowden, and many other privacy and human rights advocates in condemning the move. Advocacy Director Parker Higgins writes: Very broadly speaking, the privacy invasions come from situations where "false positives" are generated -- that is to say, an image or a device or a user is flagged even though there are no sexual abuse images present. These kinds of false positives could happen if the matching database has been tampered with or expanded to include images that do not depict child abuse, or if an adversary could trick Apple's algorithm into erroneously matching an existing image. (Apple, for its part, has said that an accidental false positive -- where an innocent image is flagged as child abuse material for no reason -- is extremely unlikely, which is probably true.)

The false positive problem most directly touches on press freedom issues when considering that first category, with adversaries that can change the contents of the database that Apple devices are checking files against. An organization that could add leaked copies of its internal records, for example, could find devices that held that data -- including, potentially, whistleblowers and journalists who worked on a given story. This could also reveal the extent of a leak if it is not yet known. Governments that could include images critical of their policies or officials could find dissidents that are exchanging those files.
[...]
Journalists, in particular, have increasingly relied on the strong privacy protections that Apple has provided even when other large tech companies have not. Apple famously refused to redesign its software to open the phone of an alleged terrorist -- not because they wanted to shield the content on a criminal's phone, but because they worried about the precedent it would set for other people who rely on Apple's technology for protection. How is this situation any different? No backdoor for law enforcement will be safe enough to keep bad actors from continuing to push it open just a little bit further. The privacy risks from this system are too extreme to tolerate. Apple may have had noble intentions with this announced system, but good intentions are not enough to save a plan that is rotten at its core.
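To make the false-positive concern concrete, here is a minimal sketch of a hash-matching scan in Python. It is deliberately simplified: Apple's announced system uses a perceptual NeuralHash with cryptographic matching rather than the exact SHA-256 stand-in below, and the database entry and file paths here are hypothetical.

    import hashlib

    # Hypothetical match list. In a real deployment these would be perceptual
    # hashes supplied by child-safety organizations; the scanner sees only
    # opaque values and cannot tell what they depict.
    MATCH_DATABASE = {
        "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
    }

    def file_hash(path):
        """Exact SHA-256 hash as a simplified stand-in for a perceptual hash."""
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def scan(paths):
        # Flag every file whose hash appears in the database. If an adversary
        # adds the hash of a leaked document, holders of that document get
        # flagged by exactly the same code path.
        return [p for p in paths if file_hash(p) in MATCH_DATABASE]

The point of the sketch is that the scanner trusts whatever is in the database; nothing in the mechanism itself distinguishes abuse imagery from a leaked memo or a protest photo.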

Electronic Frontier Foundation

Edward Snowden and EFF Slam Apple's Plans To Scan Messages and iCloud Images (macrumors.com) 55

Apple's plans to scan users' iCloud Photos library against a database of child sexual abuse material (CSAM) to look for matches, and to scan children's messages for explicit content, have come under fire from privacy whistleblower Edward Snowden and the Electronic Frontier Foundation (EFF). MacRumors reports: In a series of tweets, the prominent privacy campaigner and whistleblower Edward Snowden highlighted concerns that Apple is rolling out a form of "mass surveillance to the entire world" and setting a precedent that could allow the company to scan for any other arbitrary content in the future. Snowden also noted that Apple has historically been an industry leader in terms of digital privacy, and even refused to unlock an iPhone owned by Syed Farook, one of the shooters in the December 2015 attacks in San Bernardino, California, despite being ordered to do so by the FBI and a federal judge. Apple opposed the order, noting that it would set a "dangerous precedent."

The EFF, an eminent international non-profit digital rights group, has issued an extensive condemnation of Apple's move to scan users' iCloud libraries and messages, saying that it is extremely "disappointed" that a "champion of end-to-end encryption" is undertaking a "shocking about-face for users who have relied on the company's leadership in privacy and security." The EFF highlighted how various governments around the world have passed laws that demand surveillance and censorship of content on various platforms, including messaging apps, and that Apple's move to scan messages and iCloud Photos could be legally required to encompass additional materials or easily be widened. "Make no mistake: this is a decrease in privacy for all 'iCloud Photos' users, not an improvement," the EFF cautioned.

Programming

After YouTube-dl Incident, GitHub's DMCA Process Now Includes Free Legal Help (venturebeat.com) 30

"GitHub has announced a partnership with the Stanford Law School to support developers facing takedown requests related to the Digital Millennium Copyright Act (DMCA)," reports VentureBeat: While the DMCA may be better known as a law for protecting copyrighted works such as movies and music, it also has provisions (17 U.S.C. 1201) that criminalize attempts to circumvent copyright-protection controls — this includes any software that might help anyone infringe DMCA regulations. However, as with the countless spurious takedown notices delivered to online content creators, open source coders too have often found themselves in the DMCA firing line with little option but to comply with the request even if they have done nothing wrong. The problem, ultimately, is that freelance coders or small developer teams often don't have the resources to fight DMCA requests, which puts the balance of power in the hands of deep-pocketed corporations that may wish to use DMCA to stifle innovation or competition. Thus, GitHub's new Developer Rights Fellowship — in conjunction with Stanford Law School's Juelsgaard Intellectual Property and Innovation Clinic — seeks to help developers put in such a position by offering them free legal support.

The initiative follows some eight months after GitHub announced it was overhauling its Section 1201 claim review process in the wake of a takedown request made by the Recording Industry Association of America (RIAA), which had been widely criticized as an abuse of DMCA... [M]oving forward, whenever GitHub notifies a developer of a "valid takedown claim," it will present them with an option to request free independent legal counsel.

The fellowship will also be charged with "researching, educating, and advocating on DMCA and other legal issues important for software innovation," GitHub's head of developer policy Mike Linksvayer said in a blog post that also announced other related programs.

Explaining their rationale, GitHub's blog post argues that currently "When developers looking to learn, tinker, or make beneficial tools face a takedown claim under Section 1201, it is often simpler and safer to just fold, removing code from public view and out of the common good.

"At GitHub, we want to fix this."
Electronic Frontier Foundation

EFF Sues US Postal Service For Records About Covert Social Media Spying Program (eff.org) 57

The Electronic Frontier Foundation (EFF) filed a Freedom of Information Act (FOIA) lawsuit against the U.S. Postal Service and its inspection agency seeking records about a covert program to secretly comb through online posts of social media users before street protests, raising concerns about chilling the privacy and expressive activity of internet users. From the press release: Under an initiative called Internet Covert Operations Program, analysts at the U.S. Postal Inspection Service (USPIS), the Postal Service's law enforcement arm, sorted through massive amounts of data created by social media users to surveil what they were saying and sharing, according to media reports. Internet users' posts on Facebook, Twitter, Parler, and Telegram were likely swept up in the surveillance program. USPIS has not disclosed details about the program or any records responding to EFF's FOIA request asking for information about the creation and operation of the surveillance initiative. In addition to those records, EFF is also seeking records on the program's policies and analysis of the information collected, and communications with other federal agencies, including the Department of Homeland Security (DHS), about the use of social media content gathered under the program.

Media reports revealed that a government bulletin dated March 16 was distributed across DHS's state-run security threat centers, alerting law enforcement agencies that USPIS analysts monitored "significant activity regarding planned protests occurring internationally and domestically on March 20, 2021." Protests around the country were planned for that day, and locations and times were being shared on Parler, Telegram, Twitter, and Facebook, the bulletin said. "We're filing this FOIA lawsuit to shine a light on why and how the Postal Service is monitoring online speech. This lawsuit aims to protect the right to protest," said Houston Davidson, EFF public interest legal fellow. "The government has never explained the legal justifications for this surveillance. We're asking a court to order the USPIS to disclose details about this speech-monitoring program, which threatens constitutional guarantees of free expression and privacy."

Cellphones

Church Official Exposed Through America's 'Vast and Largely Unregulated Data-Harvesting' (nytimes.com) 101

The New York Times' On Tech newsletter shares a thought-provoking story: This week, a top official in the Roman Catholic Church's American hierarchy resigned after a news site said that it had data from his cellphone that appeared to show the administrator using the L.G.B.T.Q. dating app Grindr and regularly going to gay bars. Journalists had access to data on the movements and digital trails of his mobile phone for parts of three years and were able to retrace where he went.

I know that people will have complex feelings about this matter. Some of you may believe that it's acceptable to use any means necessary to determine when a public figure is breaking his promises, including when it's a priest who may have broken his vow of celibacy. To me, though, this isn't about one man. This is about a structural failure that allows real-time data on Americans' movements to exist in the first place and to be used without our knowledge or true consent. This case shows the tangible consequences of practices by America's vast and largely unregulated data-harvesting industries. The reality in the United States is that there are few legal or other restrictions to prevent companies from compiling the precise locations of where we roam and selling that information to anyone.

This data is in the hands of companies that we deal with daily, like Facebook and Google, and also in the hands of information-for-hire middlemen that we never directly interact with. This data is often packaged in bulk and is anonymous in theory, but it can often be traced back to individuals, as the tale of the Catholic official shows...

Losing control of our data was not inevitable. It was a choice — or rather a failure over years by individuals, governments and corporations to think through the consequences of the digital age.

We can now choose a different path.

"Data brokers are the problem," writes the EFF, arguing that the incident "shows once again how easy it is for anyone to take advantage of data brokers' stores to cause real harm." This is not the first time Grindr has been in the spotlight for sharing user information with third-party data brokers... But Grindr is just one of countless apps engaging in this exact kind of data sharing. The real problem is the many data brokers and ad tech companies that amass and sell this sensitive data without anything resembling real users' consent.

Apps and data brokers claim they are only sharing so-called "anonymized" data. But that's simply not possible. Data brokers sell rich profiles with more than enough information to link sensitive data to real people, even if the brokers don't include a legal name. In particular, there's no such thing as "anonymous" location data. Data points like one's home or workplace are identifiers themselves, and a malicious observer can connect movements to these and other destinations. Another piece of the puzzle is the ad ID, another so-called "anonymous" label that identifies a device. Apps share ad IDs with third parties, and an entire industry of "identity resolution" companies can readily link ad IDs to real people at scale.

All of this underlines just how harmful a collection of mundane-seeming data points can become in the wrong hands... That's why the U.S. needs comprehensive data privacy regulation more than ever. This kind of abuse is not inevitable, and it must not become the norm.
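A toy illustration of the re-identification EFF describes, in Python with entirely made-up data: no name appears in the feed, yet an ad ID's two most frequent locations typically pin down one person.

    from collections import Counter

    # Hypothetical feed of "anonymized" pings: (ad_id, place).
    pings = [
        ("ad-123", "home:221 Elm St"), ("ad-123", "home:221 Elm St"),
        ("ad-123", "work:St Mary's Rectory"), ("ad-123", "work:St Mary's Rectory"),
        ("ad-123", "bar:The Roundup"),
        ("ad-456", "home:19 Oak Ave"),
    ]

    def top_places(ad_id, n=2):
        # An ad ID's n most frequent places -- usually home and work.
        visits = Counter(place for i, place in pings if i == ad_id)
        return {place for place, _ in visits.most_common(n)}

    def reidentify(home, work):
        # An observer who already knows a target's home and work can pick the
        # matching ad ID out of the feed, then read off everywhere else it went.
        return [i for i in {i for i, _ in pings}
                if {home, work} <= top_places(i)]

    print(reidentify("home:221 Elm St", "work:St Mary's Rectory"))  # ['ad-123']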

Crime

A Threat to Privacy in the Expanded Use of License Plate-Scanning Cameras? (yahoo.com) 149

Long-time Slashdot reader BigVig209 shares a Chicago Tribune report "on how suburban police departments in the Chicago area use license plate cameras as a crime-fighting tool." Critics of the cameras note that only a tiny percentage of the billions of plates photographed lead to an arrest, and that the cameras generally haven't been shown to prevent crime. More importantly, they say, the devices are unregulated, track innocent people and can be misused to invade drivers' privacy. The controversy comes as suburban police departments continue to expand the use of the cameras to combat rising crime. Law enforcement officials say they are taking steps to safeguard the data. But privacy advocates say the state should pass a law to guard against improper use of a nationwide surveillance system operated by private companies.

Across the Chicago area, one survey by the nonprofit watchdog group MuckRock found 88 cameras used by more than two dozen police agencies. In response to a surge in shootings, after much delay, state police are taking steps to add the cameras to area expressways. In the northwest suburbs, Vernon Hills and Niles are among several departments that have added license plate cameras recently. The city of Chicago has ordered more than 200 cameras for its squad cars. In Indiana, the city of Hammond has taken steps to record nearly every vehicle that comes into town.

Not all police like the devices. In the southwest suburbs, Darien and La Grange had issues in years past with the cameras making false readings, and some officers stopped using them...

Homeowner associations may also tie their cameras into the systems, which is what led to the arrest in Vernon Hills. One of the leading sellers of such cameras, Vigilant Solutions, a part of Chicago-based Motorola Solutions, has collected billions of license plate numbers in its National Vehicle Location Service. The database shares information from thousands of police agencies, and can be used to find cars across the country... Then there is the potential for abuse by police. One investigation found that officers nationwide misused agency databases hundreds of times, to check on ex-girlfriends, romantic rivals, or perceived enemies. To address those concerns, 16 states have passed laws restricting the use of the cameras.

The article cites an EFF survey which found 99.5% of scanned plates weren't under suspicion — "and that police shared their data with an average of 160 other agencies."

"Two big concerns the American Civil Liberties Union has always had about the cameras are that the information can be used to track the movements of the general population, and often is sold by operators to third parties like credit and insurance companies."
Electronic Frontier Foundation

'Golden Age of Surveillance', as Police Make 112,000 Data Requests in 6 Months (newportri.com) 98

"When U.S. law enforcement officials need to cast a wide net for information, they're increasingly turning to the vast digital ponds of personal data created by Big Tech companies via the devices and online services that have hooked billions of people around the world," reports the Associated Press: Data compiled by four of the biggest tech companies shows that law enforcement requests for user information — phone calls, emails, texts, photos, shopping histories, driving routes and more — have more than tripled in the U.S. since 2015. Police are also increasingly savvy about covering their tracks so as not to alert suspects of their interest... In just the first half of 2020 — the most recent data available — Apple, Google, Facebook and Microsoft together fielded more than 112,000 data requests from local, state and federal officials. The companies agreed to hand over some data in 85% of those cases. Facebook, including its Instagram service, accounted for the largest number of disclosures.

Consider Newport, a coastal city of 24,000 residents that attracts a flood of summer tourists. Fewer than 100 officers patrol the city — but they make multiple requests a week for online data from tech companies. That's because most crimes — from larceny and financial scams to a recent fatal house party stabbing at a vacation rental booked online — can be at least partly traced on the internet. Tech providers, especially social media platforms, offer a "treasure trove of information" that can help solve them, said Lt. Robert Salter, a supervising police detective in Newport.

"Everything happens on Facebook," Salter said. "The amount of information you can get from people's conversations online — it's insane."

As ordinary people have become increasingly dependent on Big Tech services to help manage their lives, American law enforcement officials have grown far more savvy about technology than they were five or six years ago, said Cindy Cohn, executive director of the Electronic Frontier Foundation, a digital rights group. That's created what Cohn calls "the golden age of government surveillance." Not only has it become far easier for police to trace the online trails left by suspects, they can also frequently hide their requests by obtaining gag orders from judges and magistrates. Those orders block Big Tech companies from notifying the target of a subpoena or warrant of law enforcement's interest in their information — contrary to the companies' stated policies...

Nearly all big tech companies — from Amazon to rental sites like Airbnb, ride-hailing services like Uber and Lyft and service providers like Verizon — now have teams to respond...

Cohn says American law is still premised on the outdated idea that valuable data is stored at home — and can thus be protected by precluding home searches without a warrant. At the very least, Cohn suggests more tech companies should be using encryption technology to protect data access without the user's key.
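Cohn's suggestion amounts to client-side encryption: if the key never leaves the user's device, the provider holds nothing useful to disclose. A minimal sketch using the Python cryptography library (the message and scenario are invented for illustration):

    # pip install cryptography
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # stays on the user's device, never uploaded
    box = Fernet(key)

    ciphertext = box.encrypt(b"meet at the courthouse at 6pm")
    # A subpoena served on the provider can yield only this ciphertext;
    # without the user's key it is indistinguishable from random bytes.
    plaintext = box.decrypt(ciphertext)  # possible only where the key lives
    assert plaintext == b"meet at the courthouse at 6pm"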

But Newport supervising police detective Lt. Robert Salter supplied his own answer for people worried about how police officers are requesting more and more data. "Don't commit crimes and don't use your computer and phones to do it."
Electronic Frontier Foundation

EFF Argues 'If Not Overturned, a Bad Copyright Decision Will Lead Many Americans to Lose Internet Access' (eff.org) 89

The EFF's senior staff attorney and their legal intern are warning that a bad copyright decision by a district court judge could lead many Americans to lose their internet access.

"In going after ISPs for the actions of just a few of their users, Sony Music, other major record labels, and music publishing companies have found a way to cut people off of the internet based on mere accusations of copyright infringement." When these music companies sued Cox Communications, an ISP, the court got the law wrong. It effectively decided that the only way for an ISP to avoid being liable for infringement by its users is to terminate a household or business's account after a small number of accusations — perhaps only two. The court also allowed a damages formula that can lead to nearly unlimited damages, with no relationship to any actual harm suffered.

If not overturned, this decision will lead to an untold number of people losing vital internet access as ISPs start to cut off more and more customers to avoid massive damages...

The district court agreed with Sony that Cox is responsible when its subscribers — home and business internet users — infringe the copyright in music recordings by sharing them on peer-to-peer networks. It effectively found that Cox didn't terminate accounts of supposedly infringing subscribers aggressively enough. An earlier lawsuit found that Cox wasn't protected by the Digital Millennium Copyright Act's (DMCA) safe harbor provisions that protect certain internet intermediaries, including ISPs, if they comply with the DMCA's requirements. One of those requirements is implementing a policy of terminating "subscribers and account holders... who are repeat infringers" in "appropriate circumstances." The court ruled in that earlier case that Cox didn't terminate enough customers who had been accused of infringement by the music companies.

In this case, the same court found that Cox was on the hook for the copyright infringement of its customers and upheld the jury verdict of $1 billion in damages — by far the largest amount ever awarded in a copyright case.

The district court got the law wrong... An ISP can be contributorily liable if it knew that a customer infringed on someone else's copyright but didn't take "simple measures" available to it to stop further infringement. Judge O'Grady's jury instructions wrongly implied that because Cox didn't terminate infringing users' accounts, it failed to take "simple measures." But the law doesn't require ISPs to terminate accounts to avoid liability. The district court improperly imported a termination requirement from the DMCA's safe harbor provision (which was already knocked out earlier in the case). In fact, the steps Cox took short of termination actually stopped most copyright infringement — a fact the district court simply ignored.

The district court also got it wrong on vicarious liability... [T]he court decided that because Cox could terminate accounts accused of copyright infringement, it had the ability to supervise those accounts. But that's not how other courts have ruled. For example, the Ninth Circuit decided in 2019 that Zillow was not responsible when some of its users uploaded copyrighted photos to real estate listings, even though Zillow could have terminated those users' accounts. In reality, ISPs don't supervise the Internet activity of their users. That would require a level of surveillance and control that users won't tolerate, and that EFF fights against every day.

The consequence of getting the law wrong on secondary liability here, combined with the $1 billion damage award, is that ISPs will terminate accounts more frequently to avoid massive damages, and cut many more people off from the internet than is necessary to actually address copyright infringement...

They also argue that the termination of accounts is "overly harsh in the case of most copyright infringers" — especially in a country where millions have only one choice for broadband internet access. "Being effectively cut off from society when an ISP terminates your account is excessive, given the actual costs of non-commercial copyright infringement to large corporations like Sony Music." It's clear that Judge O'Grady misunderstood the impact of losing Internet access. In a hearing on Cox's earlier infringement case in 2015, he called concerns about losing access "completely hysterical," and compared them to "my son complaining when I took his electronics away when he watched YouTube videos instead of doing homework."
Electronic Frontier Foundation

PayPal Shuts Down Long-Time Tor Supporter With No Recourse (eff.org) 152

An anonymous reader quotes a report from the Electronic Frontier Foundation: Larry Brandt, a long-time supporter of internet freedom, used his nearly 20-year-old PayPal account to put his money where his mouth is. His primary use of the payment system was to fund servers to run Tor nodes, routing internet traffic in order to safeguard privacy and avoid country-level censorship. Now Brandt's PayPal account has been shut down, leaving many questions unanswered and showing how financial censorship can hurt the cause of internet freedom around the world.
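For context, the kind of server Brandt was funding joins the Tor network with only a few lines of configuration. A minimal illustrative torrc for a non-exit relay might look like this (standard Tor option names; the values are placeholders):

    # Minimal non-exit Tor relay configuration (illustrative)
    Nickname ExampleRelay
    ORPort 9001
    ContactInfo operator@example.com
    ExitPolicy reject *:*    # middle relay only: never exits to the open internet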

Brandt first discovered his PayPal account was restricted in March of 2021. Brandt reported to EFF: "I tried to make a payment to the hosting company for my server lease in Finland. My account wouldn't work. I went to my PayPal info page which displayed a large vertical banner announcing my permanent ban. They didn't attempt to inform me via email or phone -- just the banner." Brandt was unable to get the issue resolved directly through PayPal, and so he then reached out to EFF. [...] We found no evidence of wrongdoing that would warrant shutting down his account, and we communicated our concerns to PayPal. Given that the overwhelming majority of transactions on Brandt's account were payments for servers running Tor nodes, EFF is deeply concerned that Brandt's account was targeted for shut down specifically as a result of his activities supporting Tor.

We reached out to PayPal for clarification, to urge them to reinstate Brandt's account, and to educate them about Tor and its value in promoting freedom and privacy globally. PayPal denied that the shutdown was related to the concerns about Tor, claiming only that "the situation has been determined appropriately" and refusing to offer a specific explanation. After several weeks, PayPal has still refused to reinstate Brandt's account. [...] EFF is calling on PayPal to do better by its customers, and that starts by embracing the Santa Clara principles [which attempt to guide companies in centering human rights in their decisions to ban users or take down content]. Specifically, we are calling on them to: publish a transparency report, provide meaningful notice to users, and adopt a meaningful appeal process.
The Tor Project said in an email: "This is the first time we have heard about financial persecution for defending internet freedom in the Tor community. We're very concerned about PayPal's lack of transparency, and we urge them to reinstate this user's account. Running relays for the Tor network is a daily activity for thousands of volunteers and relay associations around the world. Without them, there is no Tor -- and without Tor, millions of users would not have access to the uncensored internet."

Brandt says he's not backing down and is still committed to supporting the Tor network to pay for servers around the world using alternative means. "Tor is of critical importance for anyone requiring anonymity of location or person," says Brandt. "I'm talking about millions of people in China, Iran, Syria, Belarus, etc. that wish to communicate outside their country but have prohibitions against such activities. We need more incentives to add to the Tor project, not fewer."
The Courts

College Student Sues Proctorio After Source Code Copyright Claim (theverge.com) 35

The Electronic Frontier Foundation (EFF) has filed a lawsuit against the remote testing company Proctorio on behalf of Miami University student Erik Johnson. The Verge reports: The lawsuit is intended to "quash a campaign of harassment designed to undermine important concerns" about the company's remote test-proctoring software, according to the EFF, and addresses the company's behavior toward Johnson in September of last year. After Johnson found out that he'd need to use the software for two of his classes, he dug into the source code of Proctorio's Chrome extension and made a lengthy Twitter thread criticizing its practices -- including links to excerpts of the source code, which he'd posted on Pastebin. Proctorio CEO Mike Olsen sent Johnson a direct message on Twitter requesting that he remove the code from Pastebin, according to screenshots viewed by The Verge. After Johnson refused, Proctorio filed a copyright takedown notice, and three of the tweets were removed. (They were reinstated after TechCrunch reported on the controversy.)

In its lawsuit, the EFF is arguing that Johnson made fair use of Proctorio's code and that the company's takedown "interfered with Johnson's First Amendment right." "Copyright holders should be held liable when they falsely accuse their critics of copyright infringement, especially when the goal is plainly to intimidate and undermine them," said EFF Staff Attorney Cara Gagliano in a statement. "I'm doing this to stand up against student surveillance, as well as abuses of copyright law," Johnson told The Verge. "This isn't the first, and won't be the last time a company abuses copyright law to try and make criticism more difficult. If nobody calls out this abuse of power now, it'll just keep happening."

The Courts

Proctorio Sued For Using DMCA To Take Down a Student's Critical Tweets (techcrunch.com) 43

A university student is suing exam proctoring software maker Proctorio to "quash a campaign of harassment" against critics of the company, including an accusation that the company misused copyright laws to remove his tweets that were critical of the software. From a report: The Electronic Frontier Foundation, which filed the lawsuit this week on behalf of Miami University student Erik Johnson, who also does security research on the side, accused Proctorio of having "exploited the DMCA to undermine Johnson's commentary." Twitter hid three of Johnson's tweets after Proctorio filed a copyright takedown notice under the Digital Millennium Copyright Act, or DMCA, alleging that three of Johnson's tweets violated the company's copyright. Schools and universities have increasingly leaned on proctoring software during the pandemic to invigilate student exams, albeit virtually. Further reading: Proctorio Is Using Racist Algorithms To Detect Faces; Cheating-Detection Software Provokes 'School-Surveillance Revolt'; and Students Are Easily Cheating 'State-of-the-Art' Test Proctoring Tech.
Privacy

EFF Partners With DuckDuckGo (eff.org) 42

The Electronic Frontier Foundation (EFF) today announced it has enhanced its groundbreaking HTTPS Everywhere browser extension by incorporating rulesets from DuckDuckGo Smarter Encryption. According to the digital rights group's press release, HTTPS Everywhere is "a collaboration with The Tor Project and a key component of EFF's effort to encrypt the web and make the Internet ecosystem safe for users and website owners." From the press release: "DuckDuckGo Smarter Encryption has a list of millions of HTTPS-encrypted websites, generated by continually crawling the web instead of through crowdsourcing, which will give HTTPS Everywhere users more coverage for secure browsing," said Alexis Hancock, EFF Director of Engineering and manager of HTTPS Everywhere and Certbot web encrypting projects. "We're thrilled to be partnering with DuckDuckGo as we see HTTPS become the default protocol on the net and contemplate HTTPS Everywhere's future."

EFF began building and maintaining a crowd-sourced list of encrypted HTTPS versions of websites for a free browser extension -- HTTPS Everywhere -- which automatically takes users to them. That keeps users' web searching, pages visited, and other private information encrypted and safe from trackers and data thieves that try to intercept and steal personal information in transit from their browser. [...] DuckDuckGo, a privacy-focused search engine, also joined the effort with Smarter Encryption to help users browse securely by detecting unencrypted, non-secure HTTP connections to websites and automatically upgrading them to encrypted connections. With more domain coverage in Smarter Encryption, HTTPS Everywhere users are provided even more protection. HTTPS Everywhere rulesets will continue to be hosted through this year, giving our partners who use them time to adjust. We will stop taking new requests for domains to be added at the end of May.
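Mechanically, what both efforts do is simple: consult a list of hosts known to serve the same content over HTTPS and rewrite the URL scheme before the request leaves the browser. A rough Python sketch of that upgrade step (the host list is a made-up stand-in for the Smarter Encryption data):

    from urllib.parse import urlsplit, urlunsplit

    # Hypothetical excerpt of an upgrade list such as DuckDuckGo Smarter
    # Encryption: hosts verified to serve the same content over HTTPS.
    UPGRADABLE = {"example.com", "www.example.com"}

    def upgrade(url):
        # Rewrite http:// to https:// for hosts on the upgrade list.
        parts = urlsplit(url)
        if parts.scheme == "http" and parts.hostname in UPGRADABLE:
            return urlunsplit(("https",) + parts[1:])
        return url

    print(upgrade("http://example.com/login"))  # https://example.com/login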

Electronic Frontier Foundation

Privacy Advocate Confronts ACLU Over Its Use of Google and Facebook's Targeted Advertising (twitter.com) 20

Ashkan Soltani was the Chief Technologist of America's Federal Trade Commission in 2014 — and earlier was a staff technologist in its Division of Privacy and Identity Protection, helping investigate tech companies including Google and Facebook.

Friday on Twitter he accused another group of privacy violations: the nonprofit rights organization, the American Civil Liberties Union. Yesterday, the ACLU updated their privacy statement to finally disclose that they share constituent information with 'service providers' like Facebook for targeted advertising, flying in the face of the org's public advocacy and statements.

In fact, I was retained by the ACLU last summer to perform a privacy audit after concerns were raised internally regarding their data sharing practices. I only agreed to do this work on the promise by ACLU's Executive Director that the findings would be made public. Unfortunately, after reviewing my findings, the ACLU decided against publishing my report and instead sat on it for ~6 months before quietly updating their terms of service and privacy policy without explanation for the context or motivations for doing so. While I'm bound by a nondisclosure agreement to not disclose the information I uncovered or my specific findings, I can say with confidence that the ACLU's updated privacy statements do not reflect the full picture of their practices.

For example, public transparency data from Google shows that the ACLU has paid Google nearly half a million dollars to deliver targeted advertisements since 2018 (when the data first was made public). The ACLU also opted to disclose only that its advertising relationship with Facebook began in 2021, when in truth the relationship spans back years, totaling over $5 million in ad spend. These relationships fly against the principles and public statements of the ACLU regarding transparency, control, and disclosure before use, even as the organization claims to be a strong advocate for privacy rights at the federal and state level. In fact, the NY Attorney General conducted an inquiry into whether the ACLU had violated its promises to protect the privacy of donors and members in 2004, the results of which many aren't aware. And to be clear, the practices described would very much constitute a 'sale' of members' PII under the California Privacy Rights Act (CPRA).

The irony is not lost on me that the ACLU vehemently opposed the CPRA — the toughest state privacy law in the country — when it was proposed. While I have tremendous respect for the work the ACLU and other NGOs do, it's important that nonprofits are bound by the same privacy standards they espouse for everyone else. (Full disclosure: I'm on the EFF advisory board and was recently invited to join EPIC's board.)

My experience with the ACLU further amplifies the need to have strong legal privacy protections that apply to nonprofits as well as businesses — partially since many of the underlying practices, particularly in the area of fundraising and advocacy, are similar if not worse.

Soltani also re-tweeted an interesting response from Alex Fowler, a former EFF VP who was also Mozilla's chief privacy officer for three years: I'm reminded of EFF co-founder John Gilmore telling me about the Coders' Code: If you find a bug or vulnerability, tell the coder. If coder ignores you or refuses to fix the issue, tell the users.
Open Source

Richard Stallman's Return Denounced by the EFF, Tor Project, Mozilla, and the Creator of Rust (itwire.com) 640

Sunday IT Wire counted up the number of signatories on two open letters, one opposing Richard Stallman's return to the FSF and one supporting it.

- The pro-Stallman letter had 3,632 individual signers.
- The anti-Stallman letter had 2,812 individual signers (plus 48 companies and organizations).

But the question of Stallman's leadership has now also arisen in the GCC community:

A long-time developer of GCC, the compiler created by the GNU Project and used in Linux distributions, has issued a call for the removal of Free Software Foundation founder Richard Stallman from the GCC steering committee. Nathan Sidwell [also a software engineer at Facebook] said in a post directed to the committee that if it was unwilling to remove Stallman, then the panel should explain why it was not able to do so.

Stallman is also the founder of the GNU Project and the original author of GCC.

"RMS [Stallman] is no longer a developer of GCC, the most recent commit I can find regards SCO in 2003," Sidwell wrote in a long email. "Prior to that there were commits in 1997, but significantly less than 1994 and earlier. GCC's implementation language is now C++, which I believe RMS neither uses nor likes.

"When was RMS' most recent positive input to the GCC project? Even if it was recent and significant, that doesn't mean his toxic behaviour should be accepted."

Meanwhile, the following groups have also issued statements opposing Stallman's return to the FSF:

- Mozilla: We can't demand better of the internet if we don't demand better of our leaders, colleagues and ourselves. We're with the Open Source Diversity Community, Outreachy & the Software Conservancy project in supporting this petition.
- The Tor Project: The Tor Project is joining calls for Richard M. Stallman to be removed from board, staff, volunteer, and other leadership positions in the FOSS community, including the Free Software Foundation and the GNU Project.
- Rust creator Graydon Hoare: He's been saying sexist shit & driving women away for decades. He can't change, the FSF board knows it, is sending a "sexism doesn't matter" message. This is bad leadership and I'm sad about all of it, agree with calls to resign.

If someone is a public leader their public behaviour matters. I don't criticize private individuals here and I don't think twitter-justice is especially nuanced. But this is so far over the line, such a stupid and tone-deaf choice, and it is about community leadership.

The EFF: We at EFF are profoundly disappointed to hear of the re-election of Richard Stallman to a leadership position at the Free Software Foundation, after a series of serious accusations of misconduct led to his resignation as president and board member of the FSF in 2019. We are also disappointed that this was done despite no discernible steps taken by him to be accountable for, much less make amends for, his past actions or those who have been harmed by them. Finally, we are also disturbed by the secretive process of his re-election, and how it was belatedly conveyed to FSF's staff and supporters.

Stallman's re-election sends a wrong and hurtful message to the free software movement, as well as those who have left that movement because of Stallman's previous behavior.

Free software is a vital component of an open and just technological society: its key institutions and individuals cannot place misguided feelings of loyalty above their commitment to that cause. The movement for digital freedom is larger than any one individual contributor, regardless of their role. Indeed, we hope that this moment can be an opportunity to bring in new leaders and new ideas to the free software movement.

We urge the voting members of the FSF to call a special meeting to reconsider this decision, and we also call on Stallman to step down: for the benefit of the organization, the values it represents, and the diversity and long-term viability of the free software movement as a whole.

Finally, the Free Software Foundation itself has now pinned the following tweet at the top of its Twitter feed: No LibrePlanet organizers (staff or volunteer), speakers, award winners, exhibitors, or sponsors were made aware of Richard Stallman's announcement until it was public.
Social Networks

Stricter Rules for Internet Platforms? What are the Alternatives... (acm.org) 83

A law professor serving on the EFF's board of directors (and advisory boards for the Electronic Privacy Information Center and the Center for Democracy and Technology) offers this analysis of "the push for stricter rules for internet platforms," reviewing proposed changes to the liability-limiting Section 230 of the Communications Decency Act — and speculating about what the changes would accomplish: Short of repeal, several initiatives aim to change section 230. Eleven bills have been introduced in the Senate and nine in the House of Representatives to amend section 230 in various ways.... Some would widen the categories of harmful conduct for which section 230 immunity is unavailable. At present, section 230 does not apply to user-posted content that violates federal criminal law, infringes intellectual property rights, or facilitates sex trafficking. One proposal would add to this list violations of federal civil laws.

Some bills would condition section 230 immunity on compliance with certain conditions or make it unavailable if the platforms engage in behavioral advertising. Others would require platforms to spell out their content moderation policies with particularity in their terms of service (TOS) and would limit section 230 immunity to TOS violations. Still others would allow users whose content was taken down in "bad faith" to bring a lawsuit to challenge this and be awarded $5,000 if the challenge was successful. Some bills would impose due process requirements on platforms concerning removal of user-posted content. Other bills seek to regulate platform algorithms in the hope of stopping the spread of extremist content or in the hope of eliminating biases...

Neither legislation nor an FCC rule-making may be necessary to significantly curtail section 230 as a shield from liability. Conservative Justice Thomas has recently suggested a reinterpretation of section 230 that would support imposing liability on Internet platforms as "distributors" of harmful content... Section 230, after all, shields these services from liability as "speakers" and "publishers," but is silent about possible "distributor" liability. Endorsing this interpretation would be akin to adopting the notice-and-takedown rules that apply when platforms host user-uploaded files that infringe copyrights.

Thanks to Slashdot reader Beeftopia for sharing the article, which ultimately concludes:

- Notice-and-takedown regimes have long been problematic because false or mistaken notices are common and platforms often quickly take down challenged content, even if it is lawful, to avoid liability...

- For the most part, these platforms promote free speech interests of their users in a responsible way. Startup and small nonprofit platforms would be adversely affected by some of the proposed changes insofar as the changes would enable more lawsuits against platforms for third-party content. Fighting lawsuits is costly, even if one wins on the merits.

- Much of the fuel for the proposed changes to section 230 has come from conservative politicians who are no longer in control of the Senate.

- The next Congress will have a lot of work to do. Section 230 reform is unlikely to be a high priority in the near term. Yet, some adjustments to that law seem quite likely over time because platforms are widely viewed as having too much power over users' speech and are not transparent or consistent about their policies and practices.

Social Networks

Can WhatsApp Stop Spreading Misinformation Without Compromising Encryption? (qz.com) 149

"WhatsApp, the Facebook-owned messaging platform used by 2 billion people largely in the global south, has become a particularly troublesome vector for misinformation," writes Quartz — though it's not clear what the answer is: The core of the problem is its use of end-to-end encryption, a security measure that garbles users' messages while they travel from one phone to another so that no one other than the sender and the recipient can read them. Encryption is a crucial privacy protection, but it also prevents WhatsApp from going as far as many of its peers to moderate misinformation. The app has taken some steps to limit the spread of viral messages, but some researchers and fact-checkers argue it should do more, while privacy purists worry the solutions will compromise users' private conversations...

In April 2020, WhatsApp began slowing the spread of "highly forwarded messages," the smartphone equivalent of 1990s chain emails. If a message has already been forwarded five times, you can only forward it to one person or group at a time. WhatsApp claims that simple design tweak cut the spread of viral messages by 70%, and fact-checkers have cautiously cheered the change. But considering that all messages are encrypted, it's impossible to know how much of an impact the cut had on misinformation, as opposed to more benign content like activist organizing or memes. Researchers who joined and monitored several hundred WhatsApp groups in Brazil, India, and Indonesia found that limiting message forwarding slows down viral misinformation, but doesn't necessarily limit how far the messages eventually spread....
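The forwarding limit is purely client-side metadata logic, which is how it coexists with end-to-end encryption: the server never reads the message; the app enforces the rule. A rough Python sketch of the behavior as described (class and field names invented):

    FORWARD_LIMIT = 5  # the "highly forwarded" threshold described above

    class Message:
        def __init__(self, body, forward_count=0):
            self.body = body                    # encrypted in transit, never read by the server
            self.forward_count = forward_count  # metadata that travels with the message

    def forward(msg, recipients):
        # Highly forwarded messages may go to only one chat at a time,
        # adding friction to virality without inspecting content.
        if msg.forward_count >= FORWARD_LIMIT and len(recipients) > 1:
            raise ValueError("highly forwarded: forward to one chat at a time")
        return Message(msg.body, msg.forward_count + 1)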

This isn't just a semantic argument, says EFF strategy director Danny O'Brien. Even the smallest erosion of encryption protections gives Facebook a toehold to begin scanning messages in a way that could later be abused, and protecting the sanctity of encryption is worth giving up a potential tool for curbing misinformation. "This is a consequence of a secure internet," O'Brien says. "Dealing with the consequences of that is going to be a much more positive step than dealing with the consequences of an internet where no one is secure and no one is private...."

No matter what WhatsApp does, it will have to contend with dueling constituencies: the privacy hawks who see the app's encryption as its most important feature, and the fact-checkers who are desperate for more tools to curb the spread of misinformation on a platform that counts a quarter of the globe among its users.

Whatever Facebook decides will have widespread consequences in a world witnessing the simultaneous rise of fatal lies and techno-authoritarianism.

Google

Google's FLoC Is a Terrible Idea (eff.org) 119

Earlier this week, Google said that after it finishes phasing out third-party cookies over the next year or so, it won't introduce other forms of identifiers to track individuals as they browse across the web. Instead, the search giant plans to use something called Federated Learning of Cohorts (FLoC), which the company says has shown promising results. In a deep-dive, EFF has outlined several issues surrounding the usage of FLoC. The introductory excerpt follows: The third-party cookie is dying, and Google is trying to create its replacement. No one should mourn the death of the cookie as we know it. For more than two decades, the third-party cookie has been the lynchpin in a shadowy, seedy, multi-billion dollar advertising-surveillance industry on the Web; phasing out tracking cookies and other persistent third-party identifiers is long overdue. However, as the foundations shift beneath the advertising industry, its biggest players are determined to land on their feet. Google is leading the charge to replace third-party cookies with a new suite of technologies to target ads on the Web. And some of its proposals show that it hasn't learned the right lessons from the ongoing backlash to the surveillance business model. This post will focus on one of those proposals, Federated Learning of Cohorts (FLoC), which is perhaps the most ambitious -- and potentially the most harmful.

FLoC is meant to be a new way to make your browser do the profiling that third-party trackers used to do themselves: in this case, boiling down your recent browsing activity into a behavioral label, and then sharing it with websites and advertisers. The technology will avoid the privacy risks of third-party cookies, but it will create new ones in the process. It may also exacerbate many of the worst non-privacy problems with behavioral ads, including discrimination and predatory targeting. Google's pitch to privacy advocates is that a world with FLoC (and other elements of the "privacy sandbox") will be better than the world we have today, where data brokers and ad-tech giants track and profile with impunity. But that framing is based on a false premise that we have to choose between "old tracking" and "new tracking." It's not either-or. Instead of re-inventing the tracking wheel, we should imagine a better world without the myriad problems of targeted ads.

We stand at a fork in the road. Behind us is the era of the third-party cookie, perhaps the Web's biggest mistake. Ahead of us are two possible futures. In one, users get to decide what information to share with each site they choose to interact with. No one needs to worry that their past browsing will be held against them -- or leveraged to manipulate them -- when they next open a tab. In the other, each user's behavior follows them from site to site as a label, inscrutable at a glance but rich with meaning to those in the know. Their recent history, distilled into a few bits, is "democratized" and shared with dozens of nameless actors that take part in the service of each web page. Users begin every interaction with a confession: here's what I've been up to this week, please treat me accordingly. Users and advocates must reject FLoC and other misguided attempts to reinvent behavioral targeting. We implore Google to abandon FLoC and redirect its effort towards building a truly user-friendly Web.
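The "few bits" label EFF describes was, in Google's proof of concept, a SimHash of the user's browsing history: similar histories produce similar bit patterns and so land in the same cohort. A toy Python version of the idea (the real proposal used far more input and careful cohort sizing):

    import hashlib

    def cohort_label(domains, bits=8):
        # Toy SimHash: collapse a browsing history into a short cohort ID.
        weights = [0] * bits
        for d in domains:
            h = int.from_bytes(hashlib.sha256(d.encode()).digest()[:4], "big")
            for i in range(bits):
                weights[i] += 1 if (h >> i) & 1 else -1
        # Histories that mostly agree bit-by-bit get the same label, so every
        # site the user visits can read "what kind of user this is" with no cookie.
        return sum(1 << i for i, w in enumerate(weights) if w > 0)

    print(cohort_label(["news.example", "forum.example", "bank.example"]))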

The Courts

Accused Murderer Wins Right To Check Source Code of DNA Testing Kit (theregister.com) 167

"A New Jersey appeals court has ruled that a man accused of murder is entitled to review proprietary genetic testing software to challenge evidence presented against him," reports The Register.

Long-time Slashdot reader couchslug shared their report: The maker of the software, Cybergenetics, has insisted in lower court proceedings that the program's source code is a trade secret. The co-founder of the company, Mark Perlin, is said to have argued against source code analysis by claiming that the program, consisting of 170,000 lines of MATLAB code, is so dense it would take eight and a half years to review at a rate of ten lines an hour. The company offered the defense access under tightly controlled conditions outlined in a non-disclosure agreement, which included accepting a $1m liability fine in the event code details leaked. But the defense team objected to the conditions, which they argued would hinder their evaluation and would deter any expert witness from participating...

Those arguing on behalf of the defense cited past problems with other genetic testing software such as STRmix and FST (Forensic Statistical Tool). Defense expert witnesses Mats Heimdahl and Jeanna Matthews, for example, said that STRmix had 13 coding errors that affected 60 criminal cases, errors not revealed until a source code review. They also pointed out, as the appeals court ruling describes, how an FST source code review "uncovered that a 'secret function...was present in the software, tending to overestimate the likelihood of guilt.'"

EFF activists have already filed briefs in multiple courts "warning of the danger of secret software being used to convict criminal defendants," reports an EFF blog post.

"No one should be imprisoned or executed based on secret evidence that cannot be fairly evaluated for its reliability, and the ruling in this case will help prevent that injustice."
