Privacy

Illinois Governor Approves Business-Friendly Overhaul of Biometric Privacy Law (reuters.com) 38

Illinois Governor J.B. Pritzker has signed a bill into law that will significantly curb the penalties companies could face for improperly collecting and using fingerprints and other biometric data from workers and consumers. From a report: The bill passed by the legislature in May and signed by Pritzker, a Democrat, on Friday amends the state's Biometric Information Privacy Act (BIPA) so that companies can be held liable only for a single violation per person, rather than for each time biometric data is allegedly misused.

The amendments will dramatically limit companies' exposure in BIPA cases and could discourage plaintiffs' lawyers from filing many lawsuits in the first place, management-side lawyers said. "By limiting statutory damages to a single recovery per individual ... companies in most instances will no longer face the prospect of potentially annihilative damages awards that greatly outpace any privacy harms," David Oberly, of counsel at Baker Donelson in Washington, D.C., said before the bill was signed. BIPA, a 2008 law, requires companies to obtain permission before collecting fingerprints, retinal scans and other biometric information from workers and consumers. The law imposes penalties of $1,000 per violation and $5,000 for reckless or intentional violations.

Security

Every Microsoft Employee Is Now Being Judged on Their Security Work (theverge.com) 100

Reeling from security and optics issues, Microsoft appears to be trying to correct its story. An anonymous reader shares a report: Microsoft made it clear earlier this year that it was planning to make security its top priority, following years of security issues and mounting criticisms. Starting today, the software giant is tying its security efforts to employee performance reviews. Kathleen Hogan, Microsoft's chief people officer, has outlined what the company expects of employees in an internal memo obtained by The Verge. "Everyone at Microsoft will have security as a Core Priority," says Hogan. "When faced with a tradeoff, the answer is clear and simple: security above all else."

A lack of security focus for Microsoft employees could impact promotions, merit-based salary increases, and bonuses. "Delivering impact for the Security Core Priority will be a key input for managers in determining impact and recommending rewards," Microsoft is telling employees in an internal Microsoft FAQ on its new policy. Microsoft has now placed security as one of its key priorities alongside diversity and inclusion. Both are now required to be part of performance conversations -- internally called a "Connect" -- for every employee, alongside priorities that are agreed upon between employees and their managers.

Social Networks

Founder of Collapsed Social Media Site 'IRL' Charged With Fraud Over Faked Users (bbc.com) 22

This week America's Securities and Exchange Commission filed fraud charges against the former CEO of the startup social media site "IRL".

The BBC reports: IRL — which was once considered a potential rival to Facebook — took its name from its intention to get its online users to meet up in real life. However, the initial optimism evaporated after it emerged most of IRL's users were bots, with the platform shutting in 2023...

The SEC says it believes [CEO Abraham] Shafi raised about $170m by portraying IRL as the new success story in the social media world. It alleges he told investors that IRL had attracted the vast majority of its supposed 12 million users through organic growth. In reality, it argues, IRL was spending millions of dollars on advertisements which offered incentives to prospective users to download the IRL app. That expenditure, it is alleged, was subsequently hidden in the company's books.

IRL received multiple rounds of venture capital financing, eventually reaching "unicorn status" with a $1.17 billion valuation, according to TechCrunch. But it shut down in 2023 "after an internal investigation by the company's board found that 95% of the app's users were 'automated or from bots'."

TechCrunch notes it's the second time in the same week — and at least the fourth time in the past several months — that the SEC has charged a venture-backed founder on allegations of fraud... Earlier this week, the SEC charged BitClout founder Nader Al-Naji with fraud and unregistered offering of securities, claiming he used his pseudonymous online identity "DiamondHands" to avoid regulatory scrutiny while he raised over $257 million in cryptocurrency. BitClout, a buzzy crypto startup, was backed by high-profile VCs such as a16z, Sequoia, Chamath Palihapitiya's Social Capital, Coinbase Ventures and Winklevoss Capital.

In June, the SEC charged Ilit Raz, CEO and founder of the now-shuttered AI recruitment startup Joonko, with defrauding investors of at least $21 million. The agency alleged Raz made false and misleading statements about the quantity and quality of Joonko's customers, the number of candidates on its platform and the startup's revenue.

The agency has also gone after venture firms in recent months. In May, the SEC charged Robert Scott Murray and his firm Trillium Capital LLC with a fraudulent scheme to manipulate the stock price of Getty Images Holdings Inc. by announcing a phony offer by Trillium to purchase Getty Images.

Programming

DARPA Wants to Automatically Transpile C Code Into Rust - Using AI (theregister.com) 236

America's Defense Department has launched a project "that aims to develop machine-learning tools that can automate the conversion of legacy C code into Rust," reports the Register — with an online event already scheduled for later this month for those planning to submit proposals: The reason to do so is memory safety. Memory safety bugs, such as buffer overflows, account for the majority of major vulnerabilities in large codebases. And DARPA's hope [that's the Defense Department's R&D agency] is that AI models can help with the programming language translation, in order to make software more secure. "You can go to any of the LLM websites, start chatting with one of the AI chatbots, and all you need to say is 'here's some C code, please translate it to safe idiomatic Rust code,' cut, paste, and something comes out, and it's often very good, but not always," said Dan Wallach, DARPA program manager for TRACTOR, in a statement. "The research challenge is to dramatically improve the automated translation from C to Rust, particularly for program constructs with the most relevance...."
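The memory-safety gap driving TRACTOR can be sketched in a few lines. As a minimal, hypothetical example (not TRACTOR output), the Rust lookup below is the safe counterpart of a C read like `int v = buf[5];` on a three-element array, which compiles and silently reads out of bounds:

```rust
// Hypothetical sketch of what "safe, idiomatic Rust" buys over C.
// The equivalent C lookup -- `int v = buf[i];` with i == 5 on a
// three-element array -- compiles and silently reads past the buffer.
fn lookup(buf: &[i32], i: usize) -> Option<i32> {
    // Checked indexing: an out-of-range access yields None, never garbage memory.
    buf.get(i).copied()
}

fn main() {
    let buf = [10, 20, 30];
    assert_eq!(lookup(&buf, 1), Some(20)); // in bounds
    assert_eq!(lookup(&buf, 5), None);     // the overflow case, safely rejected
    println!("no out-of-bounds reads possible");
}
```

The hard part DARPA describes is not this mechanical substitution but preserving the original program's intent while restructuring it to satisfy Rust's ownership and borrowing rules.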

DARPA's characterization of the situation suggests the verdict on C and C++ has already been rendered. "After more than two decades of grappling with memory safety issues in C and C++, the software engineering community has reached a consensus," the research agency said, pointing to the Office of the National Cyber Director's call to do more to make software more secure. "Relying on bug-finding tools is not enough...."

Peter Morales, CEO of Code Metal, a company that just raised $16.5 million to focus on transpiling code for edge hardware, told The Register the DARPA project is promising and well-timed. "I think [TRACTOR] is very sound in terms of the viability of getting there and I think it will have a pretty big impact in the cybersecurity space where memory safety is already a pretty big conversation," he said.

DARPA's statement had an ambitious headline: "Eliminating Memory Safety Vulnerabilities Once and For All."

"Rust forces the programmer to get things right," said DARPA project manager Wallach. "It can feel constraining to deal with all the rules it forces, but when you acclimate to them, the rules give you freedom. They're like guardrails; once you realize they're there to protect you, you'll become free to focus on more important things."

Code Metal's Morales called the project "a DARPA-hard problem," noting the daunting number of edge cases that might come up. And even DARPA's program manager conceded to the Register that "some things like the Linux kernel are explicitly out of scope, because they've got technical issues where Rust wouldn't fit."

Thanks to long-time Slashdot reader RoccamOccam for sharing the news.
Government

Is the 'Kids Online Safety Act' Losing Momentum? (theguardian.com) 40

America's Senate "overwhelmingly passed major online safety reforms to protect children on social media," reports the Guardian.

"But with ongoing pushback from the tech industry and freedom of speech organizations, the legislation faces an uncertain future in the House." "It's a terrible idea to let politicians and bureaucrats decide what people should read and view online," freedom of speech group the Electronic Frontier Foundation said of the Senate's passage of Kosa... Advocates of Kosa reject these critiques, noting the bill has been revised to address many of those concerns — including shifting enforcement from attorneys general to the federal trade commission and focusing the "duty of care" provisions on product design features of the site or app rather than content specifically. A number of major LGBTQ+ groups dropped their opposition to the legislation following these changes, including the Human Rights Campaign, GLAAD and the Trevor Project.

After passing the Senate this week, the bill has now moved on to the House, which is on a six-week summer recess until September. Proponents are now directing their efforts towards House legislators to turn the bill into law. Joe Biden has indicated he would sign it if it passes. In a statement Tuesday encouraging the House to pass the legislation, the US president said: "We need action by Congress to protect our kids online and hold big tech accountable for the national experiment they are running on our children for profit...."

House speaker Mike Johnson of Louisiana has expressed support for moving forward on Kosa and passing legislation this Congress, but it's unclear if he will bring the bill up in the House immediately. Some experts say the bill is unlikely to be passed in the House in the form passed by the Senate. "Given the concerns about potential censorship and the possibility of minors' lacking access to vital information, pausing KOSA makes eminent sense," said Gautam Hans, associate clinical professor of law and associate director of the First Amendment Clinic at Cornell Law School. He added that the House may put forward its own similar legislation instead, or modify KOSA to further address some of these concerns.

The political news site Punchbowl News also noted this potentially significant quote: A House GOP leadership aide told us this about KOSA: "We've heard concerns across our Conference and the Senate bill cannot be brought up in its current form."
TechDirt argues that "Senator Rand Paul's really excellent letter laying out the reasons he couldn't support the bill may have had an impact."

Thanks to long-time Slashdot reader SonicSpike for sharing the news.
AI

NIST Releases an Open-Source Platform for AI Safety Testing (scmagazine.com) 4

America's National Institute of Standards and Technology (NIST) has released a new open-source software tool called Dioptra for testing the resilience of machine learning models to various types of attacks.

"Key features that are new from the alpha release include a new web-based front end, user authentication, and provenance tracking of all the elements of an experiment, which enables reproducibility and verification of results," a NIST spokesperson told SC Media: Previous NIST research identified three main categories of attacks against machine learning algorithms: evasion, poisoning and oracle. Evasion attacks aim to trigger an inaccurate model response by manipulating the data input (for example, by adding noise), poisoning attacks aim to impede the model's accuracy by altering its training data, leading to incorrect associations, and oracle attacks aim to "reverse engineer" the model to gain information about its training dataset or parameters, according to NIST.

The free platform enables users to determine to what degree attacks in the three categories mentioned will affect model performance and can also be used to gauge the use of various defenses such as data sanitization or more robust training methods.

The open-source testbed has a modular design to support experimentation with different combinations of factors such as different models, training datasets, attack tactics and defenses. The newly released 1.0.0 version of Dioptra comes with a number of features to maximize its accessibility to first-party model developers, second-party model users or purchasers, third-party model testers or auditors, and researchers in the ML field alike. Along with its modular architecture design and user-friendly web interface, Dioptra 1.0.0 is also extensible and interoperable with Python plugins that add functionality... Dioptra tracks experiment histories, including inputs and resource snapshots that support traceable and reproducible testing, which can unveil insights that lead to more effective model development and defenses.

NIST also published final versions of three "guidance" documents, according to the article. "The first tackles 12 unique risks of generative AI along with more than 200 recommended actions to help manage these risks. The second outlines Secure Software Development Practices for Generative AI and Dual-Use Foundation Models, and the third provides a plan for global cooperation in the development of AI standards."

Thanks to Slashdot reader spatwei for sharing the news.
Government

Artist and Musician Sue SEC Over Its NFT Regulatory Jurisdiction (decrypt.co) 32

"Five years ago, Brian Frye set an elaborate trap," writes Decrypt.co. "Now the law professor is teaming up with a singer-songwriter to finally spring it" on America's Security and Exchange Commission "in a novel lawsuit — and in the process, prevent the regulator from ever coming after NFT art projects again." Over and again, the SEC has sued cherry-picked NFT projects it says qualify as unregistered securities — but never once has the regulator defined what types of NFT projects are legal and which are not, casting a chill over the nascent industry... [In 2019] Frye, an expert in securities law and a fan of novel technologies, minted an NFT of a letter he sent to the SEC in which he declared his art project to constitute an illegal, unregistered security. If the conceptual art project wasn't a security, Frye challenged the agency, then it needed to say so. The SEC never responded to Frye — not then, and not after several more self-incriminating correspondences from the professor. But in due time, the agency began vigorously pursuing, and suing, NFT projects.
So 10 months ago, Jonathan Mann — who writes a new song every day and shares it online — crafted a song titled "This Song is A Security." As a seller of NFTs himself, Mann wrote the song "to fight back against the SEC, and defend his right — plus the rights of other artists like him — to earn revenue," according to the article: Frye, who'd practically been salivating for such an opportunity for half a decade, was a natural fit.... In the lawsuit filed against the SEC in Louisiana earlier this week, they challenged the SEC's standing to regulate their NFT-backed artworks as securities, and demanded the agency declare that their respective art projects do not constitute illegal, unregistered securities offerings.
More from the International Business Times: The complaint asked the court to clarify whether the SEC should regulate art and whether artists were supposed to "register" their artworks before selling the pieces to the general public. The complaint also asked whether artists should be "forced to make public disclosures about the 'risks' of buying their art," and whether artists should be "required to comply" with federal securities laws...

The Blockchain Association, a collective crypto group that includes some of the biggest digital asset firms, asserted that the SEC has no authority over NFT art. "We support the plaintiffs in their quest for legal clarity," the group said.

In an interview with Slashdot, Mann says he started his "Song a Day" project almost 17 years ago (when he was 26 years old) — and his interest in NFTs is sincere: "Over the years, I've always sought a way to make Song A Day sustainable financially, through video contests, conference gigs, ad revenue, royalties, Patreon and more.

"When I came across NFTs in 2017, they didn't have a name. We just called them 'digital collectibles'. For the last 2+ years, NFTs have become that self-sustaining model for my work.

"I know most people believe NFTs are a joke at best and actively harmful at worst. Even most people in the crypto community have given up on them. Despite all that, I still believe they're worth pursuing.

"Collecting an NFT from an artist you love is the most direct way to support them. There's no multinational corporation, no payment processor, and no venture capitalists between you and the artist you want to support."

Slashdot also tracked down the SEC's Office of Public Affairs, and got an official response from SEC public affairs specialist Ryan White.

Slashdot: The suit argues that the SEC's approach "threatens the livelihoods of artists and creators that are simply experimenting with a novel, fast-growing technology," and seeks guidance in the face of a "credible threat of enforcement". Is the SEC going to respond to this lawsuit? And if you don't have an answer at this time, can you give me a general comment on the issues and concerns being raised?

SEC Public Affairs Specialist Ryan White: We would decline comment.

Decrypt.co points out that the lawsuit "has no guarantee of offering some conclusive end to the NFT regulation question... That may only come with concrete legislation or a judgment by the Supreme Court."

But Mann's song still makes a very public show out of their concerns — with Mann even releasing a follow-up song titled "I'm Suing the SEC." (Its music video mixes together wacky clips of Mila Kunis's Stoner Cats and Fonzie jumping a shark with footage of NFT critics like Elizabeth Warren and SEC chairman Gary Gensler.)

And an earlier song also used auto-tune to transform Gensler's remarks about cryptocurrencies into the chorus of a song titled "Hucksters, Fraudsters, Scam Artists, Ponzi Schemes".

Mann later auctioned an NFT of the song — for over $3,000 in Ethereum.
Government

Why DARPA is Funding an AI-Powered Bug-Spotting Challenge (msn.com) 43

Somewhere in America's Defense Department, the DARPA R&D agency is running a two-year contest to write an AI-powered program "that can scan millions of lines of open-source code, identify security flaws and fix them, all without human intervention," reports the Washington Post. [Alternate URL here.]

But as the Post sees it, "The contest is one of the clearest signs to date that the government sees flaws in open-source software as one of the country's biggest security risks, and considers artificial intelligence vital to addressing it." Free open-source programs, such as the Linux operating system, help run everything from websites to power stations. The code isn't inherently worse than what's in proprietary programs from companies like Microsoft and Oracle, but there aren't enough skilled engineers tasked with testing it. As a result, poorly maintained free code has been at the root of some of the most expensive cybersecurity breaches of all time, including the 2017 Equifax disaster that exposed the personal information of half of all Americans. The incident, which led to the largest-ever data breach settlement, cost the company more than $1 billion in improvements and penalties.

If people can't keep up with all the code being woven into every industrial sector, DARPA hopes machines can. "The goal is having an end-to-end 'cyber reasoning system' that leverages large language models to find vulnerabilities, prove that they are vulnerabilities, and patch them," explained one of the advising professors, Arizona State's Yan Shoshitaishvili.... Some large open-source projects are run by near-Wikipedia-size armies of volunteers and are generally in good shape. Some have maintainers who are given grants by big corporate users that turn it into a job. And then there is everything else, including programs written as homework assignments by authors who barely remember them.
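For a sense of scale, even the "find" step is hard. A deliberately naive detector — nothing like the LLM-driven pipeline DARPA wants, and purely illustrative — just pattern-matches one notorious C API:

```rust
// Deliberately naive "find" step: flag calls to C's gets(), which cannot be
// used safely. A real cyber reasoning system must also *prove* the flaw is
// exploitable and synthesize a patch; this toy can't even tell a call from
// a comment, which is why DARPA is betting on far heavier machinery.
fn flag_gets(c_source: &str) -> Vec<usize> {
    c_source
        .lines()
        .enumerate()
        // Exclude the bounded fgets() so the match isn't trivially wrong.
        .filter(|(_, line)| line.contains("gets(") && !line.contains("fgets("))
        .map(|(i, _)| i + 1) // report 1-based line numbers
        .collect()
}

fn main() {
    let src = "char buf[8];\ngets(buf);\nfgets(buf, 8, stdin);\n";
    assert_eq!(flag_gets(src), vec![2]); // only the unbounded call is flagged
}
```

The gap between this kind of string matching and an end-to-end system that finds, proves, and patches unknown vulnerability classes is exactly what the prize money is for.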

"Open source has always been 'Use at your own risk,'" said Brian Behlendorf, who started the Open Source Security Foundation after decades of maintaining a pioneering free server software, Apache, and other projects at the Apache Software Foundation. "It's not free as in speech, or even free as in beer," he said. "It's free as in puppy, and it needs care and feeding."

40 teams entered the contest, according to the article — and seven received $1 million in funding to continue on to the next round, with the finalists to be announced at this year's Def Con.

"Under the terms of the DARPA contest, all finalists must release their programs as open source," the article points out, "so that software vendors and consumers will be able to run them."
Medicine

US Prepares For Bird Flu Pandemic With $176 Million Moderna Vaccine Deal 184

An anonymous reader quotes a report from Ars Technica: The US government will pay Moderna $176 million to develop an mRNA vaccine against a pandemic influenza -- an award given as the highly pathogenic bird flu virus H5N1 continues to spread widely among US dairy cattle. The funding flows through BARDA, the Biomedical Advanced Research and Development Authority, as part of a new Rapid Response Partnership Vehicle (RRPV) Consortium. The program is intended to set up partnerships with industry to help the country better prepare for pandemic threats and develop medical countermeasures, the Department of Health and Human Services said in a press announcement Tuesday.

In its own announcement on Tuesday, Moderna noted that it began a Phase 1/2 trial of a pandemic influenza virus vaccine last year, which included versions targeting H5 and H7 varieties of bird flu viruses. The company said it expects to release the results of that trial this year and that those results will direct the design of a Phase 3 trial, anticipated to begin in 2025. The funding deal will support late-stage development of a "pre-pandemic vaccine against H5 influenza virus," Moderna said. But, the deal also includes options for additional vaccine development in case other public health threats arise.

US health officials have said previously that they were in talks with Moderna and Pfizer about the development of a pandemic bird flu vaccine. The future vaccine will be in addition to standard protein-based bird flu vaccines that are already developed. In recent weeks, the health department has said it is working to manufacture 4.8 million vials of H5 influenza vaccine in the coming months. The plans come three months into the H5N1 dairy outbreak, which is very far from the initial hopes of containment. [...] The more the virus expands its footprint across US dairy farms, adapts to its newfound mammalian host, and comes in contact with humans, the more chances it has to leap to humans and gain the ability to spread among us.
"The award made today is part of our longstanding commitment to strengthen our preparedness for pandemic influenza," said Dawn O'Connell, assistant secretary for Preparedness and Response. "Adding this technology to our pandemic flu toolkit enhances our ability to be nimble and quick against the circulating strains and their potential variants."

In a separate article, Ars Technica reports on a small study in Texas that suggests human cases are going undetected on dairy farms where the H5N1 virus has spread in cows.
Japan

Japan Mandates App To Ensure National ID Cards Aren't Forged (theregister.com) 34

The Japanese government has released details of an app that verifies the legitimacy of its troubled My Number Card -- a national identity document. From a report: Beginning in 2015, every resident of Japan was assigned a 12-digit My Number that paved the way for linking social security, taxation, disaster response and other government services to both the number itself and a smartcard. The plan was to banish bureaucracy and improve public service delivery -- but that didn't happen.

My Number Card ran afoul of data breaches, reports of malfunctioning card readers, and database snafus that linked cards to other citizens' bank accounts. Public trust in the scheme fell, and adoption stalled. Now, according to Japan's Digital Ministry, counterfeit cards are proliferating to help miscreants purchase goods -- particularly mobile phones -- under fake identities. Digital minister Taro Kono yesterday presented his solution to the counterfeits: a soon-to-be-mandatory app that confirms the legitimacy of the card. The app uses the camera on a smartphone to read information printed on the card -- like date of birth and name. It compares those details to what it reads from info stored in the smartcard's resident chip, and confirms the data match without the user ever needing to enter their four-digit PIN.
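The comparison itself is straightforward to sketch. Hypothetically (the struct and field names below are illustrative, not the ministry's actual specification), the app's core check amounts to:

```rust
// Illustrative sketch of the app's core check (field names are hypothetical,
// not Japan's actual spec): data read from the printed card face must match
// the data stored in the embedded chip. A forged face over a mismatched chip
// fails the comparison, and no PIN entry is required for this check.
#[derive(PartialEq, Debug)]
struct CardData {
    name: String,
    date_of_birth: String, // e.g. "1990-04-01"
}

fn card_face_matches_chip(printed: &CardData, chip: &CardData) -> bool {
    printed == chip
}

fn main() {
    let face = CardData { name: "Yamada Taro".into(), date_of_birth: "1990-04-01".into() };
    let chip = CardData { name: "Yamada Taro".into(), date_of_birth: "1990-04-01".into() };
    let forged = CardData { name: "Suzuki Ken".into(), date_of_birth: "1990-04-01".into() };

    assert!(card_face_matches_chip(&face, &chip));
    assert!(!card_face_matches_chip(&forged, &chip)); // counterfeit face detected
}
```

The security of such a scheme rests on the chip contents being hard to rewrite, which is why the check targets forged card faces rather than cloned chips.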

The Courts

US Sues TikTok Over 'Massive-Scale' Privacy Violations of Kids Under 13 (reuters.com) 10

An anonymous reader quotes a report from Reuters: The U.S. Justice Department filed a lawsuit Friday against TikTok and parent company ByteDance for failing to protect children's privacy on the social media app, as the Biden administration continues its crackdown on the platform. The government said TikTok violated the Children's Online Privacy Protection Act, which requires services aimed at children to obtain parental consent to collect personal information from users under age 13. The suit (PDF), which was joined by the Federal Trade Commission, said it was aimed at putting an end "to TikTok's unlawful massive-scale invasions of children's privacy." Representative Frank Pallone, the top Democrat on the Energy and Commerce Committee, said the suit "underscores the importance of divesting TikTok from Chinese Communist Party control. We simply cannot continue to allow our adversaries to harvest vast troves of Americans' sensitive data."

The DOJ said TikTok knowingly permitted children to create regular TikTok accounts, and then create and share short-form videos and messages with adults and others on the regular TikTok platform. TikTok collected personal information from these children without obtaining consent from their parents. The U.S. alleges that for years millions of American children under 13 have been using TikTok and the site "has been collecting and retaining children's personal information." The FTC is seeking penalties of up to $51,744 per violation per day from TikTok for improperly collecting data, which could theoretically total billions of dollars if TikTok were found liable.
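To see why "billions" is not hyperbole, here is a back-of-the-envelope sketch. The counts below are hypothetical (the complaint alleges "millions" of underage users over years); only the $51,744 per-violation, per-day figure comes from the article:

```rust
// Back-of-the-envelope penalty math. Counts are hypothetical; the $51,744
// per-violation, per-day maximum is the figure the FTC is seeking.
fn main() {
    let per_violation_per_day: u64 = 51_744;
    let children: u64 = 1_000_000; // hypothetical slice of the "millions" alleged
    let days: u64 = 1;             // a single day of liability
    let exposure = per_violation_per_day * children * days;
    assert_eq!(exposure, 51_744_000_000); // ~$51.7 billion for one day alone
    println!("${exposure}");
}
```

Multiplying by years of alleged conduct pushes the theoretical maximum far higher, which is why such figures function mainly as settlement leverage rather than realistic awards.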
TikTok said Friday it disagrees "with these allegations, many of which relate to past events and practices that are factually inaccurate or have been addressed. We are proud of our efforts to protect children, and we will continue to update and improve the platform."
Government

Secret Service's Tech Issues Helped Shooter Go Undetected At Trump Rally (theguardian.com) 155

An anonymous reader quotes a report from The Guardian: The technology flaws of the U.S. Secret Service helped the gunman who attempted to assassinate Donald Trump during a rally in Butler, Pennsylvania, last month evade detection. An officer broadcast "long gun!" over the local law enforcement radio system, according to congressional testimony from the Secret Service this week, the New York Times reported. The radio message should have travelled to a command center shared between local police and the Secret Service, but the message was never received by the Secret Service. About 30 seconds later, the shooter, Thomas Crooks, fired his first shots.

It was one of several technology issues facing the Secret Service on 13 July due to either malfunction, improper deployment or the Secret Service opting not to utilize them. The Secret Service had also previously rejected requests from the Trump campaign for more resources over the past two years. The use of a surveillance drone was turned down by the Secret Service at the rally site and the agency also did not bring in a system to boost the signals of agents' devices as the area had poor cell service. And a system to detect drone use in the area by others did not work, according to the report in the New York Times, due to the communications network in the area being overwhelmed by the number of people gathered at the rally. The federal agency did not use technology it had to bolster its communications system. The shooter flew his own drone over the site for 11 minutes without being detected, about two hours before Trump appeared at the rally.
Ronald Rowe Jr, the acting Secret Service director, said it never utilized the technological tools that could have spotted the shooter beforehand.

A former Secret Service officer also told the New York Times he "resigned in 2017 over frustration with the agency's delays in evaluating new technology and getting clearance and funding to obtain it and then train officers on it," notes The Guardian. Furthermore, the Secret Service failed to record communications between federal and local law enforcement at the rally.
United Kingdom

UK Government Shelves $1.66 Billion Tech and AI Plans 35

An anonymous reader shares a report: The new Labour government has shelved $1.66 billion of funding promised by the Conservatives for tech and Artificial Intelligence (AI) projects, the BBC has learned. It includes $1 billion for the creation of an exascale supercomputer at Edinburgh University and a further $640 million for the AI Research Resource, which funds computing power for AI. Both funds were unveiled less than 12 months ago.

The Department for Science, Innovation and Technology (DSIT) said the money was promised by the previous administration but was never allocated in its budget. Some in the industry have criticised the government's decision. Tech business founder Barney Hussey-Yeo posted on X that reducing investment risked "pushing more entrepreneurs to the US." Businessman Chris van der Kuyl described the move as "idiotic." Trade body techUK said the government now needed to make "new proposals quickly" or the UK risked "losing out" to other countries in what are crucial industries of the future.
China

China's Wind and Solar Energy Surpass Coal In Historic First (oilprice.com) 95

According to China's National Energy Administration (NEA), wind and solar energy have collectively eclipsed coal in capacity for the first time ever. By 2026, analysts forecast solar power alone will surpass coal as the country's primary energy source, with a cumulative capacity exceeding 1.38 terawatts (TW) -- 150 gigawatts (GW) more than coal. Oil Price reports: This shift stems from a growing emphasis on cleaner energy sources and a move away from fossil fuels for the nation. Despite coal's early advantage, with around 50 GW of annual installations before 2016, China has made substantial investments to expand its renewable energy infrastructure. Since 2020, annual installations of wind and solar energy have consistently exceeded 100 GW, three to four times the capacity additions for coal. This momentum has only gathered pace since then, with last year seeing China set a record with 293 GW of wind and solar installations, bolstered by gigawatt-scale renewable hub projects from the NEA's first and second batches connected to the country's grid.

China's coal power sector is moving in the opposite direction. Last year, approximately 40 GW of coal power was added, but this figure plummeted to 8 GW in the first half of 2024, according to our estimates. Despite the expansion of renewable energy under supportive policies, the government has implemented stricter restrictions on new coal projects to meet carbon reduction goals. Efforts are now focused on phasing out smaller coal plants, upgrading existing ones to reduce emissions and enforcing more stringent standards for new projects. As a result, the annual capacity addition gap between coal and clean energy has widened dramatically, reaching a 16-fold difference in the first half of 2024.

Government

US Progressives Push For Nvidia Antitrust Investigation (reuters.com) 42

Progressive groups and Senator Elizabeth Warren are urging the Department of Justice to investigate Nvidia for potential antitrust violations due to its dominant position in the AI chip market. The groups criticize Nvidia's bundling of software and hardware, claiming it stifles innovation and locks in customers. Reuters reports: Demand Progress and nine other groups wrote a letter (PDF) this week urging Department of Justice antitrust chief Jonathan Kanter to probe business practices at Nvidia, whose market value hit $3 trillion this summer on demand for chips able to run the complex models behind generative AI. The groups, which oppose monopolies and promote government oversight of tech companies, among other issues, took aim at Nvidia's bundling of software and hardware, a practice that French antitrust enforcers have flagged as they prepare to bring charges.

"This aggressively proprietary approach, which is strongly contrary to industry norms about collaboration and interoperability, acts to lock in customers and stifles innovation," the groups wrote. Nvidia has roughly 80% of the AI chip market, including the custom AI processors made by cloud computing companies like Google, Microsoft and Amazon.com. The chips made by the cloud giants are not available for sale themselves but typically rented through each platform.
A spokesperson for Nvidia said: "Regulators need not be concerned, as we scrupulously adhere to all laws and ensure that NVIDIA is openly available in every cloud and on-prem for every enterprise. We'll continue to support aspiring innovators in every industry and market and are happy to provide any information regulators need."
Government

Senators Propose 'Digital Replication Right' For Likeness, Extending 70 Years After Death 46

An anonymous reader quotes a report from Ars Technica: On Wednesday, US Sens. Chris Coons (D-Del.), Marsha Blackburn (R.-Tenn.), Amy Klobuchar (D-Minn.), and Thom Tillis (R-NC) introduced the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act of 2024. The bipartisan legislation, up for consideration in the US Senate, aims to protect individuals from unauthorized AI-generated replicas of their voice or likeness. The NO FAKES Act would create legal recourse for people whose digital representations are created without consent. It would hold both individuals and companies liable for producing, hosting, or sharing these unauthorized digital replicas, including those created by generative AI. Due to generative AI technology that has become mainstream in the past two years, creating audio or image media fakes of people has become fairly trivial, with easy photorealistic video replicas likely next to arrive. [...]

To protect a person's digital likeness, the NO FAKES Act introduces a "digital replication right" that gives individuals exclusive control over the use of their voice or visual likeness in digital replicas. This right extends 10 years after death, with possible five-year extensions if actively used. It can be licensed during life and inherited after death, lasting up to 70 years after an individual's death. Along the way, the bill defines what it considers to be a "digital replica": "DIGITAL REPLICA.-The term "digital replica" means a newly created, computer-generated, highly realistic electronic representation that is readily identifiable as the voice or visual likeness of an individual that- (A) is embodied in a sound recording, image, audiovisual work, including an audiovisual work that does not have any accompanying sounds, or transmission- (i) in which the actual individual did not actually perform or appear; or (ii) that is a version of a sound recording, image, or audiovisual work in which the actual individual did perform or appear, in which the fundamental character of the performance or appearance has been materially altered; and (B) does not include the electronic reproduction, use of a sample of one sound recording or audiovisual work into another, remixing, mastering, or digital remastering of a sound recording or audiovisual work authorized by the copyright holder."
The NO FAKES Act "includes provisions that aim to balance IP protection with free speech," notes Ars. "It provides exclusions for recognized First Amendment protections, such as documentaries, biographical works, and content created for purposes of comment, criticism, or parody."
China

Germany Says China Was Behind a 2021 Cyberattack on Government Agency (apnews.com) 31

An investigation has determined that "Chinese state actors" were responsible for a 2021 cyberattack on Germany's national office for cartography, officials in Berlin said Wednesday. From a report: The Chinese ambassador was summoned to the Foreign Ministry for a protest for the first time in decades. Foreign Ministry spokesperson Sebastian Fischer said the German government has "reliable information from our intelligence services" about the source of the attack on the Federal Agency for Cartography and Geodesy, which he said was carried out "for the purpose of espionage."

"This serious cyberattack on a federal agency shows how big the danger is from Chinese cyberattacks and spying," Interior Minister Nancy Faeser said in a statement. "We call on China to refrain from and prevent such cyberattacks. These cyberattacks threaten the digital sovereignty of Germany and Europe." Fischer declined to elaborate on who exactly in China was responsible. He said a Chinese ambassador was last summoned to the German Foreign Ministry in 1989 after the Tiananmen Square crackdown.

Earth

Brazil's Radical Plan To Tax Global Super-Rich To Tackle Climate Crisis (theguardian.com) 167

An anonymous reader quotes a report from The Guardian: Proposals to slap a wealth tax on the world's super-rich could yield $250 billion a year to tackle the climate crisis and address poverty and inequality, but would affect only a small number of billionaire families, Brazil's climate chief has said. Ministers from the G20 group of the world's biggest developed and emerging economies are meeting in Rio de Janeiro this weekend, where Brazil's proposal for a 2% wealth tax on those with assets worth more than $1 billion is near the top of the agenda. No government was speaking out against the tax, said Ana Toni, who is national secretary for climate change in the government of President Luiz Inacio Lula da Silva. "Our feeling is that, morally, nobody's against," she told the Observer in an interview. "But the level of support from some countries is bigger than others."

However, the lack of overt opposition does not mean the tax proposal is likely to be approved. Many governments are privately skeptical but unwilling to publicly criticize a plan that would shave a tiny amount from the rapidly accumulating wealth of the planet's richest few, and raise money to address the pressing global climate emergency. Janet Yellen, the US Treasury secretary, told journalists in Rio that the US "did not see the need" for a global initiative. "People are not keen on global taxes," Toni admitted. "And there is a question over how you implement global taxes." But she said levying and raising a tax globally was possible, as had been shown by G7 finance ministers' agreement to levy a minimum 15% corporate tax. "It should be at a global level, because otherwise, obviously, rich people will move from one country to another," she said.

Only about 100 families around the world would be affected by the proposed 2% levy, she added. The world's richest 1% have added $42 trillion to their wealth in the past decade, roughly 36 times more than the bottom half of the world's population did. The question of how funds raised by such taxation should be spent had also not been settled, noted Toni. Some economists have argued that the idea was more likely to be accepted if the proceeds were devoted to solving the climate crisis than if they were used to address global inequality. Other experts say at least some of the money should be used for poverty alleviation.

Open Source

A New White House Report Embraces Open-Source AI 15

An anonymous reader quotes a report from ZDNet: According to a new statement, the White House realizes open source is key to artificial intelligence (AI) development -- much like many businesses using the technology. On Tuesday, the National Telecommunications and Information Administration (NTIA) issued a report supporting open-source and open models to promote innovation in AI while emphasizing the need for vigilant risk monitoring. The report recommends that the US continue to support AI openness while working on new capabilities to monitor potential AI risks but refrain from restricting the availability of open model weights.
Government

Senate Passes the Kids Online Safety Act (theverge.com) 84

An anonymous reader quotes a report from The Verge: The Senate passed the Kids Online Safety Act (KOSA) and the Children and Teens' Online Privacy Protection Act (also known as COPPA 2.0), the first major internet bills meant to protect children to clear the chamber in two decades. A legislative vehicle that included both KOSA and COPPA 2.0 passed 91-3. Senate Majority Leader Chuck Schumer (D-NY) called it "a momentous day" in a speech ahead of the vote, saying that "the Senate keeps its promise to every parent who's lost a child because of the risks of social media." He called for the House to pass the bills "as soon as they can."

KOSA is a landmark piece of legislation that a persistent group of parent advocates played a key role in pushing forward -- meeting with lawmakers, showing up at hearings with tech CEOs, and bringing along photos of their children, who, in many cases, died by suicide after experiencing cyberbullying or other harms from social media. These parents say that a bill like KOSA could have saved their own children from suffering and hope it will do the same for other children. The bill works by creating a duty of care for online platforms that are used by minors, requiring they take "reasonable" measures in how they design their products to mitigate a list of harms, including online bullying, sexual exploitation, drug promotion, and eating disorders. It specifies that the bill doesn't prevent platforms from letting minors search for any specific content or providing resources to mitigate any of the listed harms, "including evidence-informed information and clinical resources."
The legislation faces significant opposition from digital rights, free speech, and LGBTQ+ advocates who fear it could lead to censorship and privacy issues. Critics argue that the duty of care may result in aggressive content filtering and mandatory age verification, potentially blocking important educational and lifesaving content.

The bill may also face legal challenges from tech platforms citing First Amendment violations.
