Security

'Invasive' Iranian Intelligence Group Believed to Be The Ones Who Breached Trump's Campaign (reuters.com) 98

Reuters reports that the Iranian hacking team which compromised the campaign of U.S. presidential candidate Donald Trump "is known for placing surveillance software on the mobile phones of its victims, enabling them to record calls, steal texts and silently turn on cameras and microphones, according to researchers and experts who follow the group." Known as APT42 or CharmingKitten by the cybersecurity research community, the accused Iranian hackers are widely believed to be associated with an intelligence division inside Iran's military, known as the Intelligence Organization of the Islamic Revolutionary Guard Corps or IRGC-IO. Their appearance in the U.S. election is noteworthy, sources told Reuters, because of their invasive espionage approach against high-value targets in Washington and Israel. "What makes (APT42) incredibly dangerous is this idea that they are an organization that has a history of physically targeting people of interest," said John Hultquist, chief analyst with U.S. cybersecurity firm Mandiant, who referenced past research that found the group surveilling the cell phones of Iranian activists and protesters... Hultquist said the hackers commonly use mobile malware that allows them to "record phone calls, room audio recordings, pilfer SMS (text) inboxes, take images off of a machine," and gather geolocation data...

APT42 also commonly impersonates journalists and Washington think tanks in complex, email-based social engineering operations that aim to lure their targets into opening booby-trapped messages, which let them take over systems. The group's "credential phishing campaigns are highly targeted and well-researched; the group typically targets a small number of individuals," said Josh Miller, a threat analyst with email security company Proofpoint. They often target anti-Iran activists, reporters with access to sources inside Iran, Middle Eastern academics and foreign-policy advisers. This has included the hacking of western government officials and American defense contractors. For example, in 2018, the hackers targeted nuclear workers and U.S. Treasury department officials around the time the United States formally withdrew from the Joint Comprehensive Plan of Action (JCPOA), said Allison Wikoff, a senior cyber intelligence analyst with professional services company PricewaterhouseCoopers.

"APT42 is still actively targeting campaign officials and former Trump administration figures critical of Iran, according to a blog post by Google's cybersecurity research team."
The Military

Workers at Google DeepMind Push Company to Drop Military Contracts (time.com) 143

Nearly 200 Google DeepMind workers signed a letter urging Google to cease its military contracts, expressing concerns that the AI technology they develop is being used in warfare, which they believe violates Google's own AI ethics principles. "The letter is a sign of a growing dispute within Google between at least some workers in its AI division -- which has pledged to never work on military technology -- and its Cloud business, which has contracts to sell Google services, including AI developed inside DeepMind, to several governments and militaries including those of Israel and the United States," reports TIME Magazine. "The signatures represent some 5% of DeepMind's overall headcount -- a small portion to be sure, but a significant level of worker unease for an industry where top machine learning talent is in high demand." From the report: The DeepMind letter, dated May 16 of this year, begins by stating that workers are "concerned by recent reports of Google's contracts with military organizations." It does not refer to any specific militaries by name -- saying "we emphasize that this letter is not about the geopolitics of any particular conflict." But it links out to an April report in TIME which revealed that Google has a direct contract to supply cloud computing and AI services to the Israeli Ministry of Defense, under a wider contract with Israel called Project Nimbus. The letter also links to other stories alleging that the Israeli military uses AI to carry out mass surveillance and target selection for its bombing campaign in Gaza, and that Israeli weapons firms are required by the government to buy cloud services from Google and Amazon.

"Any involvement with military and weapon manufacturing impacts our position as leaders in ethical and responsible AI, and goes against our mission statement and stated AI Principles," the letter that circulated inside Google DeepMind says. (Those principles state the company will not pursue applications of AI that are likely to cause "overall harm," contribute to weapons or other technologies whose "principal purpose or implementation" is to cause injury, or build technologies "whose purpose contravenes widely accepted principles of international law and human rights.") The letter says its signatories are concerned with "ensuring that Google's AI Principles are upheld," and adds: "We believe [DeepMind's] leadership shares our concerns." [...]

The letter calls on DeepMind's leaders to investigate allegations that militaries and weapons manufacturers are Google Cloud users; terminate access to DeepMind technology for military users; and set up a new governance body responsible for preventing DeepMind technology from being used by military clients in the future. Three months on from the letter's circulation, Google has done none of those things, according to four people with knowledge of the matter. "We have received no meaningful response from leadership," one said, "and we are growing increasingly frustrated."

The Courts

US Sues Georgia Tech Over Alleged Cybersecurity Failings As a Pentagon Contractor (theregister.com) 37

The Register's Connor Jones reports: The U.S. is suing one of its leading research universities over a litany of alleged failures to meet cybersecurity standards set by the Department of Defense (DoD) for contract awardees. Georgia Institute of Technology (GIT), commonly referred to as Georgia Tech, and its contracting entity, Georgia Tech Research Corporation (GTRC), are being investigated following whistleblower reports from insiders Christopher Craig and Kyle Koza about alleged (PDF) failures to protect controlled unclassified information (CUI). The series of allegations dates back to 2019 and continued for years afterward, although Koza was said to have identified the issues as early as 2018.

Among the allegations is the suggestion that between May 2019 and February 2020, Georgia Tech's Astrolavos Lab -- ironically a group that focuses on cybersecurity issues affecting national security -- failed to develop and implement a cybersecurity plan that complied with DoD standards (NIST 800-171). When the plan was implemented in February 2020, the lawsuit alleges that it wasn't properly scoped -- not all the necessary endpoints were included -- and that for years afterward, Georgia Tech failed to maintain that plan in line with regulations. Additionally, the Astrolavos Lab was accused of failing to implement anti-malware solutions across devices and the lab's network. The lawsuit alleges that the university approved the lab's refusal to deploy the anti-malware software "to satisfy the demands of the professor that headed the lab," the DoJ said. This is claimed to have occurred between May 2019 and December 2021. Such a refusal violates both federal requirements for DoD contractors and Georgia Tech's own policies, but it allegedly happened anyway.

The university and the GTRC also, it is claimed, submitted a false cybersecurity assessment score in December 2020 -- a requirement for all DoD contractors to demonstrate they're meeting compliance standards. The two organizations are accused of issuing themselves a score of 98, which was later deemed fraudulent: the US alleges the assessment was carried out on a "fictitious" environment, so the score did not correspond to any system actually covered by the DoD contract. The claims are being made under the False Claims Act (FCA), invoked through the Civil Cyber-Fraud Initiative (CCFI), which was introduced in 2021 to punish entities that knowingly put the safety of United States IT systems at risk. It's a first-of-its-kind case being pursued as part of the CCFI; all previous cases brought under the CCFI were settled before they reached the litigation stage.
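For background (not stated in the article): under the DoD's NIST SP 800-171 self-assessment methodology, scores start from a maximum of 110, and a weighted deduction of 5, 3, or 1 points is taken for each security requirement that is not implemented. A score of 98 therefore implies only a handful of reported gaps, for example 110 - (5 + 3 + 3 + 1) = 98, one hypothetical combination among several.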

Microsoft

Microsoft Plans Windows Security Overhaul After CrowdStrike Outage 63

Microsoft is stepping up its plans to make Windows more resilient to buggy software [non-paywalled source] after a botched CrowdStrike update took down millions of PCs and servers in a global IT outage. Financial Times: The tech giant has in the past month intensified talks with partners about adapting the security procedures around its operating system to better withstand the kind of software error that crashed 8.5mn Windows devices on July 19. Critics say that any changes by Microsoft would amount to a concession of shortcomings in Windows' handling of third-party security software that could have been addressed sooner.

Yet they would also prove controversial among security vendors that would have to make radical changes to their products, and force many Microsoft customers to adapt their software. Last month's outages -- which are estimated to have caused billions of dollars in damages after grounding thousands of flights and disrupting hospital appointments worldwide -- heightened scrutiny from regulators and business leaders over the extent of access that third-party software vendors have to the core, or kernel, of Windows operating systems. Microsoft will host a summit next month for government representatives and cyber security companies, including CrowdStrike, to discuss "improving resiliency and protecting mutual customers' critical infrastructure," Microsoft said on Friday.
Education

Fluoride At Twice the Recommended Limit Is Linked To Lower IQ In Kids (apnews.com) 153

An anonymous reader quotes a report from the Associated Press: A U.S. government report expected to stir debate concluded that fluoride in drinking water at twice the recommended limit is linked with lower IQ in children. The report, based on an analysis of previously published research, marks the first time a federal agency has determined -- "with moderate confidence" -- that there is a link between higher levels of fluoride exposure and lower IQ in kids. While the report was not designed to evaluate the health effects of fluoride in drinking water alone, it is a striking acknowledgment of a potential neurological risk from high levels of fluoride. Fluoride strengthens teeth and reduces cavities by replacing minerals lost during normal wear and tear, according to the U.S. Centers for Disease Control and Prevention. The addition of low levels of fluoride to drinking water has long been considered one of the greatest public health achievements of the last century.

The long-awaited report released Wednesday comes from the National Toxicology Program, part of the Department of Health and Human Services. It summarizes a review of studies, conducted in Canada, China, India, Iran, Pakistan, and Mexico, that concludes that drinking water containing more than 1.5 milligrams of fluoride per liter is consistently associated with lower IQs in kids. The report did not try to quantify exactly how many IQ points might be lost at different levels of fluoride exposure. But some of the studies reviewed in the report suggested IQ was 2 to 5 points lower in children who'd had higher exposures.

Since 2015, federal health officials have recommended a fluoridation level of 0.7 milligrams per liter of water; for the five decades before that, the recommended upper limit was 1.2 milligrams. The World Health Organization has set a safe limit for fluoride in drinking water of 1.5 milligrams per liter. The report said that about 0.6% of the U.S. population -- about 1.9 million people -- are on water systems with naturally occurring fluoride levels of 1.5 milligrams or higher. The 324-page report did not reach a conclusion about the risks of lower levels of fluoride, saying more study is needed. It also did not address what high levels of fluoride might do to adults.
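As a rough check on those figures (the excerpt does not give the population base, so assume one in the low 300 millions): 0.6% of about 320 million people is roughly 1.9 million, consistent with the estimate above.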

Google

Google Agrees To $250 Million Deal To Fund California Newsrooms, AI (politico.com) 33

Google has reached a groundbreaking deal with California lawmakers to contribute millions to local newsrooms, aiming to support journalism amid its decline as readers migrate online and advertising dollars evaporate. The agreement also includes a controversial provision for artificial intelligence funding. Politico reports: California emulated a strategy that other countries like Canada have used to try and reverse the journalism industry's decline as readership migrated online and advertising dollars evaporated. [...] Under the deal, the details of which were first reported by POLITICO on Monday, Google and the state of California would jointly contribute a minimum of $125 million over five years to support local newsrooms through a nonprofit public charity housed at UC Berkeley's journalism school. Google would contribute at least $55 million, and state officials would kick in at least $70 million. The search giant would also commit $50 million over five years to unspecified "existing journalism programs."

The deal would also steer millions in tax-exempt private dollars toward an artificial intelligence initiative that people familiar with the negotiations described as an effort to cultivate tech industry buy-in. Funding for artificial intelligence was not included in the bill at the core of negotiations, authored by Assemblymember Buffy Wicks. The agreement has drawn criticism from a journalists' union that had so far championed Wicks' effort. Media Guild of the West President Matt Pearce in an email to union members Sunday evening said such a deal would entrench "Google's monopoly power over our newsrooms."
"This public-private partnership builds on our long history of working with journalism and the local news ecosystem in our home state, while developing a national center of excellence on AI policy," said Kent Walker, chief legal officer for Alphabet, the parent company of Google.

Media Guild of the West's Pearce wasn't so chipper. He criticized the plan in emails to union members, calling it a "total rout of the state's attempts to check Google's stranglehold over our newsrooms."
China

China Is Backing Off Coal Power Plant Approvals (apnews.com) 91

Approvals for new coal-fired power plants in China dropped by 80% in the first half of this year compared to last, according to an analysis from Greenpeace and the Shanghai Institutes for International Studies. The Associated Press reports: A review of project documents by Greenpeace East Asia found that 14 new coal plants were approved from January to June with a total capacity of 10.3 gigawatts, down 80% from 50.4 gigawatts in the first half of last year. Authorities approved 90.7 gigawatts in 2022 and 106.4 gigawatts in 2023, a surge that raised alarm among climate experts. China leads the world in solar and wind power installations but the government has said that coal plants are still needed for periods of peak demand because wind and solar power are less reliable. While China's grid gives priority to greener sources of energy, experts worry that it won't be easy for China to wean itself off coal once the new capacity is built.
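The two figures are consistent: 10.3 gigawatts is roughly 20% of the 50.4 gigawatts approved in the first half of last year, i.e., a decline of about 80%.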

"We may now be seeing a turning point," Gao Yuhe, the project lead for Greenpeace East Asia, said in a statement. "One question remains here. Are Chinese provinces slowing down coal approvals because they've already approved so many coal projects ...? Or are these the last gasps of coal power in an energy transition that has seen coal become increasingly impractical? Only time can tell." [...] Gao said that China should focus its resources on better connecting wind and solar power to the grid rather than building more coal power plants. Coal provides more than 60% of the country's electricity. "Coal plays a foundation role in China's energy security," Li Fulong, an official of National Energy Administration, said at a news conference in June.
The report notes that China is also looking to nuclear power to help reach its carbon reduction targets. The country approved five nuclear power projects on Monday with 11 units and a total cost of $28 billion.
AI

Wyoming Voters Face Mayoral Candidate Who Vows To Let AI Bot Run Government 51

An anonymous reader quotes a report from The Guardian: Voters in Wyoming's capital city on Tuesday are faced with deciding whether to elect a mayoral candidate who has proposed to let an artificial intelligence bot run the local government. Earlier this year, the candidate in question -- Victor Miller -- filed for him and his customized ChatGPT bot, named Vic (Virtual Integrated Citizen), to run for mayor of Cheyenne, Wyoming. He has vowed to helm the city's business with the AI bot if he wins. Miller has said that the bot is capable of processing vast amounts of data and making unbiased decisions. In what AI experts say is a first for US political campaigns, Miller and Vic have told local news outlets in interviews that their form of proposed governance is a "hybrid approach." The AI bot told Your Wyoming Link that its role would be to provide data-driven insights and innovative solutions for Cheyenne. Meanwhile, Vic said, the human elected office contender, Miller, would serve as the official mayor if chosen by voters and would ensure that "all actions are legally and practically executed."

"It's about blending AI's capabilities with human judgment to effectively lead Cheyenne," the bot said. The bot said it did not have political affiliations -- and its goal is to "focus on data-driven practical solutions that benefit the community." During a meet-and-greet this summer, the Washington Post reported that the AI bot was asked how it would go about making decisions "according to human factor, involving humans, and having to make a decision that affects so many people." "Making decisions that affect many people requires a careful balance of data-driven insights and human empathy," the AI bot responded, according to an audio recording obtained and published by the Washington Post. Vic then ran through a multi-part plan that suggested using AI technology to gather data on public opinion and feedback from the community, holding town hall meetings to listen to residents' concerns, consulting experts in relevant fields, evaluating the human impact of the decision and providing transparency about the decision-making. According to Wyoming Public Media, Miller has also pledged that he would donate half the mayoral salary to a non-profit if he is elected. The other half could be used to continually improve the AI bot, he said.
Miller has faced some pushback since announcing his mayoral campaign. Wyoming's Secretary of State, Chuck Gray, launched an investigation to determine if the AI bot could legally appear on the ballot, citing state law that says only real people who are registered to vote can run for office. City officials clarified that Miller is the actual candidate, so he was allowed to continue. However, Laramie County ruled that only Miller's name would appear on the ballot, not the bot's.

OpenAI later shut down Miller's account, but he quickly created a new one and continued his campaign.
Social Networks

India's Influencers Fear a New Law Could Make Them Register with the Government (restofworld.org) 25

It's the most populous country on earth — home to 1.4 billion people. But "The Indian government has plans to classify social media creators as 'digital news broadcasters,'" according to the nonprofit site RestofWorld.org.

While there's "no clarity" on the government's next move, the proposed legislation would require social media creators "to register with the government, set up a content evaluation committee that checks all content before it is published, and appoint complaint handlers — all at their own expense. Any failures in compliance could lead to criminal charges, including jail term." On July 26, the Hindustan Times reported that the government plans to tweak the proposed Broadcasting Services (Regulation) Bill, which aims to combine all regulations for broadcasters under one law. As per a new version of the bill, which has been reviewed by Rest of World, the government defines "digital news broadcaster" as "any person who broadcasts news and current affairs programs through an online paper, news portal, website, social media intermediary, or other similar medium as part of a systematic business, professional or commercial activity."

Creators and digital rights activists believe the potential legislation will tighten the government's grip over online content and threaten the last bastion of press freedom for independent journalists in the country. Over 785 Indian creators have sent a letter to the government seeking more transparency in the process of drafting the bill. Creators have also stormed social media with hashtags like #KillTheBill, and made videos to educate their followers about the proposal.

One YouTube creator told the site that if the government requires them to appoint a "grievance redressal officer," they might simply film themselves, responding to grievances — to "make content out of it".
Social Networks

41 Science Professionals Decry Harms and Mistrust Caused By COVID Lab Leak Claim (yahoo.com) 303

In 1999 Los Angeles Times reporter Michael Hiltzik co-authored a Pulitzer Prize-winning story. Now a business columnist for the Times, this week he covers new pushback on the COVID lab leak claim: Here's an indisputable fact about the theory that COVID originated in a laboratory: Most Americans believe it to be true. That's important for several reasons. One is that evidence to support the theory is nonexistent.

Another is that the claim itself has fomented a surge of attacks on science and scientists that threatens to drive promising researchers out of the crucial field of pandemic epidemiology. That concern was aired in a commentary by 41 biologists, immunologists, virologists and physicians published Aug. 1 in the Journal of Virology. The journal probably isn't in the libraries of ordinary readers, but the article's prose is commendably clear and its conclusions eye-opening. "The lab leak narrative fuels mistrust in science and public health infrastructures," the authors observe. "Scientists and public health professionals stand between us and pandemic pathogens; these individuals are essential for anticipating, discovering, and mitigating future pandemic threats. Yet, scientists and public health professionals have been harmed and their institutions have been damaged by the skewed public and political opinions stirred by continued promotion of the lab leak hypothesis in the absence of evidence...."

[O]ne can't advance the lab leak theory without positing a vast conspiracy encompassing scientists in China and the U.S., and Chinese and U.S. government officials. How else could all the evidence of a laboratory event that resulted in more than 7 million deaths worldwide be kept entirely suppressed for nearly five years... "Validating the lab leak hypothesis requires intelligence evidence that the WIV possessed or carried out work on a SARS-CoV-2 precursor virus prior to the pandemic," the Virology paper asserts. "Neither the scientific community nor multiple western intelligence agencies have found such evidence." Despite that, "the lab leak hypothesis receives persistent attention in the media, often without acknowledgment of the more solid evidence supporting zoonotic emergence," the paper says...

I've written before about the smears, physical harassment and baseless accusations of fraud and other wrongdoing that lab leak propagandists have visited upon scientists whose work has challenged their claims; similar attacks have targeted experts who have worked to debunk other anti-science narratives, including those about global warming and vaccines... What's notable about the Virology paper is that it represents a comprehensive and long-overdue pushback by the scientific community against such behavior. More to the point, it focuses on the consequences for public health and the scientific mission from the rise of anti-science propaganda... "Scientists have withdrawn from social media platforms, rejected opportunities to speak in public, and taken increased safety measures to protect themselves and their families," the authors report. "Some have even diverted their work to less controversial and less timely topics. We now see a long-term risk of having fewer experts engaged in work that may help thwart future pandemics...."

Thanks in part to social media, anti-science has become more virulent and widespread, the Virology authors write.

United States

Can the US Regulate Algorithm-Based Price Fixing on Rental Housing? (investopedia.com) 119

"Some corporate landlords collude with each other to set artificially high rental prices, often using algorithms and price-fixing software to do it."

That's a U.S. presidential candidate, speaking yesterday in North Carolina to warn that the practice "is anticompetitive, and it drives up costs. I will fight for a law that cracks down on these practices."

Ironically, it's a problem caused by technology that's impacting some of America's major tech-industry cities. Investopedia reports: Harris proposed a slate of policies aimed at curbing the high cost of housing, which many economists have traced to a long-standing shortage. The affordability situation for both renters and first-time buyers took a turn for the worse starting in 2020 when home prices and rents rose sharply. Harris's plan called for the construction of 3 million new houses to close the gap between how many homes exist in the country, and how many are needed, with the aim of evening out supply and demand and putting downward pressure on prices. This would be accomplished by offering tax incentives to builders for constructing starter homes, by funding local construction, and by cutting bureaucratic red tape that slows down construction projects. Harris would also help buyers out directly, through the first-time buyer credit.

For renters, Harris said she would crack down on companies that own many apartments, which she said have "colluded" to raise rents using pricing algorithms. She also called for a law blocking large investors from buying houses to rent out, a practice she said was driving up prices by competing with individual private buyers. Harris's focus on corporate crackdowns extended to the food business, where she called for a "federal ban on price gouging on food and groceries," without going into specifics about what exact behavior the ban would target.
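To illustrate the kind of mechanism being alleged (a toy sketch only, with hypothetical numbers, not a description of how any real pricing product works), consider a tool that pools asking rents from competing landlords and recommends a price near the top of that pool:

    # Toy model of an algorithmic rent recommender (illustrative; all data hypothetical).
    def recommend_rent(pooled_rents, percentile=0.9):
        # Recommend an asking rent near the top of the pooled competitor data.
        ranked = sorted(pooled_rents)
        index = min(int(percentile * len(ranked)), len(ranked) - 1)
        return ranked[index]

    # Hypothetical monthly rents pooled from several competing landlords (dollars).
    pooled = [1450, 1500, 1520, 1560, 1600, 1610, 1650, 1700, 1720, 1800]
    print(recommend_rent(pooled))  # prints 1800: each subscriber is nudged toward the high end

If many landlords feed the same tool and follow its recommendations, asking prices can converge near the top of the market without any explicit agreement, which is the behavior the proposed law would target.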

Investopedia reminds readers that the executive branch is just one of three branches of the U.S. government: Should Harris win the 2024 election and become president, her ideas are still not guaranteed to be implemented, since many would require the support of Congress. Lawmakers are currently divided with Republicans controlling the House of Representatives and Democrats in control of the Senate.
Sci-Fi

An Insider's Perspective Into the Pentagon's UFO Hunt (nytimes.com) 123

In his new memoir, Imminent, former senior intelligence official Luis Elizondo claims that a supersecret program has been retrieving technology and biological remains of nonhuman origin for decades, warning that these phenomena could pose a serious national security threat or even an existential threat to humanity. The New York Times reports: Luis Elizondo made headlines in 2017 when he resigned as a senior intelligence official running a shadowy Pentagon program investigating U.F.O.s and publicly denounced the excessive secrecy, lack of resources and internal opposition that he said were thwarting the effort. Elizondo's disclosures at the time created a sensation. They were buttressed by explosive videos and testimony from Navy pilots who had encountered unexplained aerial phenomena, and led to congressional inquiries, legislation and a 2023 House hearing in which a former U.S. intelligence official testified that the federal government has retrieved crashed objects of nonhuman origin.

Now Elizondo, 52, has gone further in a new memoir. In the book he asserted that a decades-long U.F.O. crash retrieval program has been operating as a supersecret umbrella group made up of government officials working with defense and aerospace contractors. Over the years, he wrote, technology and biological remains of nonhuman origin have been retrieved from these crashes. "Humanity is, in fact, not the only intelligent life in the universe, and not the alpha species," Elizondo wrote. The book, "Imminent: Inside the Pentagon's Hunt for U.F.O.s," is being published by HarperCollins on Aug. 20 after a yearlong security review by the Pentagon.

Transportation

US Presses the 'Reset Button' On Technology That Lets Cars Talk To Each Other (npr.org) 95

An anonymous reader quotes a report from NPR: Safety advocates have been touting the potential of technology that allows vehicles to communicate wirelessly for years. So far, the rollout has been slow and uneven. Now the U.S. Department of Transportation is releasing a roadmap it hopes will speed up deployment of that technology -- and save thousands of lives in the process. "This is proven technology that works," Shailen Bhatt, head of the Federal Highway Administration, said at an event Friday to mark the release of the deployment plan (PDF) for vehicle-to-everything, or V2X, technology across U.S. roads and highways. V2X allows cars and trucks to exchange location information with each other, and potentially cyclists and pedestrians, as well as with the roadway infrastructure itself. Users could send and receive frequent messages to and from each other, continuously sharing information about speed, position, and road conditions -- even in situations with poor visibility, including around corners or in dense fog or heavy rain. [...]
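To make the idea concrete, here is a simplified sketch (in Python, purely illustrative) of the kind of message a V2X-equipped vehicle broadcasts; real deployments use standardized, signed formats such as SAE J2735 basic safety messages rather than JSON, and the field names below are hypothetical:

    # Simplified sketch of a V2X-style basic safety message (illustrative only;
    # real systems use standardized encodings and signed messages over dedicated radio).
    import json, time
    from dataclasses import dataclass, asdict

    @dataclass
    class BasicSafetyMessage:
        vehicle_id: str      # real deployments use temporary, rotating identifiers for privacy
        timestamp: float     # seconds since epoch
        latitude: float
        longitude: float
        speed_mps: float     # speed in meters per second
        heading_deg: float   # compass heading in degrees
        braking: bool        # whether the brakes are currently applied

    def broadcast(msg: BasicSafetyMessage) -> bytes:
        # Serialize the message; a real stack would sign it and transmit it over C-V2X/DSRC.
        return json.dumps(asdict(msg)).encode()

    msg = BasicSafetyMessage("veh-1234", time.time(), 38.8977, -77.0365, 13.4, 92.0, False)
    print(broadcast(msg))

Nearby vehicles and roadside units receiving a stream of such messages (typically several per second per vehicle) can warn a driver about, say, a hard-braking car ahead that is still hidden by fog or a corner, which is the visibility benefit the plan describes.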

Despite enthusiasm from safety advocates and federal regulators, the technology has faced a bumpy rollout. During the Obama administration, the National Highway Traffic Safety Administration proposed making the technology mandatory on cars and light trucks. But the agency later dropped that idea during the Trump administration. The deployment of V2X has been "hampered by regulatory uncertainty," said John Bozzella, president and CEO of the Alliance for Automotive Innovation, a trade group that represents automakers. But he's optimistic that the new plan will help. "This is the reset button," Bozzella said at Friday's announcement. "This deployment plan is a big deal. It is a crucial piece of this V2X puzzle." The plan lays out some goals and targets for the new technology. In the short-term, the plan aims to have V2X infrastructure in place on 20% of the National Highway System by 2028, and for 25% of the nation's largest metro areas to have V2X enabled at signalized intersections. V2X technology still faces some daunting questions, including how to pay for the rollout of critical infrastructure and how to protect connected vehicles from cyberattack. But safety advocates say it's past time to find the answers.

AI

California Weakens Bill To Prevent AI Disasters Before Final Vote (techcrunch.com) 36

An anonymous reader shares a report: California's bill to prevent AI disasters, SB 1047, has faced significant opposition from many parties in Silicon Valley. California lawmakers bent slightly to that pressure Thursday, adding in several amendments suggested by AI firm Anthropic and other opponents. On Thursday the bill passed through California's Appropriations Committee, a major step toward becoming law, with several key changes, Senator Wiener's office told TechCrunch.

[...] SB 1047 still aims to prevent large AI systems from killing lots of people, or causing cybersecurity events that cost over $500 million, by holding developers liable. However, the bill now grants California's government less power to hold AI labs to account. Most notably, the bill no longer allows California's attorney general to sue AI companies for negligent safety practices before a catastrophic event has occurred. This was a suggestion from Anthropic. Instead, California's attorney general can seek injunctive relief, requesting a company to cease a certain operation it finds dangerous, and can still sue an AI developer if its model does cause a catastrophic event.

China

China-Linked Hackers Could Be Behind Cyberattacks On Russian State Agencies, Researchers Say (therecord.media) 46

According to Kaspersky, hackers linked to Chinese threat actors have targeted Russian state agencies and tech companies in a campaign named EastWind. The Record reports: [T]he attackers used the GrewApacha remote access trojan (RAT), an unknown PlugY backdoor and an updated version of CloudSorcerer malware, which was previously used to spy on Russian organizations. The GrewApacha RAT has been used by the Beijing-linked hacking group APT31 since at least 2021, the researchers said, while PlugY shares many similarities with tools used by the suspected Chinese threat actor known as APT27.

According to Kaspersky, the hackers sent phishing emails containing malicious archives. In the first stage of the attack, they exploited a dynamic link library (DLL), commonly found in Windows computers, to collect information about the infected devices and load the additional malicious tools. While Kaspersky didn't explicitly attribute the recent attacks to APT31 or APT27, they highlighted links between the tools that were used. Although PlugY malware is still being analyzed, it is highly likely that it was developed using the DRBControl backdoor code, the researchers said. This backdoor was previously linked to APT27 and bears similarities to PlugX malware, another tool typically used by hackers based in China.

Earth

Climate Activists Stop Air Traffic After Breaking Into Four Airport Sites 94

Climate activists have broken into four German airport sites, briefly bringing air traffic to a halt at two of those before police made arrests. From a report: Protesters from Letzte Generation -- Germany's equivalent to Just Stop Oil -- gained access on Thursday to airfields in areas near the takeoff and landing strips of Cologne-Bonn, Nuremberg, Berlin Brandenburg and Stuttgart airports at dawn. Air traffic was suspended for a short time at Nuremberg and Cologne-Bonn due to police operations. The activists cut holes in fences with bolt cutters, glued themselves to the asphalt and unfurled banners reading "Oil kills" and "Sign the treaty," in reference to Letzte Generation's demand that the German government negotiate and sign an agreement for an international ban on the use of oil, gas and coal by 2030.

The action was reminiscent of similar protests this summer and followed raids carried out a week ago on the homes of climate activists in five German cities, at which police collected DNA samples, in what Letzte Generation called "an attempt at intimidation." The interior minister, Nancy Faeser, condemned the protest and called for anyone convicted of involvement in Thursday's action to be given prison sentences. She wrote: "These criminal actions are dangerous and stupid. These anarchists are risking not only their own lives, but are also endangering others. We have recommended tough prison sentences. And we obligate airports to secure their facilities significantly better."
Government

FTC Finalizes Rule Banning Fake Reviews, Including Those Made With AI (techcrunch.com) 35

TechCrunch's Lauren Forristal reports: The U.S. Federal Trade Commission (FTC) announced on Wednesday a final rule that will tackle several types of fake reviews and prohibit marketers from using deceptive practices, such as AI-generated reviews, censoring honest negative reviews and compensating third parties for positive reviews. The decision was the result of a 5-to-0 vote. The new rule will start being enforced 60 days after it's published in the official government publication called Federal Register. [...]

According to the final rule, the maximum civil penalty for fake reviews is $51,744 per violation. However, the courts could impose lower penalties depending on the specific case. "Ultimately, courts will also decide how to calculate the number of violations in a given case," the Commission wrote. [...] The FTC initially proposed the rule on June 30, 2023, following an advance notice of proposed rulemaking issued in November 2022. You can read the finalized rule here (PDF), but we also included a summary of it below:

- No fake or disingenuous reviews. This includes AI-generated reviews and reviews from anyone who doesn't have experience with the actual product.
- Businesses can't sell or buy reviews, whether negative or positive.
- Company insiders writing reviews need to clearly disclose their connection to the business. Officers or managers are prohibited from giving testimonials and can't ask employees to solicit reviews from relatives.
- Company-controlled review websites that claim to be independent aren't allowed.
- No using legal threats, physical threats or intimidation to forcefully delete or prevent negative reviews. Businesses also can't misrepresent that the review portion of their website comprises all or most of the reviews when it's suppressing the negative ones.
- No selling or buying fake engagement like social media followers, likes or views obtained through bots or hacked accounts.

Apple

Apple To Open Payment Chip To Third Parties and Charge Fees (financialpost.com) 37

Apple will begin letting third parties use the iPhone's payment chip to handle transactions, a move that allows banks and other services to compete with the Apple Pay platform. From a report: The move, announced Wednesday, follows years of pressure from regulators, including those in the European Union. Apple said it will allow developers to use the component starting in iOS 18.1, an upcoming software update for the iPhone. The payment chip relies on a technology called NFC, or near-field communication, to share information when the phone is near another device.

The change will allow outside providers to use the NFC chip for in-store payments, transit system fares, work badges, home and hotel keys, and reward cards. Support for government identification cards will come later, the company said. Users will also be able to set a third-party payment app as their default system, replacing Apple Pay. Apple had been reluctant to open up the chip to developers, citing security concerns. The change also threatens the revenue it generates from Apple Pay transactions. The company takes a cut of all payments made via the iPhone.

Google

US Considers a Rare Antitrust Move: Breaking Up Google (bloomberg.com) 87

A rare bid to break up Alphabet's Google is one of the options being considered by the Justice Department after a landmark court ruling found that the company monopolized the online search market, Bloomberg News reported Tuesday, citing sources familiar with the matter. From the report: The move would be Washington's first push to dismantle a company for illegal monopolization since unsuccessful efforts to break up Microsoft two decades ago.

Less severe options include forcing Google to share more data with competitors and measures to prevent it from gaining an unfair advantage in AI products, said the people, who asked not to be identified discussing private conversations. Regardless, the government will likely seek a ban on the type of exclusive contracts that were at the center of its case against Google. If the Justice Department pushes ahead with a breakup plan, the most likely units for divestment are the Android operating system and Google's web browser Chrome, said the people. Officials are also looking at trying to force a possible sale of AdWords, the platform the company uses to sell text advertising, one of the people said.

Android

Google Wallet Widely Rolling Out 'Everything Else' Pass Creator In the US (9to5google.com) 18

Google is rolling out a new feature for Google Wallet that uses AI to generate a digital version of IDs, tickets, and other passes. "Replacing the old 'Photo' option, Everything else lets you 'Scan a photo of any pass like an event ticket, gym membership, insurance card, and more' to create a digital version that appears in Google Wallet," writes 9to5Google's Abner Li. "The app explains how AI is leveraged to 'determine what kind of pass you're adding and to suggest the content of the pass.'" From the report: If you're adding something sensitive with health or government ID information, it will be classified as private and not get synced to other devices, while authentication is required before opening. However, you can change the private pass classification later. After taking a picture of the pass, Google will extract the information and let you edit common fields, as well as add your own. At this stage, you can change the pass type [...]. When finalized, it will appear below your carousel of credit/debit cards. Google will let you access the original "Pass photos" when viewing the digital copy.
