Microsoft

Linux 6.14 Adds Support For The Microsoft Copilot Key Found On New Laptops (phoronix.com) 35

The Linux 6.14 kernel now maps out support for Microsoft's "Copilot" key "so that user-space software can determine the behavior for handling that key's action on the Linux desktop," writes Phoronix's Michael Larabel. From the report: A change made to the atkbd keyboard driver on Linux now maps the F23 key to support the default Copilot shortcut action. The patch authored by Lenovo engineer Mark Pearson explains [...]. Now it's up to the Linux desktop environments to determine what to do when the new Copilot key is pressed. The patch was part of the input updates now merged for the Linux 6.14 kernel.
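Since the kernel simply surfaces the key as F23, any user-space program can watch for it. A minimal sketch using the python-evdev library, with a hypothetical device path (a real desktop would discover the keyboard via udev and wire the key into its own shortcut system):

```python
# Sketch: react to the Copilot key (surfaced by the kernel as KEY_F23).
# /dev/input/event3 is a placeholder; find your keyboard with `evtest`
# or by iterating over evdev.list_devices().
from evdev import InputDevice, ecodes

dev = InputDevice("/dev/input/event3")  # hypothetical device path

for event in dev.read_loop():
    # value 1 = key press (0 is release, 2 is auto-repeat)
    if event.type == ecodes.EV_KEY and event.code == ecodes.KEY_F23 and event.value == 1:
        print("Copilot key pressed -- the desktop decides what happens next")
```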
AI

OpenAI Unveils AI Agent To Automate Web Browsing Tasks (openai.com) 41

The rumors are true: OpenAI today launched Operator, an AI agent capable of performing web-based tasks through its own browser, as a research preview for U.S. subscribers of its $200 monthly ChatGPT Pro tier. The agent uses GPT-4o's vision capabilities and reinforcement learning to interact with websites through mouse and keyboard actions without requiring API integration, OpenAI said in a blog post.

Operator can self-correct and defer to users for sensitive information, though there are some limitations with complex interfaces. OpenAI said it's partnering with DoorDash, Instacart, OpenTable and others to develop real-world applications, with plans to expand access to Plus, Team and Enterprise users.
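OpenAI hasn't published Operator's internals, but the screenshot-in, mouse-and-keyboard-out loop it describes can be sketched in a few lines. Here propose_action() is entirely hypothetical and stands in for a vision-model call; the input side uses the real pyautogui library:

```python
# Illustrative sketch of a vision-driven browser-agent loop.
# propose_action() is a hypothetical stand-in for a model that looks at
# a screenshot and returns the next GUI action.
import pyautogui

def propose_action(screenshot):
    """Hypothetical vision-model call. Imagined return values:
    {"kind": "click", "x": 640, "y": 360}, {"kind": "type", "text": "pizza"},
    or {"kind": "done"}."""
    raise NotImplementedError

done = False
while not done:
    shot = pyautogui.screenshot()        # capture what the agent "sees"
    action = propose_action(shot)
    if action["kind"] == "click":
        pyautogui.click(action["x"], action["y"])
    elif action["kind"] == "type":
        pyautogui.write(action["text"])  # keyboard input; no site API involved
    else:
        done = True
```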

AI

OpenAI's Stargate Deal Heralds Shift Away From Microsoft 38

Microsoft's absence from OpenAI's Stargate announcement follows months of tension between the companies and signals a new era in which the longtime partners will be less reliant on each other. From a report: At a White House press conference, the ChatGPT maker announced Stargate, a venture with Oracle and tech investor SoftBank. The new company plans to spend up to $500 billion building new data centers in the U.S. to help power OpenAI's development.

The assembled leaders -- OpenAI's Sam Altman, Oracle's Larry Ellison, SoftBank's Masayoshi Son and President Trump -- discussed how AI could create jobs and even cure cancer. Microsoft CEO Satya Nadella was thousands of miles away, at the World Economic Forum in Davos, Switzerland. The developments show how the OpenAI-Microsoft partnership that helped trigger the generative-AI boom is drifting apart as each company focuses on its own evolving needs.

In the months leading up to the announcement, the two sides had been haggling over what to do about OpenAI's seemingly insatiable appetite for computing power and its contention that Microsoft couldn't fulfill it, even though their agreement didn't allow OpenAI to easily switch to others, said people familiar with the discussions. OpenAI is almost entirely reliant on Microsoft to provide it with the data centers it needs to build and operate its sophisticated AI software. That has been part of their agreement since Microsoft first invested in 2019. With the success of ChatGPT, OpenAI's need for computing power surged. Its executives have said ending the exclusive cloud contract could be crucial to compete with rival AI developers that don't have the same constraints.
AI

macOS Sequoia 15.3 and iOS 18.3 Enable Apple Intelligence Automatically 55

Apple's upcoming updates -- macOS Sequoia 15.3, iOS 18.3, and iPadOS 18.3 -- will enable Apple Intelligence by default on compatible devices, requiring users to manually disable it if undesired. From Apple's developer release notes: "For users new or upgrading to iOS 18.3, Apple Intelligence will be enabled automatically during iPhone onboarding. Users will have access to Apple Intelligence features after setting up their devices. To disable Apple Intelligence, users will need to navigate to the Apple Intelligence & Siri Settings pane and turn off the Apple Intelligence toggle. This will disable Apple Intelligence features on their device." MacRumors reports: With macOS Sequoia 15.1, macOS Sequoia 15.2, iOS 18.1, and iOS 18.2, Apple Intelligence was opt-in rather than opt-out, and users who wanted the feature needed to turn it on in the Settings app. Going forward, it will be enabled by default, and Mac, iPhone, and iPad users who do not want to use the feature will need to turn it off. The report notes that macOS Sequoia 15.3 introduces Genmoji, allowing Mac users to create custom emoji characters, and enhances Notification summaries with clearer indicators for AI-generated information.

Public releases of this and other software updates are expected next week, following today's release candidate versions.
Games

EA's Origin App For PC Gaming Will Shut Down In April 17

EA's Origin PC client will be shut down on April 17, 2025, as Microsoft ends support for 32-bit software. "Anyone still using Origin will need to swap over to the EA app before that date," adds Engadget. From the report: For those PC players who have not migrated over to the EA app, the company has an FAQ explaining the latest system requirements. The EA app runs on 64-bit architecture, and requires a machine using Windows 10 or Windows 11. [...] If you're simply downloading the EA app on a current machine, you won't need to re-download your games. And if you have cloud saves enabled, all of your data should transfer without any additional steps.

However, it's always a good idea to have physical backups with this type of transition, especially since not all games support cloud saves, and those titles will need to have saved game data manually transferred. Mods also may not automatically make the switch, and EA recommends players check with mod creators about transferring to the EA app.
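For the cautious, the backup itself is a few lines per save directory. A minimal sketch, with a hypothetical save location (actual paths vary by game):

```python
# Sketch: copy a game's save directory somewhere safe before migrating.
# The source path is a placeholder; check each game's documentation.
import shutil
from pathlib import Path

saves = Path.home() / "Documents" / "SomeEAGame" / "saves"  # hypothetical path
backup = Path.home() / "origin-save-backups" / "SomeEAGame"

if saves.exists():
    shutil.copytree(saves, backup, dirs_exist_ok=True)
    print(f"Backed up {saves} -> {backup}")
else:
    print(f"No saves found at {saves}")
```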
AI

AI Boom Gives Rise To 'GPU-as-a-Service' 35

An anonymous reader quotes a report from IEEE Spectrum: The surge of interest in AI is creating a massive demand for computing power. Around the world, companies are trying to keep up with the vast number of GPUs needed to power more and more advanced AI models. While GPUs are not the only option for running an AI model, they have become the hardware of choice due to their ability to efficiently handle multiple operations simultaneously -- a critical feature when developing deep learning models. But not every AI startup has the capital to invest in the huge numbers of GPUs now required to run a cutting-edge model. For some, it's a better deal to outsource it. This has led to the rise of a new business: GPU-as-a-Service (GPUaaS). In recent years, companies like Hyperbolic, Kinesis, Runpod, and Vast.ai have sprouted up to remotely offer their clients the needed processing power.

[...] Studies have shown that more than half of the existing GPUs are not in use at any given time. Whether we're talking personal computers or colossal server farms, a lot of processing capacity is under-utilized. What Kinesis does is identify idle compute -- both for GPUs and CPUs -- in servers worldwide and compile them into a single computing source for companies to use. Kinesis partners with universities, data centers, companies, and individuals who are willing to sell their unused computing power. Through special software installed on their servers, Kinesis detects idle processing units, preps them, and offers them to its clients for temporary use. [...] The biggest advantage of GPUaaS is economic. By removing the need to purchase and maintain the physical infrastructure, it allows companies to avoid investing in servers and IT management, and to instead put their resources toward improving their own deep learning, large language, and large vision models. It also lets customers pay for the exact number of GPUs they use, saving the costs of the inevitable idle compute that would come with their own servers.
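The detection half of that idea is easy to approximate on a single NVIDIA machine. A rough sketch that shells out to the real nvidia-smi CLI; the 10 percent threshold and everything beyond detection (pooling, leasing) are assumptions:

```python
# Sketch: find GPUs that are mostly idle and could be offered to a pool.
# Uses nvidia-smi's query mode; the threshold choice is arbitrary.
import subprocess

def idle_gpus(threshold_pct=10):
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=index,utilization.gpu",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    idle = []
    for line in out.strip().splitlines():
        index, util = (field.strip() for field in line.split(","))
        if int(util) < threshold_pct:  # under-utilized -> candidate for leasing
            idle.append(int(index))
    return idle

print("Idle GPUs:", idle_gpus())
```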
The report notes that GPUaaS is growing in profitability. "In 2023, the industry's market size was valued at US $3.23 billion; in 2024, it grew to $4.31 billion," reports IEEE. "It's expected to rise to $49.84 billion by 2032."
Security

Employees of Failed Startups Are at Special Risk of Stolen Personal Data Through Old Google Logins (techcrunch.com) 7

Hackers could steal sensitive personal data from former startup employees by exploiting abandoned company domains and Google login systems, security researcher Dylan Ayrey revealed at ShmooCon conference. The vulnerability particularly affects startups that relied on "Sign in with Google" features for their business software.

Ayrey, CEO of Truffle Security, demonstrated the flaw by purchasing one failed startup's domain and accessing ChatGPT, Slack, Notion, Zoom and an HR system containing Social Security numbers. His research found 116,000 website domains from failed tech startups currently available for sale. While Google offers preventive measures through its OAuth "sub-identifier" system, some providers avoid it due to reliability concerns -- which Google disputes. The company initially dismissed Ayrey's finding as a fraud issue before reversing course and awarding him a $1,337 bounty. Google has since updated its documentation but hasn't implemented a technical fix, TechCrunch reports.
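The mitigation Google points to boils down to one rule: key accounts on the OAuth "sub" claim, which is stable per user, rather than on an email address or domain that a new domain owner can re-mint. A sketch using the real google-auth library (CLIENT_ID and the account store are placeholders):

```python
# Sketch: resolve a "Sign in with Google" login by the stable "sub" claim.
# An attacker who buys a dead startup's domain can recreate matching email
# addresses, but cannot reproduce the original users' "sub" values.
from google.oauth2 import id_token
from google.auth.transport import requests

CLIENT_ID = "your-app.apps.googleusercontent.com"  # placeholder

def resolve_account(token, accounts_by_sub):
    claims = id_token.verify_oauth2_token(token, requests.Request(), CLIENT_ID)
    # Do NOT key on claims["email"] or the domain portion of it.
    return accounts_by_sub.get(claims["sub"])
```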
United States

The Pentagon Says AI is Speeding Up Its 'Kill Chain' 34

An anonymous reader shares a report: Leading AI developers, such as OpenAI and Anthropic, are threading a delicate needle to sell software to the United States military: make the Pentagon more efficient, without letting their AI kill people. Today, their tools are not being used as weapons, but AI is giving the Department of Defense a "significant advantage" in identifying, tracking, and assessing threats, the Pentagon's Chief Digital and AI Officer, Dr. Radha Plumb, told TechCrunch in a phone interview.

"We obviously are increasing the ways in which we can speed up the execution of kill chain so that our commanders can respond in the right time to protect our forces," said Plumb. The "kill chain" refers to the military's process of identifying, tracking, and eliminating threats, involving a complex system of sensors, platforms, and weapons. Generative AI is proving helpful during the planning and strategizing phases of the kill chain, according to Plumb. The relationship between the Pentagon and AI developers is a relatively new one. OpenAI, Anthropic, and Meta walked back their usage policies in 2024 to let U.S. intelligence and defense agencies use their AI systems. However, they still don't allow their AI to harm humans. "We've been really clear on what we will and won't use their technologies for," Plumb said, when asked how the Pentagon works with AI model providers.
Linux

Linux 6.13 Released (phoronix.com) 25

"Nothing horrible or unexpected happened last week," Linux Torvalds posted tonight on the Linux kernel mailing list, "so I've tagged and pushed out the final 6.13 release."

Phoronix says the release has "plenty of fine features": Linux 6.13 comes with the introduction of the AMD 3D V-Cache Optimizer driver for benefiting multi-CCD Ryzen X3D processors. The new AMD EPYC 9005 "Turin" server processors will now default to AMD P-State rather than ACPI CPUFreq for better power efficiency....

Linux 6.13 also brings more Rust programming language infrastructure and more.

Phoronix notes that Linux 6.13 also brings "the start of Intel Xe3 graphics bring-up, support for many older (pre-M1) Apple devices like numerous iPads and iPhones, NVMe 2.1 specification support, and AutoFDO and Propeller optimization support when compiling the Linux kernel with the LLVM Clang compiler."

And some lucky Linux kernel developers will also be getting a guitar pedal soldered by Linus Torvalds himself, thanks to a generous offer he announced a week ago: For _me_ a traditional holiday activity tends to be a LEGO build or two, since that's often part of the presents... But in addition to the LEGO builds, this year I also ended up doing a number of guitar pedal kit builds ("LEGO for grown-ups with a soldering iron"). Not because I play guitar, but because I enjoy the tinkering, and the guitar pedals actually do something and are the right kind of "not very complex, but not some 5-minute 555 LED blinking thing"...

[S]ince I don't actually have any _use_ for the resulting pedals (I've already foisted off a few on unsuspecting victims^Hfriends), I decided that I'm going to see if some hapless kernel developer would want one.... as an admittedly pretty weak excuse to keep buying and building kits...

"It may be worth noting that while I've had good success so far, I'm a software person with a soldering iron. You have been warned... [Y]ou should set your expectations along the lines of 'quality kit built by a SW person who doesn't know one end of a guitar from the other.'"
Google

Google Upgrades Open Source Vulnerability Scanning Tool with SCA Scanning Library (googleblog.com) 2

In 2022 Google released a tool to easily scan for vulnerabilities in dependencies named OSV-Scanner. "Together with the open source community, we've continued to build this tool, adding remediation features," according to Google's security blog, "as well as expanding ecosystem support to 11 programming languages and 20 package manager formats... Users looking for an out-of-the-box vulnerability scanning CLI tool should check out OSV-Scanner, which already provides comprehensive language package scanning capabilities..."

Thursday they also announced an extensible library for "software composition analysis" scanning (as well as file-system scanning) named OSV-SCALIBR (Open Source Vulnerability — Software Composition Analysis LIBRary). The new library "combines Google's internal vulnerability management expertise into one scanning library with significant new capabilities such as:
  • Software composition analysis for installed packages, standalone binaries, as well as source code
  • OS package scanning on Linux (COS, Debian, Ubuntu, RHEL, and much more), Windows, and Mac
  • Artifact and lockfile scanning in major language ecosystems (Go, Java, Javascript, Python, Ruby, and much more)
  • Vulnerability scanning tools such as weak credential detectors for Linux, Windows, and Mac
  • Software Bill of Materials (SBOM) generation in SPDX and CycloneDX, the two most popular document formats
  • Optimization for on-host scanning of resource constrained environments where performance and low resource consumption are critical

"OSV-SCALIBR is now the primary software composition analysis engine used within Google for live hosts, code repos, and containers. It's been used and tested extensively across many different products and internal tools to help generate SBOMs, find vulnerabilities, and help protect our users' data at Google scale. We offer OSV-SCALIBR primarily as an open source Go library today, and we're working on adding its new capabilities into OSV-Scanner as the primary CLI interface."


Printer

Proposed New York Law Could Require Background Checks Before Buying 3D Printers (news10.com) 225

New York's state legislature is considering a new law, reports a local news outlet, which "if passed, will require anyone buying a 3D printer to pass a background check. If you can't legally own a firearm, you won't be able to buy one of these printers..." It is illegal to print most gun parts in New York. Attorney Greg Rinckey believes the proposal is an overreach. "I think this is also gonna face some constitutional problems. I mean, it really comes down to a legal parsing of what are you printing and at what point is it technically a firearm...?"

[Ascent Fabrication owner Joe] Fairley thinks lawmakers should shift their focus to the partial gun kits that produce the metal firing components. Another possibility is to require printer manufacturers to install software that prevents gun parts from being printed. "They would need to agree on some algorithm to look at the part and say nope, that is a gun component, you're not allowed to print that part somehow," said Fairley. "But I feel like it would be extremely difficult to get to that point."

AI

Arrested by AI: When Police Ignored Standards After AI Facial-Recognition Matches (msn.com) 55

A county transit police detective fed a poor-quality image to an AI-powered facial recognition program, reports the Washington Post, leading to the arrest of "Christopher Gatlin, a 29-year-old father of four who had no apparent ties to the crime scene nor a history of violent offenses." He was unable to post the required $75,000 cash bond, and "jailed for a crime he says he didn't commit, it would take Gatlin more than two years to clear his name." A Washington Post investigation into police use of facial recognition software found that law enforcement agencies across the nation are using the artificial intelligence tools in a way they were never intended to be used: as a shortcut to finding and arresting suspects without other evidence... The Post reviewed documents from 23 police departments where detailed records about facial recognition use are available and found that 15 departments spanning 12 states arrested suspects identified through AI matches without any independent evidence connecting them to the crime — in most cases contradicting their own internal policies requiring officers to corroborate all leads found through AI. Some law enforcement officers using the technology appeared to abandon traditional policing standards and treat software suggestions as facts, The Post found. One police report referred to an uncorroborated AI result as a "100% match." Another said police used the software to "immediately and unquestionably" identify a suspected thief.

Gatlin is one of at least eight people wrongfully arrested in the United States after being identified through facial recognition... All of the cases were eventually dismissed. Police probably could have eliminated most of the people as suspects before their arrest through basic police work, such as checking alibis, comparing tattoos, or, in one case, following DNA and fingerprint evidence left at the scene.

Some statistics from the article about the eight wrongfully-arrested people:
  • In six cases police failed to check alibis
  • In two cases police ignored evidence that contradicted their theory
  • In five cases police failed to collect key pieces of evidence
  • In three cases police ignored suspects' physical characteristics
  • In six cases police relied on problematic witness statements

The article provides two examples of police departments forced to pay $300,000 settlements after wrongful arrests caused by AI mismatches. But "In interviews with The Post, all eight people known to have been wrongly arrested said the experience had left permanent scars: lost jobs, damaged relationships, missed payments on car and home loans. Some said they had to send their children to counseling to work through the trauma of watching their mother or father get arrested on the front lawn.

"Most said they also developed a fear of police."


AI

World's First AI Chatbot, ELIZA, Resurrected After 60 Years (livescience.com) 37

"Scientists have just resurrected 'ELIZA,' the world's first chatbot, from long-lost computer code," reports LiveScience, "and it still works extremely well." (Click in the vintage black-and-green rectangle for a blinking-cursor prompt...) Using dusty printouts from MIT archives, these "software archaeologists" discovered defunct code that had been lost for 60 years and brought it back to life. ELIZA was developed in the 1960s by MIT professor Joseph Weizenbaum and named for Eliza Doolittle, the protagonist of the play "Pygmalion," who was taught how to speak like an aristocratic British woman.

As a language model that the user could interact with, ELIZA had a significant impact on today's artificial intelligence (AI), the researchers wrote in a paper posted to the preprint database arXiv Sunday (Jan. 12). The "DOCTOR" script written for ELIZA was programmed to respond to questions as a psychotherapist would. For example, ELIZA would say, "Please tell me your problem." If the user input "Men are all alike," the program would respond, "In what way."
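That keyword-and-template mechanism is simple enough to caricature in a few lines. This toy (not the recovered MAD-SLIP code) reproduces the exchange quoted above:

```python
# Toy ELIZA: scan for a keyword pattern, answer from a canned template.
# Rules and wording here are illustrative, not Weizenbaum's originals.
import re

RULES = [
    (re.compile(r"\ball alike\b", re.I), "In what way"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
]

def eliza_reply(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Please tell me your problem."  # default opener

print(eliza_reply("Men are all alike."))  # -> In what way
```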

Weizenbaum wrote ELIZA in a now-defunct programming language he invented, called Michigan Algorithm Decoder Symmetric List Processor (MAD-SLIP), but it was almost immediately copied into the language Lisp. With the advent of the early internet, the Lisp version of ELIZA went viral, and the original version became obsolete. Experts thought the original 420-line ELIZA code was lost until 2021, when study co-author Jeff Shrager, a cognitive scientist at Stanford University, and Myles Crowley, an MIT archivist, found it among Weizenbaum's papers. "I have a particular interest in how early AI pioneers thought," Shrager told Live Science in an email. "Having computer scientists' code is as close as we can get to having a record of their thoughts, and as ELIZA was — and remains, for better or for worse — a touchstone of early AI, I want to know what was in his mind...."

Even though it was intended to be a research platform for human-computer communication, "ELIZA was such a novelty at the time that its 'chatbotness' overwhelmed its research purposes," Shrager said.

I just remember that time 23 years ago when someone connected a Perl version of ELIZA to "an AOL Instant Messenger account that has a high rate of 'random' people trying to start conversations" to "put ELIZA in touch with the real world..."

Thanks to long-time Slashdot reader MattSparkes for sharing the news.
AI

Google Reports Halving Code Migration Time With AI Help 12

Google computer scientists have been using LLMs to streamline internal code migrations, achieving significant time savings of up to 89% in some cases. The findings appear in a pre-print paper titled "How is Google using AI for internal code migrations?" The Register reports: Their focus is on bespoke AI tools developed for specific product areas, such as Ads, Search, Workspace and YouTube, instead of generic AI tools that provide broadly applicable services like code completion, code review, and question answering. Google's code migrations involved: changing 32-bit IDs in the 500-plus-million-line codebase for Google Ads to 64-bit IDs; converting its old JUnit3 testing library to JUnit4; and replacing the Joda time library with Java's standard java.time package. The int32 to int64 migration, the Googlers explain, was not trivial as the IDs were often generically defined (int32_t in C++ or Integer in Java) and were not easily searchable. They existed in tens of thousands of code locations across thousands of files. Changes had to be tracked across multiple teams and changes to class interfaces had to be considered across multiple files. "The full effort, if done manually, was expected to require hundreds of software engineering years and complex cross-team coordination," the authors explain.

For their LLM-based workflow, Google's software engineers implemented the following process. An engineer from Ads would identify an ID in need of migration using a combination of code search, Kythe, and custom scripts. Then an LLM-based migration toolkit, triggered by someone knowledgeable in the art, was run to generate verified changes containing code that passed unit tests. Those changes would be manually checked by the same engineer and potentially corrected. Thereafter, the code changes would be sent to multiple reviewers who are responsible for the portion of the codebase affected by the changes. The result was that 80 percent of the code modifications in the change lists (CLs) were purely the product of AI; the remainder were either human-authored or human-edited AI suggestions.
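Stripped of Google's internal tooling, the loop is: propose a change, let the unit tests accept or reject it, and queue survivors for human review. A heavily simplified sketch; llm_rewrite() is hypothetical, and pytest stands in for whatever test runner a project actually uses:

```python
# Sketch of a generate -> verify -> review migration loop.
# llm_rewrite() is a hypothetical call to an LLM migration toolkit.
import subprocess

def llm_rewrite(path):
    """Hypothetical: return the file's contents with int32 IDs migrated
    to int64, as proposed by an LLM."""
    raise NotImplementedError

def migrate(paths):
    for_review = []
    for path in paths:
        original = open(path).read()
        open(path, "w").write(llm_rewrite(path))
        if subprocess.run(["pytest", "-q"]).returncode == 0:
            for_review.append(path)          # verified change; humans still check it
        else:
            open(path, "w").write(original)  # tests failed: revert the proposal
    return for_review
```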

"We discovered that in most cases, the human needed to revert at least some changes the model made that were either incorrect or not necessary," the authors observe. "Given the complexity and sensitive nature of the modified code, effort has to be spent in carefully rolling out each change to users." Based on this, Google undertook further work on LLM-driven verification to reduce the need for detailed review. Even with the need to double-check the LLM's work, the authors estimate that the time required to complete the migration was reduced by 50 percent. With LLM assistance, it took just three months to migrate 5,359 files and modify 149,000 lines of code to complete the JUnit3-JUnit4 transition. Approximately 87 percent of the code generated by AI ended up being committed with no changes. As for the Joda-Java time framework switch, the authors estimate a time saving of 89 percent compared to the projected manual change time, though no specifics were provided to support that assertion.
AI

AI Tools Crack Down on Wall Street Trader Code Speak (msn.com) 21

Compliance software firms are deploying AI to decode complex trader communications and detect potential financial crimes as Wall Street and London regulators intensify scrutiny of market manipulation.

Companies like Behavox and Global Relay are developing AI tools that can interpret trader slang, emoji-laden messages and even coded language that traditional detection systems might miss, WSJ reports. The technology aims to replace older methods that relied on scanning for specific trigger words, which traders could easily evade. The story adds: Traders believed that "if somebody wanted to say something sketchy, they would just make up a funny word or, you know, spell it backward or something," [Donald] McElligott (VP of Global Relay) said. "Now, none of that's going to work anymore."
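The evasion being described is easy to demonstrate. A toy contrast between a fixed trigger list and one trivial countermeasure (un-reversing tokens); the semantic models the article describes go far beyond this, so treat it as illustration only:

```python
# Toy example: why fixed trigger words are easy to evade.
import string

TRIGGERS = {"guarantee", "insider"}

def tokens(message):
    return [w.strip(string.punctuation) for w in message.lower().split()]

def naive_scan(message):
    return any(w in TRIGGERS for w in tokens(message))

def reversal_aware_scan(message):
    # Also catch words "spelled backward," per the quote above.
    return any(w in TRIGGERS or w[::-1] in TRIGGERS for w in tokens(message))

msg = "this trade is a eetnaraug, keep it quiet"
print(naive_scan(msg))           # False -- the coded word slips through
print(reversal_aware_scan(msg))  # True
```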
Transportation

Toyota Unit Hino Motors Reaches $1.6 Billion US Diesel Emissions Settlement (msn.com) 8

An anonymous reader quotes a report from Reuters: Toyota Motor unit Hino Motors has agreed a $1.6 billion settlement with U.S. agencies and will plead guilty over excess diesel engine emissions in more than 105,000 U.S. vehicles, the company and U.S. government said on Wednesday. The Japanese truck and engine manufacturer was charged with fraud in U.S. District Court in Detroit for unlawfully selling 105,000 heavy-duty diesel engines in the United States from 2010 through 2022 that did not meet emissions standards. The settlement, which still must be approved by a U.S. judge, includes a criminal penalty of $521.76 million, $442.5 million in civil penalties to U.S. authorities and $236.5 million to California.

A company-commissioned panel said in a 2022 report that Hino had falsified emissions data on some engines going back to at least 2003. Hino agreed to plead guilty to engaging in a multi-year criminal conspiracy and serve a five-year term of probation, during which it will be barred from importing any diesel engines it has manufactured into the U.S., and carry out a comprehensive compliance and ethics program, the Justice Department and Environmental Protection Agency said. [...] The settlement includes a mitigation program, valued at $155 million, to offset excess air emissions from the violations by replacing marine and locomotive engines, and a recall program, valued at $144.2 million, to fix engines in 2017-2019 heavy-duty trucks.

The EPA said Hino admitted that between 2010 and 2019, it submitted false applications for engine certification approvals and altered emission test data, conducted tests improperly and fabricated data without conducting any underlying tests. Hino President Satoshi Ogiso said the company had improved its internal culture, oversight and compliance practices. "This resolution is a significant milestone toward resolving legacy issues that we have worked hard to ensure are no longer a part of Hino's operations or culture," he said in a statement.
Toyota's Hino Motors isn't the only automaker to admit to selling vehicles with excess diesel emissions. Volkswagen had to pay billions in fines after it admitted in 2015 to cheating emissions tests by installing "defeat devices" and sophisticated software in nearly 11 million vehicles worldwide. Daimler (Mercedes-Benz), BMW, Opel/Vauxhall (General Motors), and Fiat Chrysler have been implicated in similar practices.
AI

Apple Pulls AI-Generated Notifications For News After Generating Fake Headlines 20

An anonymous reader quotes a report from CNN: Apple is temporarily pulling its newly introduced artificial intelligence feature that summarizes news notifications after it repeatedly sent users error-filled headlines, sparking backlash from a news organization and press freedom groups. The rare reversal from the iPhone maker on its heavily marketed Apple Intelligence feature comes after the technology produced misleading or altogether false summaries of news headlines that appear almost identical to regular push notifications.

On Thursday, Apple deployed a beta software update to developers that disabled the AI feature for news and entertainment headlines, which it plans to later roll out to all users while it works to improve the AI feature. The company plans to re-enable the feature in a future update. As part of the update, the company said the Apple Intelligence summaries, which users must opt into, will more explicitly emphasize that the information has been produced by AI, signaling that it may sometimes produce inaccurate results.
AI

AI Slashes Google's Code Migration Time By Half (theregister.com) 74

Google has cut code migration time in half by deploying AI tools to assist with large-scale software updates, according to a new research paper from the company's engineers. The tech giant used large language models to help convert 32-bit IDs to 64-bit across its 500-million-line codebase, upgrade testing libraries, and replace time-handling frameworks. While 80% of code changes were AI-generated, human engineers still needed to verify and sometimes correct the AI's output. In one project, the system helped migrate 5,359 files and modify 149,000 lines of code in three months.
Microsoft

Microsoft Patches Windows To Eliminate Secure Boot Bypass Threat (arstechnica.com) 39

Microsoft has patched a Windows vulnerability that allowed attackers to bypass Secure Boot, a critical defense against firmware infections, the company said. The flaw, tracked as CVE-2024-7344, affected Windows devices for at least seven months. Security researcher Martin Smolar discovered the vulnerability in a signed UEFI application within system recovery software from seven vendors, including Howyar.

The application, reloader.efi, circumvented standard security checks through a custom PE loader. Attackers with administrative privileges could exploit the vulnerability to install malicious firmware that persists even after disk reformatting. Microsoft revoked the application's digital signature, though the vulnerability's impact on Linux systems remains unclear.
