Programming

Should We Sing the Praises of Agile, or Bury It? (acm.org) 235

"Stakeholders must be included" throughout an agile project "to ensure the evolving deliverables meet their expectations," according to an article this week in Communications of the ACM.

But long-time Slashdot reader theodp complains it's a "gushing how-to-make-Agile-even-better opinion piece." Like other pieces by Agile advocates, it's long on accolades for Agile, but short on hard evidence justifying why exactly Agile project management "has emerged as a critical component for firms looking to improve project delivery speed and flexibility" and the use of Agile approaches is being expanded across other departments beyond software development. Indeed, among the three examples of success offered in the piece to "highlight the effectiveness of agile methods in navigating complex stakeholder dynamics and achieving project success" is Atlassian's use of agile practices to market and develop its products, many of which are coincidentally designed to support Agile practices and teams (including Jira). How meta.

Citing "recent studies," the piece concludes its call for stakeholder engagement by noting that "59% of organizations measure Agile success by customer or user satisfaction." But that is one of those metrics that can create perverse incentives. Empirical studies of user satisfaction and engagement have been published since the 1970's, and sadly one of the cruel lessons learned from them is that the easiest path to having satisfied users is to avoid working on difficult problems. Keep that in mind when you ponder why difficult user stories seem to languish forever in the Kanban and Scrum Board "Ice Box" column, while the "Complete" column is filled with low-hanging fruit. Sometimes success does come easy!

So, are you in the Agile-is-Heaven or Agile-is-Hell camp?

Programming

Slashdot Asks: Do You Remember Your High School's 'Computer Room'? (gatesnotes.com) 192

Bill Gates' blog has been updated with short videos about his upcoming book, including one about how his school ended up with an ASR-33 teletype that could connect their Seattle classroom to a computer in California. "The teachers faded away pretty quickly," Gates adds, "But about six of us stayed hardcore. One was Paul Allen..." — the future co-founder of Microsoft. And the experience clearly meant a lot to Gates. "Microsoft just never would've happened without Paul — and this teletype room."

In a longer post thanking his "brilliant" teachers, Gates calls his teletype experience "an encounter that would shape my entire future" and "opened up a whole new world for me." Gates also thanks World War II Navy pilot and Boeing engineer Bill Dougall, who "was instrumental in bringing computer access to our school, something he and other faculty members pushed for after taking a summer computer class... The fascinating thing about Mr. Dougall was that he didn't actually know much about programming; he exhausted his knowledge within a week. But he had the vision to know it was important and the trust to let us students figure it out."

Gates shared a similar memory about the computer-room's 20-something overseer Fred Wright, who "intuitively understood that the best way to get students to learn was to let us explore on our own terms. There was no sign-up sheet, no locked door, no formal instruction. Instead, Mr. Wright let us figure things out ourselves and trusted that, without his guidance, we'd have to get creative... Some of the other teachers argued for tighter regulations, worried about what we might be doing in there unsupervised. But even though Mr. Wright occasionally popped in to break up a squabble or listen as someone explained their latest program, for the most part he defended our autonomy..."

"Mr. Wright gave us something invaluable: the space to discover our own potential."

Any Slashdot readers have a similarly impactful experience? Share your own thoughts and memories in the comments.

Do you remember your high school's computer room?
Displays

The 25-Year Success Story of SereneScreen (pcgamer.com) 24

A recent video from retro tech YouTuber Clint "LGR" Basinger takes a deep dive into the history of the SereneScreen Marine Aquarium, exploring how former Air Force pilot Jim Sachs transformed a lackluster Windows 95 screensaver into a 25-year digital phenomenon. PC Gamer reports: The story centers on Jim Sachs, a man with one of those "they don't make this type of guy anymore" life stories so common to '80s and '90s computing, one Sachs recounted to the website AmigaLove back in 2020. After a six-year career in the US Air Force flying C-141 Starlifters, Sachs taught himself programming and digital art and began creating games for Commodore 64 and Amiga computers. From his first game, Saucer Attack, to later efforts like Defender of the Crown or his large portfolio of promotional and commissioned pieces, Sachs' pixel art remains gorgeous and impressive to this day, and he seems to be a bit of a legend among Commodore enthusiasts.

It's with this background in games and digital art that Sachs looked at Microsoft's simple aquarium-themed screensaver for Windows 95 and 98 and thought he could do better. "Microsoft had an aquarium that they gave away with Windows where it was just bitmaps of fish being dragged across the screen," Sachs told the Matt Chat podcast back in 2015. "And they had that for like, three or four years. And I thought, I've given them enough time, I'm taking them to market. I'm gonna do something which will just blow that away."

Using reference photographs of real aquariums -- Sachs thanked a specific pet shop that's still around in an early version of his website -- Sachs created the 3D art by hand and programmed the screensaver in C++, releasing the initial version in July 2000. Even looking at it all these years later, the first iteration of the SereneScreen Marine Aquarium is pretty gorgeous, and it has the added charm of being such a distinctly Y2K, nostalgic throwback.

The standalone screensaver sold well, but then things came full circle with Microsoft licensing a version of the Marine Aquarium for the Windows XP Plus Pack and later standard releases of the OS. Since that time, the Marine Aquarium has continued to see new releases, and a section on the SereneScreen website keeps track of its various appearances in the background of movies and TV shows like Law and Order. Over on the SereneScreen website, you can purchase a real-time, 3D-accelerated version of the Marine Aquarium for Mac, iOS, Android, and the original Windows. Echoing the Windows XP deal, Roku actually licensed this 3.0 version for its TVs, bringing it to a new generation of users.

AI

OpenAI's o3-mini: Faster, Cheaper AI That Fact-Checks Itself (openai.com) 73

OpenAI today launched o3-mini, a specialized AI reasoning model designed for STEM tasks that offers faster processing at lower costs compared to its predecessor o1-mini. The model, priced at $1.10 per million cached input tokens and $4.40 per million output tokens, performs fact-checking before delivering results to reduce errors in technical domains like physics and programming, the Microsoft-backed startup said. (A million tokens are roughly 750,000 words.)

OpenAI claims that its tests showed o3-mini made 39% fewer major mistakes than o1-mini on complex problems while delivering responses 24% faster. The model will be available through ChatGPT with varying access levels -- free users get basic access while premium subscribers receive higher query limits and reasoning capabilities.
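For a rough sense of what that pricing means per request, here is a back-of-the-envelope sketch using only the per-million-token rates quoted above; the sample token counts are made up purely for illustration.

```typescript
// Rough cost estimate at the quoted o3-mini rates: $1.10 per million
// (cached) input tokens and $4.40 per million output tokens.
// The example token counts below are arbitrary, not from OpenAI.
const INPUT_USD_PER_MILLION = 1.10;
const OUTPUT_USD_PER_MILLION = 4.40;

function estimateCostUSD(inputTokens: number, outputTokens: number): number {
  return (inputTokens / 1_000_000) * INPUT_USD_PER_MILLION +
         (outputTokens / 1_000_000) * OUTPUT_USD_PER_MILLION;
}

// A million input tokens (roughly 750,000 words) plus 100,000 output tokens:
console.log(estimateCostUSD(1_000_000, 100_000).toFixed(2)); // "1.54"
```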
Communications

AM Radio For All Vehicles Legislation Reintroduced (agweek.com) 269

A bipartisan group of legislators has reintroduced the AM Radio for Every Vehicle Act of 2025, aiming to mandate AM radio in all new vehicles at no additional cost. Agweek reports: The AM Radio for Every Vehicle Act was first introduced in May 2023. It continued to take on new co-sponsors through the fall of 2024. It was reintroduced as the AM Radio for Every Vehicle Act of 2025 with 62 cosponsors. Upper Midwest senators showing support for the bill include Sens. John Hoeven and Kevin Cramer, R-North Dakota, and Sens. Amy Klobuchar and Tina Smith, DFL-Minnesota.

If enacted, the bill would require the Department of Transportation to issue a rule requiring new vehicles to maintain access to broadcast AM radio at no additional cost to the consumer and provide small vehicle manufacturers at least four years after the date DOT issues the rule to comply. The act also requires automakers to inform consumers, during the period before the rule takes effect, that the vehicles do not maintain access to broadcast AM radio.
"With 82 million Americans tuning in each month, AM radio delivers more than just emergency alerts," says the National Association of Broadcasters in a news release. "It connects communities through hyper-local content, including news, weather and diverse cultural programming," according to a news release from the National Association of Broadcasters."
Education

New Michigan Law Requires High Schools to Offer CS Classes (michigan.gov) 66

The state of Michigan will now require each public high school in the state to offer at least one computer science course to its students. "This bill aligns Michigan with a majority of the country," according to the state's announcement, which says the bill "advances technological literacy" and ensures their students "are well-equipped with the critical thinking skills necessary for success in the workforce."

Slashdot reader theodp writes: From the Michigan House Fiscal Agency Analysis: "Supporters of the bill say that increasing access to computer science courses for students in schools should be a priority of the state in order to ensure that students can compete for the types of jobs that have good pay and will be needed in the coming decades."

That analysis goes on to report that testifying in favor of the bill were tech-giant backed nonprofit Code.org (Microsoft is a $30 million Code.org donor), Amazon and AWS (Amazon is a $30+ million Code.org donor), the tech-supported Computer Science Teachers Association (CSTA), and the lobbying organization TechNet (whose members include Amazon, Apple, Google, Meta, and OpenAI).

It's not clear how many high schools in Michigan are already teaching CS courses, but this still raises a popular question for discussion. Should high schools be required to teach at least one CS course?
Books

Bill Gates Began the Altair BASIC Code in His Head While Hiking as a Teenager (msn.com) 134

Friday Bill Gates shared an excerpt from his upcoming memoir Source Code: My Beginnings. Published in the Wall Street Journal, the excerpt includes pictures of young Bill Gates when he was 12 (dressed for a hike) and 14 (studying a teletype machine).

Gates remembers forming "a sort of splinter group" from the Boy Scouts when he was 13 with a group of boys who "wanted more freedom and more risk" and took long hikes around Seattle, travelling hundreds of miles together on hikes as long as "seven days or more." (His favorite breakfast dish was Oscar Mayer Smokie Links.) But he also remembers another group of friends — Kent, Rick, and... Paul — who connected to a mainframe computer from a phone line at their private school. Both hiking and programming "felt like an adventure... exploring new worlds, traveling to places even most adults couldn't reach."

Like hiking, programming fit me because it allowed me to define my own measure of success, and it seemed limitless, not determined by how fast I could run or how far I could throw. The logic, focus and stamina needed to write long, complicated programs came naturally to me. Unlike in hiking, among that group of friends, I was the leader.
When Gates' school got a (DEC) PDP-8 — which cost $8,500 — "For a challenge, I decided I would try to write a version of the Basic programming language for the new computer..." And Gates remembers a long hike where "I silently honed my code" for its formula evaluator: I slimmed it down more, like whittling little pieces off a stick to sharpen the point. What I made seemed efficient and pleasingly simple. It was by far the best code I had ever written...

By the time school started again in the fall, whoever had lent us the PDP-8 had reclaimed it. I never finished my Basic project. But the code I wrote on that hike, my formula evaluator — and its beauty — stayed with me. Three and a half years later, I was a sophomore in college not sure of my path in life when Paul Allen, one of my Lakeside friends, burst into my dorm room with news of a groundbreaking computer. I knew we could write a Basic language for it; we had a head start.

Gates typed his code from that hike, "and with that planted the seed of what would become one of the world's largest companies and the beginning of a new industry."

Gates cites Richard Feynman's description of the excitement and pleasure of "finding the thing out" — the reward for "all of the disciplined thinking and hard work." And he remembers his teenaged years as "intensely driven by the love of what I was learning, accruing expertise just when it was needed: at the dawn of the personal computer."
AI

Cutting-Edge Chinese 'Reasoning' Model Rivals OpenAI o1 55

An anonymous reader quotes a report from Ars Technica: On Monday, Chinese AI lab DeepSeek released its new R1 model family under an open MIT license, with its largest version containing 671 billion parameters. The company claims the model performs at levels comparable to OpenAI's o1 simulated reasoning (SR) model on several math and coding benchmarks. Alongside the release of the main DeepSeek-R1-Zero and DeepSeek-R1 models, DeepSeek published six smaller "DeepSeek-R1-Distill" versions ranging from 1.5 billion to 70 billion parameters. These distilled models are based on existing open source architectures like Qwen and Llama, trained using data generated from the full R1 model. The smallest version can run on a laptop, while the full model requires far more substantial computing resources.

The releases immediately caught the attention of the AI community because most existing open-weights models -- which can often be run and fine-tuned on local hardware -- have lagged behind proprietary models like OpenAI's o1 in so-called reasoning benchmarks. Having these capabilities available in an MIT-licensed model that anyone can study, modify, or use commercially potentially marks a shift in what's possible with publicly available AI models. "They are SO much fun to run, watching them think is hilarious," independent AI researcher Simon Willison told Ars in a text message. Willison tested one of the smaller models and described his experience in a post on his blog: "Each response starts with a ... pseudo-XML tag containing the chain of thought used to help generate the response," noting that even for simple prompts, the model produces extensive internal reasoning before output.
Although the benchmarks have yet to be independently verified, DeepSeek reports that R1 outperformed OpenAI's o1 on AIME (a mathematical reasoning test), MATH-500 (a collection of word problems), and SWE-bench Verified (a programming assessment tool).

TechCrunch notes that three Chinese labs -- DeepSeek, Alibaba, and Moonshot AI's Kimi -- have released models that match o1's capabilities.
Linux

Linux 6.13 Released (phoronix.com) 25

"Nothing horrible or unexpected happened last week," Linux Torvalds posted tonight on the Linux kernel mailing list, "so I've tagged and pushed out the final 6.13 release."

Phoronix says the release has "plenty of fine features": Linux 6.13 comes with the introduction of the AMD 3D V-Cache Optimizer driver for benefiting multi-CCD Ryzen X3D processors. The new AMD EPYC 9005 "Turin" server processors will now default to AMD P-State rather than ACPI CPUFreq for better power efficiency....

Linux 6.13 also brings more Rust programming language infrastructure and more.

Phoronix notes that Linux 6.13 also brings "the start of Intel Xe3 graphics bring-up, support for many older (pre-M1) Apple devices like numerous iPads and iPhones, NVMe 2.1 specification support, and AutoFDO and Propeller optimization support when compiling the Linux kernel with the LLVM Clang compiler."

And some lucky Linux kernel developers will also be getting a guitar pedal soldered by Linus Torvalds himself, thanks to a generous offer he announced a week ago: For _me_ a traditional holiday activity tends to be a LEGO build or two, since that's often part of the presents... But in addition to the LEGO builds, this year I also ended up doing a number of guitar pedal kit builds ("LEGO for grown-ups with a soldering iron"). Not because I play guitar, but because I enjoy the tinkering, and the guitar pedals actually do something and are the right kind of "not very complex, but not some 5-minute 555 LED blinking thing"...

[S]ince I don't actually have any _use_ for the resulting pedals (I've already foisted off a few on unsuspecting victims^Hfriends), I decided that I'm going to see if some hapless kernel developer would want one.... as an admittedly pretty weak excuse to keep buying and building kits...

"It may be worth noting that while I've had good success so far, I'm a software person with a soldering iron. You have been warned... [Y]ou should set your expectations along the lines of 'quality kit built by a SW person who doesn't know one end of a guitar from the other.'"
Programming

Node.js 'Type Stripping' for TypeScript Now Enabled by Default (hashnode.dev) 63

The JavaScript runtime Node.js can execute TypeScript (Microsoft's JavaScript-derived language with static typing).

But now it can do it even better, explains Marco Ippolito of the Node.js steering committee: In August 2024 Node.js introduced a new experimental feature, Type Stripping, aimed at addressing a longstanding challenge in the Node.js ecosystem: running TypeScript with no configuration. Enabled by default in Node.js v23.6.0, this feature is on its way to becoming stable.

TypeScript has reached incredible levels of popularity and has been the most requested feature in all the latest Node.js surveys. Unlike other alternatives such as CoffeeScript or Flow, which never gained similar traction, TypeScript has become a cornerstone of modern development. While it has been supported in Node.js for some time through loaders, they relied heavily on configuration and user libraries. This reliance led to inconsistencies between different loaders, making them difficult to use interchangeably. The developer experience suffered due to these inconsistencies and the extra setup required... The goal is to make development faster and simpler, eliminating the overhead of configuration while maintaining the flexibility that developers expect...

TypeScript is not just a language, it also relies on a toolchain to implement its features. The primary tool for this purpose is tsc, the TypeScript compiler CLI... Type checking is tightly coupled to the implementation of tsc, as there is no formal specification for how the language's type system should behave. This lack of a specification means that the behavior of tsc is effectively the definition of TypeScript's type system. tsc does not follow semantic versioning, so even minor updates can introduce changes to type checking that may break existing code. Transpilation, on the other hand, is a more stable process. It involves converting TypeScript code into JavaScript by removing types, transforming certain syntax constructs, and optionally "downleveling" the JavaScript to allow modern syntax to execute on older JavaScript engines. Unlike type checking, transpilation is less likely to change in breaking ways across versions of tsc. The likelihood of breaking changes is further reduced when we only consider the minimum transpilation needed to make the TypeScript code executable — and exclude downleveling of new JavaScript features not yet available in the JavaScript engine but available in TypeScript...

Node.js, before enabling it by default, introduced --experimental-strip-types. This mode allows running TypeScript files by simply stripping inline types without performing type checking or any other code transformation. This minimal technique is known as Type Stripping. By excluding type checking and traditional transpilation, the more unstable aspects of TypeScript, Node.js reduces the risk of instability and mostly sidesteps the need to track minor TypeScript updates. Moreover, this solution does not require any configuration in order to execute code... Node.js eliminates the need for source maps by replacing the removed syntax with blank spaces, ensuring that the original locations of the code and structure remain intact. It is transparent — the code that runs is the code the author wrote, minus the types...

"As this experimental feature evolves, the Node.js team will continue collaborating with the TypeScript team and the community to refine its behavior and reduce friction. You can check the roadmap for practical next steps..."
Google

Google Upgrades Open Source Vulnerability Scanning Tool with SCA Scanning Library (googleblog.com) 2

In 2022 Google released a tool to easily scan for vulnerabilities in dependencies named OSV-Scanner. "Together with the open source community, we've continued to build this tool, adding remediation features," according to Google's security blog, "as well as expanding ecosystem support to 11 programming languages and 20 package manager formats... Users looking for an out-of-the-box vulnerability scanning CLI tool should check out OSV-Scanner, which already provides comprehensive language package scanning capabilities..."

Thursday they also announced an extensible library for "software composition analysis" scanning (as well as file-system scanning) named OSV-SCALIBR (Open Source Vulnerability — Software Composition Analysis LIBRary). The new library "combines Google's internal vulnerability management expertise into one scanning library with significant new capabilities such as:
  • Software composition analysis for installed packages, standalone binaries, as well as source code
  • OSes package scanning on Linux (COS, Debian, Ubuntu, RHEL, and much more), Windows, and Mac
  • Artifact and lockfile scanning in major language ecosystems (Go, Java, Javascript, Python, Ruby, and much more)
  • Vulnerability scanning tools such as weak credential detectors for Linux, Windows, and Mac
  • Software Bill of Materials (SBOM) generation in SPDX and CycloneDX, the two most popular document formats
  • Optimization for on-host scanning of resource constrained environments where performance and low resource consumption is critical

"OSV-SCALIBR is now the primary software composition analysis engine used within Google for live hosts, code repos, and containers. It's been used and tested extensively across many different products and internal tools to help generate SBOMs, find vulnerabilities, and help protect our users' data at Google scale. We offer OSV-SCALIBR primarily as an open source Go library today, and we're working on adding its new capabilities into OSV-Scanner as the primary CLI interface."


AI

World's First AI Chatbot, ELIZA, Resurrected After 60 Years (livescience.com) 37

"Scientists have just resurrected 'ELIZA,' the world's first chatbot, from long-lost computer code," reports LiveScience, "and it still works extremely well." (Click in the vintage black-and-green rectangle for a blinking-cursor prompt...) Using dusty printouts from MIT archives, these "software archaeologists" discovered defunct code that had been lost for 60 years and brought it back to life. ELIZA was developed in the 1960s by MIT professor Joseph Weizenbaum and named for Eliza Doolittle, the protagonist of the play "Pygmalion," who was taught how to speak like an aristocratic British woman.

As a language model that the user could interact with, ELIZA had a significant impact on today's artificial intelligence (AI), the researchers wrote in a paper posted to the preprint database arXiv Sunday (Jan. 12). The "DOCTOR" script written for ELIZA was programmed to respond to questions as a psychotherapist would. For example, ELIZA would say, "Please tell me your problem." If the user input "Men are all alike," the program would respond, "In what way."
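To illustrate the kind of mechanism at work, here is our own minimal TypeScript sketch of the keyword-matching-and-pronoun-reflection pattern in the spirit of the DOCTOR script; it is an illustration of the general technique, not the recovered MAD-SLIP code, and the rules and phrasings are invented for the example.

```typescript
// Minimal ELIZA-style responder: match a keyword pattern, reflect pronouns,
// and fall back to a canned prompt. Purely illustrative.
const reflections: Record<string, string> = {
  i: "you", my: "your", am: "are", me: "you", you: "I",
};

// Swap first- and second-person words so "my job" becomes "your job".
function reflect(text: string): string {
  return text
    .split(/\s+/)
    .map((word) => reflections[word.toLowerCase()] ?? word)
    .join(" ");
}

const rules: Array<{ pattern: RegExp; respond: (rest: string) => string }> = [
  { pattern: /\ball alike\b/i, respond: () => "In what way." },
  { pattern: /^i am (.*)/i, respond: (rest) => `How long have you been ${reflect(rest)}?` },
  { pattern: /\b(mother|father|family)\b/i, respond: () => "Tell me more about your family." },
];

function eliza(input: string): string {
  for (const rule of rules) {
    const match = input.match(rule.pattern);
    if (match) return rule.respond(match[1] ?? "");
  }
  return "Please tell me your problem.";
}

console.log(eliza("Men are all alike.")); // "In what way."
console.log(eliza("I am unhappy"));       // "How long have you been unhappy?"
```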

Weizenbaum wrote ELIZA in a now-defunct programming language he invented, called Michigan Algorithm Decoder Symmetric List Processor (MAD-SLIP), but it was almost immediately copied into the language Lisp. With the advent of the early internet, the Lisp version of ELIZA went viral, and the original version became obsolete. Experts thought the original 420-line ELIZA code was lost until 2021, when study co-author Jeff Shrager, a cognitive scientist at Stanford University, and Myles Crowley, an MIT archivist, found it among Weizenbaum's papers. "I have a particular interest in how early AI pioneers thought," Shrager told Live Science in an email. "Having computer scientists' code is as close to having a record of their thoughts, and as ELIZA was — and remains, for better or for worse — a touchstone of early AI, I want to know what was in his mind...."

Even though it was intended to be a research platform for human-computer communication, "ELIZA was such a novelty at the time that its 'chatbotness' overwhelmed its research purposes," Shrager said.

I just remember that time 23 years ago when someone connected a Perl version of ELIZA to "an AOL Instant Messenger account that has a high rate of 'random' people trying to start conversations" to "put ELIZA in touch with the real world..."

Thanks to long-time Slashdot reader MattSparkes for sharing the news.
AI

Google Reports Halving Code Migration Time With AI Help 12

Google computer scientists have been using LLMs to streamline internal code migrations, achieving significant time savings of up to 89% in some cases. The findings appear in a pre-print paper titled "How is Google using AI for internal code migrations?" The Register reports: Their focus is on bespoke AI tools developed for specific product areas, such as Ads, Search, Workspace and YouTube, instead of generic AI tools that provide broadly applicable services like code completion, code review, and question answering. Google's code migrations involved: changing 32-bit IDs in the 500-plus-million-line codebase for Google Ads to 64-bit IDs; converting its old JUnit3 testing library to JUnit4; and replacing the Joda time library with Java's standard java.time package. The int32 to int64 migration, the Googlers explain, was not trivial as the IDs were often generically defined (int32_t in C++ or Integer in Java) and were not easily searchable. They existed in tens of thousands of code locations across thousands of files. Changes had to be tracked across multiple teams and changes to class interfaces had to be considered across multiple files. "The full effort, if done manually, was expected to require hundreds of software engineering years and complex cross-team coordination," the authors explain.

For their LLM-based workflow, Google's software engineers implemented the following process. An engineer from Ads would identify an ID in need of migration using a combination of code search, Kythe, and custom scripts. Then an LLM-based migration toolkit, triggered by someone knowledgeable in the art, was run to generate verified changes containing code that passed unit tests. Those changes would be manually checked by the same engineer and potentially corrected. Thereafter, the code changes would be sent to multiple reviewers who are responsible for the portion of the codebase affected by the changes. The result was that 80 percent of the code modifications in the change lists (CLs) were purely the product of AI; the remainder were either human-authored or human-edited AI suggestions.

"We discovered that in most cases, the human needed to revert at least some changes the model made that were either incorrect or not necessary," the authors observe. "Given the complexity and sensitive nature of the modified code, effort has to be spent in carefully rolling out each change to users." Based on this, Google undertook further work on LLM-driven verification to reduce the need for detailed review. Even with the need to double-check the LLM's work, the authors estimate that the time required to complete the migration was reduced by 50 percent. With LLM assistance, it took just three months to migrate 5,359 files and modify 149,000 lines of code to complete the JUnit3-JUnit4 transition. Approximately 87 percent of the code generated by AI ended up being committed with no changes. As for the Joda-Java time framework switch, the authors estimate a time saving of 89 percent compared to the projected manual change time, though no specifics were provided to support that assertion.
Google

Google Begins Requiring JavaScript For Google Search (techcrunch.com) 91

Google says it has begun requiring users to turn on JavaScript, the widely used programming language that makes web pages interactive, in order to use Google Search. From a report: In an email to TechCrunch, a company spokesperson claimed that the change is intended to "better protect" Google Search against malicious activity, such as bots and spam, and to improve the overall Google Search experience for users. The spokesperson noted that, without JavaScript, many Google Search features won't work properly, and that the quality of search results tends to be degraded.
Transportation

Toyota Unit Hino Motors Reaches $1.6 Billion US Diesel Emissions Settlement (msn.com) 8

An anonymous reader quotes a report from Reuters: Toyota Motor unit Hino Motors has agreed to a $1.6 billion settlement with U.S. agencies and will plead guilty over excess diesel engine emissions in more than 105,000 U.S. vehicles, the company and U.S. government said on Wednesday. The Japanese truck and engine manufacturer was charged with fraud in U.S. District Court in Detroit for unlawfully selling 105,000 heavy-duty diesel engines in the United States from 2010 through 2022 that did not meet emissions standards. The settlement, which still must be approved by a U.S. judge, includes a criminal penalty of $521.76 million, $442.5 million in civil penalties to U.S. authorities and $236.5 million to California.

A company-commissioned panel said in a 2022 report that Hino had falsified emissions data on some engines going back to at least 2003. Hino agreed to plead guilty to engaging in a multi-year criminal conspiracy and serve a five-year term of probation, during which it will be barred from importing any diesel engines it has manufactured into the U.S., and carry out a comprehensive compliance and ethics program, the Justice Department and Environmental Protection Agency said. [...] The settlement includes a mitigation program, valued at $155 million, to offset excess air emissions from the violations by replacing marine and locomotive engines, and a recall program, valued at $144.2 million, to fix engines in 2017-2019 heavy-duty trucks.

The EPA said Hino admitted that between 2010 and 2019, it submitted false applications for engine certification approvals and altered emission test data, conducted tests improperly and fabricated data without conducting any underlying tests. Hino President Satoshi Ogiso said the company had improved its internal culture, oversight and compliance practices. "This resolution is a significant milestone toward resolving legacy issues that we have worked hard to ensure are no longer a part of Hino's operations or culture," he said in a statement.
Toyota's Hino Motors isn't the only automaker to admit to selling vehicles with excess diesel emissions. Volkswagen had to pay billions in fines after it admitted in 2015 to cheating emissions tests by installing "defeat devices" and sophisticated software in nearly 11 million vehicles worldwide. Daimler (Mercedes-Benz), BMW, Opel/Vauxhall (General Motors), and Fiat Chrysler have been implicated in similar practices.
AI

AI Slashes Google's Code Migration Time By Half (theregister.com) 74

Google has cut code migration time in half by deploying AI tools to assist with large-scale software updates, according to a new research paper from the company's engineers. The tech giant used large language models to help convert 32-bit IDs to 64-bit across its 500-million-line codebase, upgrade testing libraries, and replace time-handling frameworks. While 80% of code changes were AI-generated, human engineers still needed to verify and sometimes correct the AI's output. In one project, the system helped migrate 5,359 files and modify 149,000 lines of code in three months.
Programming

Replit CEO on AI Breakthroughs: 'We Don't Care About Professional Coders Anymore' (semafor.com) 168

Replit, an AI coding platform startup, has made a dramatic pivot away from professional programmers, signaling a fundamental shift in how software may be created in the future. "We don't care about professional coders anymore," CEO Amjad Masad told Semafor, as the company refocuses on helping non-developers build software using AI.

The strategic shift follows the September launch of Replit's "Agent" tool, which can create working applications from simple text commands. The tool, powered by Anthropic's Claude 3.5 Sonnet AI model, has driven a five-fold revenue increase in six months. The move marks a significant departure for Replit, which built its business providing online coding tools for software developers. The company is now betting that AI will make traditional programming skills less crucial, allowing non-technical users to create software through natural language instructions.
Oracle

Oracle Won't Withdraw 'JavaScript' Trademark, Says Deno. Legal Skirmish Continues (infoworld.com) 68

"Oracle has informed us they won't voluntarily withdraw their trademark on 'JavaScript'." That's the word coming from the company behind Deno, the alternative JavaScript/TypeScript/WebAssembly runtime, which is pursuing a formal cancellation with the U.S. Patent and Trademark Office.

So what happens next? Oracle "will file their Answer, and we'll start discovery to show how 'JavaScript' is widely recognized as a generic term and not controlled by Oracle." Deno's social media posts show a schedule of various court dates that extend through July of 2026, so "The dispute between Oracle and Deno Land could go on for quite a while," reports InfoWorld: Deno Land co-founder Ryan Dahl, creator of both the Deno and Node.js runtimes, said a formal answer from Oracle is expected before February 3, unless Oracle extends the deadline again. "After that, we will begin the process of discovery, which is where the real legal work begins. It will be interesting to see how Oracle argues against our claims — genericide, fraud on the USPTO, and non-use of the mark."

The legal process begins with a discovery conference by March 5, with discovery closing by September 1, followed by pretrial disclosure from October 16 to December 15. An optional request for an oral hearing is due by July 8, 2026.

Oracle took ownership of JavaScript's trademark in 2009 when it purchased Sun Microsystems, InfoWorld notes.

But "Oracle does not control (and has never controlled) any aspect of the specification or how the phrase 'JavaScript' can be used by others," argues an official petition filed by Deno Land Inc. with the United States Patent and Trademark Office: Today, millions of companies, universities, academics, and programmers, including Petitioner, use "JavaScript" daily without any involvement with Oracle. The phrase "JavaScript" does not belong to one corporation. It belongs to the public. JavaScript is the generic name for one of the bedrock languages of modern programming, and, therefore, the Registered Mark must be canceled.

An open letter to Oracle discussing the genericness of the phrase "JavaScript," published at https://javascript.tm/, was signed by 14,000+ individuals at the time of this Petition to Cancel, including notable figures such as Brendan Eich, the creator of JavaScript, and the current editors of the JavaScript specification, Michael Ficarra and Shu-yu Guo. There is broad industry and public consensus that the term "JavaScript" is generic.

The seven-page petition goes into great detail, reports InfoWorld. "Deno Land also accused Oracle of committing fraud in its trademark renewal efforts in 2019 by submitting screen captures of the website of JavaScript runtime Node.js, even though Node.js was not affiliated with Oracle."
Programming

Ask Slashdot: What's the Best Way to Transfer Legacy PHP Code to a Modern Framework? 112

Slashdot reader rzack writes: Since 1999, I've written a huge amount of PHP code, for dozens of applications and websites. Most of it has been continually updated, and remains active and in-production, in one form or another.

Here's the thing. It's all hand-written using vi, even to this day.

Is there any benefit to migrating this codebase to a more modern PHP framework, like Laravel? And is there an easy and minimally intrusive way this can be done en-masse, across dozens of applications and websites?

Or at this point should I just stick with vi?

Share your thoughts and suggestions in the comments.

What's the best way to transfer legacy PHP code to a modern framework?
AI

Foreign Cybercriminals Bypassed Microsoft's AI Guardrails, Lawsuit Alleges (arstechnica.com) 3

"Microsoft's Digital Crimes Unit is taking legal action to ensure the safety and integrity of our AI services," according to a Friday blog post by the unit's assistant general counsel. Microsoft blames "a foreign-based threat-actor group" for "tools specifically designed to bypass the safety guardrails of generative AI services, including Microsoft's, to create offensive and harmful content.

Microsoft "is accusing three individuals of running a 'hacking-as-a-service' scheme," reports Ars Technica, "that was designed to allow the creation of harmful and illicit content using the company's platform for AI-generated content" after bypassing Microsoft's AI guardrails: They then compromised the legitimate accounts of paying customers. They combined those two things to create a fee-based platform people could use. Microsoft is also suing seven individuals it says were customers of the service. All 10 defendants were named John Doe because Microsoft doesn't know their identity.... The three people who ran the service allegedly compromised the accounts of legitimate Microsoft customers and sold access to the accounts through a now-shuttered site... The service, which ran from last July to September when Microsoft took action to shut it down, included "detailed instructions on how to use these custom tools to generate harmful and illicit content."

The service contained a proxy server that relayed traffic between its customers and the servers providing Microsoft's AI services, the suit alleged. Among other things, the proxy service used undocumented Microsoft network application programming interfaces (APIs) to communicate with the company's Azure computers. The resulting requests were designed to mimic legitimate Azure OpenAI Service API requests and used compromised API keys to authenticate them. Microsoft didn't say how the legitimate customer accounts were compromised but said hackers have been known to create tools to search code repositories for API keys developers inadvertently included in the apps they create. Microsoft and others have long counseled developers to remove credentials and other sensitive data from code they publish, but the practice is regularly ignored. The company also raised the possibility that the credentials were stolen by people who gained unauthorized access to the networks where they were stored...

The lawsuit alleges the defendants' service violated the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, the Lanham Act, and the Racketeer Influenced and Corrupt Organizations Act and constitutes wire fraud, access device fraud, common law trespass, and tortious interference.
