AI

When a Company Does Job Interviews with a Malfunctioning AI - and Then Rejects You (slate.com) 51

IBM laid off "a couple hundred" HR workers and replaced them with AI agents. "It's becoming a huge thing," says Mike Peditto, a Chicago-area consultant with 15 years of experience advising companies on hiring practices. He tells Slate "I do think we're heading to where this will be pretty commonplace." Although A.I. job interviews have been happening since at least 2023, the trend has received a surge of attention in recent weeks thanks to several viral TikTok videos in which users share videos of their A.I. bots glitching. Although some of the videos were fakes posted by a creator whose bio warns that his content is "all satire," some are authentic — like that of Kendiana Colin, a 20-year-old student at Ohio State University who had to interact with an A.I. bot after she applied for a summer job at a stretching studio outside Columbus. In a clip she posted online earlier this month, Colin can be seen conducting a video interview with a smiling white brunette named Alex, who can't seem to stop saying the phrase "vertical-bar Pilates" in an endless loop...

Representatives at Apriora, the startup founded in 2023 whose software Colin had to engage with, did not respond to a request for comment. But founder Aaron Wang told Forbes last year that the software allowed companies to screen more talent for less money... (Apriora's website claims that the technology can help companies "hire 87 percent faster" and "interview 93 percent cheaper," but it's not clear where those stats come from or what they actually mean.)

Colin (first interviewed by 404 Media) calls the experience dehumanizing — wondering why she was told to dress professionally, since "They had me going the extra mile just to talk to a robot." And after the interview, the robot — and the company — ghosted her with no further contact. "It was very disrespectful and a waste of time."

Houston resident Leo Humphries also "donned a suit and tie in anticipation for an interview" in which the virtual recruiter immediately got stuck repeating the same phrase. Although Humphries tried in vain to alert the bot that it was broken, the interview ended only when the A.I. program thanked him for "answering the questions" and offering "great information" — despite his not being able to provide a single response. In a subsequent video, Humphries said that within an hour he had received an email, addressed to someone else, that thanked him for sharing his "wonderful energy and personality" but let him know that the company would be moving forward with other candidates.
Programming

Rust Creator Graydon Hoare Thanks Its Many Stakeholders - and Mozilla - on Rust's 10th Anniversary (rustfoundation.org) 35

Thursday was Rust's 10-year anniversary for its first stable release. "To say I'm surprised by its trajectory would be a vast understatement," writes Rust's original creator Graydon Hoare. "I can only thank, congratulate, and celebrate everyone involved... In my view, Rust is a story about a large community of stakeholders coming together to design, build, maintain, and expand shared technical infrastructure." It's a story with many actors:

- The population of developers the language serves who express their needs and constraints through discussion, debate, testing, and bug reports arising from their experience writing libraries and applications.

- The language designers and implementers who work to satisfy those needs and constraints while wrestling with the unexpected consequences of each decision.

- The authors, educators, speakers, translators, illustrators, and others who work to expand the set of people able to use the infrastructure and work on the infrastructure.

- The institutions investing in the project who provide the long-term funding and support necessary to sustain all this work over decades.

All these actors have a common interest in infrastructure.

Rather than just "systems programming", Hoare sees Rust as a tool for building infrastructure itself, "the robust and reliable necessities that enable us to get our work done" — a wide range that includes everything from embedded and IoT systems to multi-core systems. So the story of "Rust's initial implementation, its sustained investment, and its remarkable resonance and uptake all happened because the world needs robust and reliable infrastructure, and the infrastructure we had was not up to the task." Put simply: it failed too often, in spectacular and expensive ways. Crashes and downtime in the best cases, and security vulnerabilities in the worst. Efficient "infrastructure-building" languages existed but they were very hard to use, and nearly impossible to use safely, especially when writing concurrent code. This produced an infrastructure deficit many people felt, if not everyone could name, and it was growing worse by the year as we placed ever-greater demands on computers to work in ever more challenging environments...

We were stuck with the tools we had because building better tools like Rust was going to require an extraordinary investment of time, effort, and money. The bootstrap Rust compiler I initially wrote was just a few tens of thousands of lines of code; that was nearing the limits of what an unfunded solo hobby project can typically accomplish. Mozilla's decision to invest in Rust in 2009 immediately quadrupled the size of the team — it created a team in the first place — and then doubled it again, and again in subsequent years. Mozilla sustained this very unusual, very improbable investment in Rust from 2009-2020, as well as funding an entire browser engine written in Rust — Servo — from 2012 onwards, which served as a crucial testbed for Rust language features.

Rust and Servo had multiple contributors at Samsung, Hoare acknowledges, and Amazon, Facebook, Google, Microsoft, Huawei, and others "hired key developers and contributed hardware and management resources to its ongoing development." Rust itself "sits atop LLVM" (developed by researchers at UIUC and later funded by Apple, Qualcomm, Google, ARM, Huawei, and many other organizations), while Rust's safe memory model "derives directly from decades of research in academia, as well as academic-industrial projects like Cyclone, built by AT&T Bell Labs and Cornell."

And there were contributions from "interns, researchers, and professors at top academic research programming-language departments, including CMU, NEU, IU, MPI-SWS, and many others." JetBrains and the Rust-Analyzer OpenCollective essentially paid for two additional interactive-incremental reimplementations of the Rust frontend to provide language services to IDEs — critical tools for productive, day-to-day programming. Hundreds of companies and other institutions contributed time and money to evaluate Rust for production, write Rust programs, test them, file bugs related to them, and pay their staff to fix or improve any shortcomings they found. Last but very much not least: Rust has had thousands and thousands of volunteers donating years of their labor to the project. While it might seem tempting to think this is all "free", it's being paid for! Just less visibly than if it were part of a corporate budget.

All this investment, despite the long time horizon, paid off. We're all better for it.

He looks ahead with hope for a future with new contributors, "steady and diversified streams of support," and continued reliability and compatibility (including "investment in ever-greater reliability technology, including the many emerging formal methods projects built on Rust.")

And he closes by saying Rust's "sustained, controlled, and frankly astonishing throughput of work" has "set a new standard for what good tools, good processes, and reliable infrastructure software should be like.

"Everyone involved should be proud of what they've built."
AI

Google DeepMind Creates Super-Advanced AI That Can Invent New Algorithms 31

An anonymous reader quotes a report from Ars Technica: Google's DeepMind research division claims its newest AI agent marks a significant step toward using the technology to tackle big problems in math and science. The system, known as AlphaEvolve, is based on the company's Gemini large language models (LLMs), with the addition of an "evolutionary" approach that evaluates and improves algorithms across a range of use cases. AlphaEvolve is essentially an AI coding agent, but it goes deeper than a standard Gemini chatbot. When you talk to Gemini, there is always a risk of hallucination, where the AI makes up details due to the non-deterministic nature of the underlying technology. AlphaEvolve uses an interesting approach to increase its accuracy when handling complex algorithmic problems.

According to DeepMind, this AI uses an automatic evaluation system. When a researcher interacts with AlphaEvolve, they input a problem along with possible solutions and avenues to explore. The model generates multiple possible solutions, using the efficient Gemini Flash and the more detail-oriented Gemini Pro, and then each solution is analyzed by the evaluator. An evolutionary framework allows AlphaEvolve to focus on the best solution and improve upon it. Many of the company's past AI systems, for example, the protein-folding AlphaFold, were trained extensively on a single domain of knowledge. AlphaEvolve, however, is more dynamic. DeepMind says AlphaEvolve is a general-purpose AI that can aid research in any programming or algorithmic problem. And Google has already started to deploy it across its sprawling business with positive results.
DeepMind's AlphaEvolve AI has optimized Google's Borg cluster scheduler, reducing global computing resource usage by 0.7% -- a significant cost saving at Google's scale. It also outperformed specialized AI like AlphaTensor by discovering a more efficient algorithm for multiplying complex-valued matrices. Additionally, AlphaEvolve proposed hardware-level optimizations for Google's next-gen Tensor chips.
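
DeepMind hasn't released AlphaEvolve, so the snippet below is only a toy sketch of the generate, evaluate, and evolve loop described above. It is not DeepMind's code: the LLM proposal step (Gemini Flash and Gemini Pro in the real system) is replaced by a random mutation function and the automatic evaluator by a trivial scoring function, but the control flow is the same "propose many candidates, score them, keep the best, repeat."

```python
import random

def propose_candidates(parent: list[float], n: int = 8) -> list[list[float]]:
    """Toy stand-in for the LLM step: perturb the current best solution."""
    return [[x + random.gauss(0.0, 0.1) for x in parent] for _ in range(n)]

def evaluate(candidate: list[float]) -> float:
    """Toy automatic evaluator: higher is better (here, closer to zero)."""
    return -sum(x * x for x in candidate)

def evolve(generations: int = 100) -> list[float]:
    best = [random.uniform(-1.0, 1.0) for _ in range(4)]
    for _ in range(generations):
        # Score every proposal and keep the best one to improve on next round.
        best = max(propose_candidates(best) + [best], key=evaluate)
    return best

if __name__ == "__main__":
    solution = evolve()
    print(solution, evaluate(solution))
```

In AlphaEvolve the candidates are programs and the evaluator actually runs and measures them, which is what lets the loop search for things like faster matrix-multiplication routines or better scheduling heuristics.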

The AI remains too complex for public release but that may change in the future as it gets integrated into smaller research tools.
Microsoft

Microsoft Cuts Off Access To Bing Search Data as It Shifts Focus To Chatbots (wired.com) 19

Microsoft quietly announced earlier this week that it plans to shut down a long-standing tool supplying search engine startups and other software developers with a raw feed of Bing search results. From a report: The Bing Search APIs, or application programming interfaces, were once vital to many niche Google alternatives, but fell out of favor more recently as Microsoft hiked fees for the service and restricted its use.

The shutoff, which is scheduled to begin on August 11, still came as a surprise to several developers who spoke with WIRED. Customers learned of it on Monday via an email from Microsoft and a post on its website. They were directed to consider using "Grounding with Bing Search as part of Azure AI Agents," a Microsoft service that allows chatbots like ChatGPT to augment AI-generated responses with "real-time public web data." Some developers view the AI-centric alternative as an insufficient replacement.
Larger customers like DuckDuckGo told WIRED they won't be affected.
Education

Is Everyone Using AI to Cheat Their Way Through College? (msn.com) 160

Chungin Lee used ChatGPT to help write the essay that got him into Columbia University — and then "proceeded to use generative artificial intelligence to cheat on nearly every assignment," reports New York magazine's blog Intelligencer: As a computer-science major, he depended on AI for his introductory programming classes: "I'd just dump the prompt into ChatGPT and hand in whatever it spat out." By his rough math, AI wrote 80 percent of every essay he turned in. "At the end, I'd put on the finishing touches. I'd just insert 20 percent of my humanity, my voice, into it," Lee told me recently... When I asked him why he had gone through so much trouble to get to an Ivy League university only to off-load all of the learning to a robot, he said, "It's the best place to meet your co-founder and your wife."
He eventually did meet a co-founder, and after three unpopular apps they found success by creating the "ultimate cheat tool" for remote coding interviews, according to the article. "Lee posted a video of himself on YouTube using it to cheat his way through an internship interview with Amazon. (He actually got the internship, but turned it down.)" The article ends with Lee and his co-founder raising $5.3 million from investors for one more AI-powered app, and Lee says they'll target the standardized tests used for graduate school admissions, as well as "all campus assignments, quizzes, and tests. It will enable you to cheat on pretty much everything."

Somewhere along the way Columbia put him on disciplinary probation — not for cheating in coursework, but for creating the apps. But "Lee thought it absurd that Columbia, which had a partnership with ChatGPT's parent company, OpenAI, would punish him for innovating with AI." (OpenAI has even made ChatGPT Plus free to college students during finals week, the article points out, with OpenAI saying their goal is just teaching students how to use it responsibly.) Although Columbia's policy on AI is similar to that of many other universities — students are prohibited from using it unless their professor explicitly permits them to do so, either on a class-by-class or case-by-case basis — Lee said he doesn't know a single student at the school who isn't using AI to cheat. To be clear, Lee doesn't think this is a bad thing. "I think we are years — or months, probably — away from a world where nobody thinks using AI for homework is considered cheating," he said...

In January 2023, just two months after OpenAI launched ChatGPT, a survey of 1,000 college students found that nearly 90 percent of them had used the chatbot to help with homework assignments.

The article points out ChatGPT's monthly visits increased steadily over the last two years — until June, when students went on summer vacation. "College is just how well I can use ChatGPT at this point," a student in Utah recently captioned a video of herself copy-and-pasting a chapter from her Genocide and Mass Atrocity textbook into ChatGPT.... It isn't as if cheating is new. But now, as one student put it, "the ceiling has been blown off." Who could resist a tool that makes every assignment easier with seemingly no consequences?
After using ChatGPT for her final semester of high school, one student says "My grades were amazing. It changed my life." So she continued using it in college, and "Rarely did she sit in class and not see other students' laptops open to ChatGPT."

One ethics professor even says "The students kind of recognize that the system is broken and that there's not really a point in doing this." (Yes, students are even using AI to cheat in ethics classes...) It's not just the students: Multiple AI platforms now offer tools to leave AI-generated feedback on students' essays. Which raises the possibility that AIs are now evaluating AI-generated papers, reducing the entire academic exercise to a conversation between two robots — or maybe even just one.
IBM

IBM CEO Says AI Has Replaced Hundreds of Workers But Created New Programming, Sales Jobs (wsj.com) 27

IBM CEO Arvind Krishna said the tech giant has used AI, and specifically AI agents, to replace the work of a couple hundred human resources workers. As a result, it has hired more programmers and salespeople, he said. From a report: Krishna's comments on Monday come as businesses sort through the workforce impacts of AI and AI agents, the independent bots that can autonomously perform tasks like analyze spreadsheets, conduct research and draft emails.

While there haven't yet been widespread layoffs or downsizing as a result of AI across the economy, some business leaders have said they are holding down head count as they investigate the use of the technology.

Meanwhile, the information-technology workforce has continued to shrink as AI weighs on hiring and some workers leave the field. For IBM, which this week hosts its annual Think conference in Boston, AI adoption has led it to boost hiring in some functions.

Ubuntu

Memory-Safe Sudo To Become the Default In Ubuntu 116

Longtime Slashdot reader RoccamOccam shares a blog post from the Trifecta Tech Foundation, a nonprofit organization that creates secure, open source building blocks for infrastructure software. The foundation is also the developer behind Sudo-rs. From the report: Ubuntu 25.10 is set to adopt sudo-rs by default. Sudo-rs is a memory-safe reimplementation of the widely-used sudo utility, written in the Rust programming language. This move is part of a broader effort by Canonical to improve the resilience and maintainability of core system components. [...]

The decision to adopt sudo-rs is in line with Canonical's commitment to carefully but purposefully increase the resilience of critical system software by adopting Rust. Rust is a programming language with strong memory safety guarantees that eliminates many of the vulnerabilities that have historically plagued traditional C-based software. Sudo-rs is part of the Trifecta Tech Foundation's Privilege Boundary initiative, which aims to handle privilege escalation with memory-safe alternatives.
Open Source

May is 'Maintainer Month'. Open Source Initiative Joins GitHub to Celebrate Open Source Security (opensource.org) 6

The Open Source Initiative is joining "a global community of contributors" for GitHub's annual event "honoring the individuals who steward and sustain Open Source projects."

And the theme of the 5th Annual "Maintainer Month" will be securing Open Source: Throughout the month, OSI and our affiliates will be highlighting maintainers who prioritize security in their projects, sharing their stories, and providing a platform for collaboration and learning... Maintainer Month is a time to gather, share knowledge, and express appreciation for the people who keep Open Source projects running. These maintainers not only review issues and merge pull requests — they also navigate community dynamics, mentor new contributors, and increasingly, adopt security best practices to protect their code and users....

- OSI will publish a series of articles on Opensource.net highlighting maintainers whose work centers around security...

- As part of our programming for May, OSI will host a virtual Town Hall [May 21st] with our affiliate organizations and invite the broader Open Source community to join....

- Maintainer Month is also a time to tell the stories of those who often work behind the scenes. OSI will be amplifying voices from across our affiliate network and encouraging communities to recognize the people whose efforts are often invisible, yet essential.

"These efforts are not just celebrations — they are opportunities to recognize the essential role maintainers play in safeguarding the Open Source infrastructure that underpins so much of our digital world," according to the OSI's announcement. And this year they're focusing on three key areas of open source security:
  • Adopting security best practices in projects and communities
  • Recognizing contributors who improve project security
  • Collaborating to strengthen the ecosystem as a whole

AI

Apple, Anthropic Team Up To Build AI-Powered 'Vibe-Coding' Platform (bloomberg.com) 16

An anonymous reader shares a report: Apple is teaming up with startup Anthropic on a new "vibe-coding" software platform that will use AI to write, edit and test code on behalf of programmers.

The system is a new version of Xcode, Apple's programming software, that will integrate Anthropic's Claude Sonnet model, according to people with knowledge of the matter. Apple will roll out the software internally and hasn't yet decided whether to launch it publicly, said the people, who asked not to be identified because the initiative hasn't been announced.

The work shows how Apple is using AI to improve its internal workflow, aiming to speed up and modernize product development. The approach is similar to one used by companies such as Windsurf and Cursor maker Anysphere, which offer advanced AI coding assistants popular with software developers.
Further reading: 'Vibe Coding' is Letting 10 Engineers Do the Work of a Team of 50 To 100, Says YC CEO.
Bug

Why Windows 7 Took Forever To Load If You Had a Solid Background (pcworld.com) 57

An anonymous reader quotes a report from PCWorld: Windows 7 came onto the market in 2009 and put Microsoft back on the road to success after Windows Vista's annoying failures. But Windows 7 was not without its faults, as this curious story proves. Some users apparently encountered a vexing problem at the time: if they set a solid color as the desktop background, their Windows 7 PC always took an extra 30 seconds to switch from the welcome screen to the desktop after login.

In a recent blog post, Microsoft veteran Raymond Chen explains the exact reason. According to him, a simple programming error made users wait longer to reach the desktop. After login, Windows 7 set up the desktop piece by piece: the taskbar, the desktop window, icons for applications, and the background image. The logon system waited for each component to report that it had finished loading, with a fallback: if the reports didn't all arrive within 30 seconds, it switched from the welcome screen to the desktop anyway.

The problem: the code that reported "the background image is ready" lived inside the code that draws a background bitmap, so the report was never sent if there was no real bitmap to draw. A solid color is not a bitmap. As a result, the logon system waited in vain for the background-ready report, and users only reached the desktop once the 30-second fallback kicked in. The same delay could occur with the "Hide desktop icons" group policy enabled: that policy check was bolted on after the main code had been written, as an If statement that skipped the icon-creation code entirely, along with the icons' "ready" report, so logon again waited out the full 30 seconds.
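
The pattern is easier to see in miniature. Here is a toy reconstruction in Python, not Microsoft's actual code: logon waits for a "ready" report from each desktop component, but the wallpaper report lives inside the bitmap-drawing branch, so a solid-color background leaves the wait to run out its 30-second fallback (the hidden-icons case is analogous).

```python
import threading

READY_TIMEOUT_SECONDS = 30        # the fallback Chen describes
_pending = {"taskbar", "desktop window", "icons", "wallpaper"}
_all_ready = threading.Event()

def report_ready(component: str) -> None:
    """Each desktop component reports here once it has finished loading."""
    _pending.discard(component)
    if not _pending:
        _all_ready.set()

def draw_wallpaper(bitmap) -> None:
    if bitmap is None:
        # A solid color isn't a bitmap, so the bitmap-drawing path never runs,
        # and the "wallpaper is ready" report lived inside that path (the bug).
        return
    # ... imagine the bitmap being drawn here ...
    report_ready("wallpaper")

def logon(bitmap=None) -> None:
    report_ready("taskbar")
    report_ready("desktop window")
    report_ready("icons")  # the "Hide desktop icons" policy bug skipped a report like this one
    draw_wallpaper(bitmap)
    # With a solid-color background, this wait only ends when the fallback fires.
    _all_ready.wait(timeout=READY_TIMEOUT_SECONDS)
    print("switching to the desktop")

if __name__ == "__main__":
    logon(bitmap=None)  # simulates a solid-color background: ~30-second delay
```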

Movies

Netflix Introduces a New Kind of Subtitles For the Non-Hearing Impaired (arstechnica.com) 103

An anonymous reader quotes a report from Ars Technica: Multiple studies and investigations have found that about half of American households watch TV and movies with subtitles on, but only a relatively small portion of those include someone with a hearing disability. That's because of the trouble many people have understanding dialogue in modern viewing situations, and Netflix has now introduced a subtitles option to help.

The closed captioning we've all been using for years includes not only the words the people on-screen are saying, but additional information needed by the hard of hearing, including character names, music cues ("dramatic music intensifies") and sound effects ("loud explosion"). For those who just wanted to make sure they didn't miss a word here and there, the frequent descriptions of sound effects and music could be distracting. This new format omits those extras, just including the spoken words and nothing else -- even in the same language as the spoken dialogue. The feature will be available in new Netflix original programming, starting with the new season of You in multiple languages. Netflix says it's looking at bringing the option to older titles in the library (including those not produced by Netflix) in the future.

Traditional closed captions are still available, of course. Those are labeled "English CC" whereas this new option is simply labeled "English" (or whatever your preferred language is).

Programming

AI Tackles Aging COBOL Systems as Legacy Code Expertise Dwindles 76

US government agencies and Fortune 500 companies are turning to AI to modernize mission-critical systems built on COBOL, a programming language dating back to the late 1950s. The US Social Security Administration plans a three-year, $1 billion AI-assisted upgrade of its legacy COBOL codebase [alternative source], according to Bloomberg.

Treasury Secretary Scott Bessent has repeatedly stressed the need to overhaul government systems running on COBOL. As experienced programmers retire, organizations face growing challenges maintaining these systems that power everything from banking applications to pension disbursements. Engineers now use tools like ChatGPT and IBM's watsonX to interpret COBOL code, create documentation, and translate it to modern languages.
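
The article doesn't show what that workflow looks like, but the basic pattern is easy to sketch. The example below is illustrative only, my assumption of the pattern rather than anything from the report: it uses the OpenAI Python client, and the model name and COBOL fragment are placeholders.

```python
# Hand an LLM a COBOL fragment and ask for documentation plus a translation.
from openai import OpenAI

COBOL_FRAGMENT = """\
IDENTIFICATION DIVISION.
PROGRAM-ID. INTCALC.
DATA DIVISION.
WORKING-STORAGE SECTION.
01 WS-BALANCE PIC 9(7)V99 VALUE 1000.00.
01 WS-RATE    PIC 9V999   VALUE 0.035.
PROCEDURE DIVISION.
    COMPUTE WS-BALANCE = WS-BALANCE * (1 + WS-RATE).
    DISPLAY WS-BALANCE.
    STOP RUN.
"""

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "Explain what this COBOL program does, then translate it "
                   "to idiomatic Python:\n\n" + COBOL_FRAGMENT,
    }],
)
print(response.choices[0].message.content)
```
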
AI

OpenAI Debuts Codex CLI, an Open Source Coding Tool For Terminals (techcrunch.com) 9

OpenAI has released Codex CLI, an open-source coding agent that runs locally in users' terminal software. Announced alongside the company's new o3 and o4-mini models, Codex CLI directly connects OpenAI's AI systems with local code and computing tasks, enabling them to write and manipulate code on users' machines.

The lightweight tool allows developers to leverage multimodal reasoning capabilities by passing screenshots or sketches to the model while providing access to local code repositories. Unlike more ambitious future plans for an "agentic software engineer" that could potentially build entire applications from descriptions, Codex CLI focuses specifically on integrating AI models with command-line interfaces.

To accelerate adoption, OpenAI is distributing $1 million in API credits through a grant program, offering $25,000 blocks to selected projects. While the tool expands AI's role in programming workflows, it comes with inherent risks -- studies show AI coding models frequently fail to fix security vulnerabilities and sometimes introduce new bugs, particularly concerning when given system-level access.
Education

Palantir's 'Meritocracy Fellowship' Urges High School Grads to Skip College's 'Indoctrination' and Debt (thestreet.com) 122

Stanford law school graduate Peter Thiel later co-founded PayPal and Palantir (and was an early investor in Facebook). But in 2010 Thiel also created the Thiel Fellowship, which annually gives $100,000 to 20 to 30 people under the age of 23 "to encourage students to not stick around college." (College students must drop out in order to accept the fellowship.)

And now Palantir "is taking a similar approach as it maneuvers to attract new talent," reports financial news site The Street: The company has launched what it refers to as the "Meritocracy Fellowship," a four-month internship program for recent high school graduates who have not enrolled in college. The position pays roughly $5,400 per month, more than plenty of post-college internship programs. Palantir's job posting suggests that the company is especially interested in candidates with experience in programming and statistical analysis.
Palantir's job listing specifically says they launched their four-month fellowship "in response to the shortcomings of university admissions," promising it would be based "solely on merit and academic excellence" (requiring an SAT score over 1459 or an ACT score above 32.) "Opaque admissions standards at many American universities have displaced meritocracy and excellence..." As a result, qualified students are being denied an education based on subjective and shallow criteria. Absent meritocracy, campuses have become breeding grounds for extremism and chaos... Skip the debt. Skip the indoctrination. Get the Palantir Degree...

Upon successful completion of the Meritocracy Fellowship, fellows that have excelled during their time at Palantir will be given the opportunity to interview for full-time employment at Palantir.

Programming

AI Models Still Struggle To Debug Software, Microsoft Study Shows (techcrunch.com) 43

Some of the best AI models today still struggle to resolve software bugs that wouldn't trip up experienced devs. TechCrunch: A new study from Microsoft Research, Microsoft's R&D division, reveals that models, including Anthropic's Claude 3.7 Sonnet and OpenAI's o3-mini, fail to debug many issues in a software development benchmark called SWE-bench Lite. The results are a sobering reminder that, despite bold pronouncements from companies like OpenAI, AI is still no match for human experts in domains such as coding.

The study's co-authors tested nine different models as the backbone for a "single prompt-based agent" that had access to a number of debugging tools, including a Python debugger. They tasked this agent with solving a curated set of 300 software debugging tasks from SWE-bench Lite.
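
The paper's harness isn't reproduced in the article, but the kind of tool it describes, a Python debugger the agent can drive, might look roughly like the minimal sketch below; the script name, breakpoint, and debugger commands are placeholders, not details from the study.

```python
import subprocess
import sys

def run_pdb(script: str, commands: list[str], timeout: int = 60) -> str:
    """Run `script` under pdb, feed it debugger commands (e.g. 'break 12',
    'continue', 'p some_var'), and return the transcript for the model."""
    proc = subprocess.run(
        [sys.executable, "-m", "pdb", script],
        input="\n".join(commands + ["quit"]) + "\n",
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    return proc.stdout + proc.stderr

if __name__ == "__main__":
    # "buggy.py" and the breakpoint line are placeholders for whatever code
    # the agent is currently investigating.
    print(run_pdb("buggy.py", ["break 10", "continue", "p locals()"]))
```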

According to the co-authors, even when equipped with stronger and more recent models, their agent rarely completed more than half of the debugging tasks successfully. Claude 3.7 Sonnet had the highest average success rate (48.4%), followed by OpenAI's o1 (30.2%), and o3-mini (22.1%).

Python

Python's PyPI Finally Gets Closer to Adding 'Organization Accounts' and SBOMs (mailchi.mp) 1

Back in 2023 Python's infrastructure director called it "the first step in our plan to build financial support and long-term sustainability of PyPI" while giving users "one of our most requested features: organization accounts." (That is, "self-managed teams with their own exclusive branded web addresses" to make their massive Python Package Index repository "easier to use for large community projects, organizations, or companies who manage multiple sub-teams and multiple packages.")

Nearly two years later, they've announced that they're "making progress" on its rollout... Over the last month, we have taken some more baby steps to onboard new Organizations, welcoming 61 new Community Organizations and our first 18 Company Organizations. We're still working to improve the review and approval process and hope to improve our processing speed over time. To date, we have 3,562 Community and 6,424 Company Organization requests to process in our backlog.
They've also onboarded a PyPI Support Specialist to provide "critical bandwidth to review the backlog of requests" and "free up staff engineering time to develop features to assist in that review." (And "we were finally able to finalize our Terms of Service document for PyPI," build the tooling necessary to notify users, and initiate the Terms of Service rollout.) [Since launching 20 years ago, PyPI's terms of service have only been updated twice.]

In other news, the Python Software Foundation's security developer-in-residence, Seth Larson, has been continuing work on a Software Bill-of-Materials (SBOM) feature as described in Python Enhancement Proposal #770. The feature "would designate a specific directory inside of Python package metadata (".dist-info/sboms") as a directory where build backends and other tools can store SBOM documents that describe components within the package beyond the top-level component." The goal of this project is to make bundled dependencies measurable by software analysis tools like vulnerability scanning, license compliance, and static analysis tools. Bundled dependencies are common for scientific computing and AI packages, but also generally in packages that use multiple programming languages like C, C++, Rust, and JavaScript. The PEP has been moved to Provisional Status, meaning the PEP sponsor is doing a final review before tools can begin implementing the PEP ahead of its final acceptance into the Python packaging standards. Larson has begun implementing code that tools can use when adopting the PEP, such as a project that abstracts the functionality of different Linux system package managers to map a file path back to the metadata of the package that provides it.
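
PEP 770 is still provisional, so the sketch below is an assumption about how a tool might consume the feature rather than code from the PEP or from Larson's project: it uses only the standard library to list any SBOM documents a build backend has stored under a package's ".dist-info/sboms" directory (the package name at the bottom is just an example).

```python
from importlib.metadata import PackageNotFoundError, distribution

def find_sbom_documents(package_name: str) -> list:
    """Return paths of SBOM files recorded in an installed package's metadata."""
    try:
        dist = distribution(package_name)
    except PackageNotFoundError:
        return []
    files = dist.files or []
    # PEP 770 proposes that SBOMs live in a "sboms/" subdirectory of the
    # package's .dist-info directory; match on that path segment.
    return [dist.locate_file(f) for f in files
            if ".dist-info/sboms/" in f.as_posix()]

if __name__ == "__main__":
    for path in find_sbom_documents("numpy"):  # example package name
        print(path)
```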

Security developer-in-residence Seth Larson will be speaking about this project at PyCon US 2025 in Pittsburgh, PA in a talk titled "Phantom Dependencies: is your requirements.txt haunted?"

Meanwhile InfoWorld reports that newly approved Python Enhancement Proposal 751 will also give Python a standard lock file format.
Networking

Eric Raymond, John Carmack Mourn Death of 'Bufferbloat' Fighter Dave Taht (x.com) 18

Wikipedia remembers Dave Täht as "an American network engineer, musician, lecturer, asteroid exploration advocate, and Internet activist. He was the chief executive officer of TekLibre."

But on X.com Eric S. Raymond called him "one of the unsung heroes of the Internet, and a close friend of mine who I will miss very badly." Dave, known on X as @mtaht because his birth name was Michael, was a true hacker of the old school who touched the lives of everybody using X. His work on mitigating bufferbloat improved practical TCP/IP performance tremendously, especially around video streaming and other applications requiring low latency. Without him, Netflix and similar services might still be plagued by glitches and stutters.
Also on X, legendary game developer John Carmack remembered that Täht "did a great service for online gamers with his long campaign against bufferbloat in routers and access points. There is a very good chance your packets flow through some code he wrote." (Carmack also says he and Täht "corresponded for years".)

Long-time Slashdot reader TheBracket remembers him as "the driving force behind the Bufferbloat project and a contributor to FQ-CoDel, and CAKE in the Linux kernel."

Dave spent years doing battle with Internet latency and bufferbloat, contributing to countless projects. In recent years, he's been working with Robert, Frank and myself at LibreQoS to provide CAKE at the ISP level, helping Starlink with their latency and bufferbloat, and assisting the OpenWrt project.
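
For readers unfamiliar with the term: bufferbloat is the extra latency that appears when oversized buffers in routers and access points fill up under load. The numbers below are a back-of-the-envelope illustration, not measurements from any of Täht's projects, but they show why he cared so much about queue management.

```python
# Rough illustration of bufferbloat: queueing delay grows with buffer size.
# The figures are illustrative, not measurements from any real router.

LINK_RATE_MBPS = 10                 # uplink speed of a modest home connection
BUFFER_SIZES_KB = [64, 256, 1024]   # how much unsent data a router might queue

for buf_kb in BUFFER_SIZES_KB:
    backlog_bits = buf_kb * 1024 * 8
    delay_ms = backlog_bits / (LINK_RATE_MBPS * 1_000_000) * 1000
    print(f"{buf_kb:>5} KB queued on a {LINK_RATE_MBPS} Mbps uplink "
          f"adds ~{delay_ms:.0f} ms of latency")

# Smart queue management such as FQ-CoDel and CAKE keeps the standing queue
# small and shares it fairly between flows, so latency stays low even when
# the link is saturated.
```
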
Eric Raymond remembered first meeting Täht in 2001 "near the peak of my Mr. Famous Guy years. Once, sometimes twice a year he'd come visit, carrying his guitar, and crash out in my basement for a week or so hacking on stuff. A lot of the central work on bufferbloat got done while I was figuratively looking over his shoulder..."

Raymond said Täht "lived for the work he did" and "bore deteriorating health stoically. While I know him he went blind in one eye and was diagnosed with multiple sclerosis." He barely let it slow him down. Despite constantly griping in later years about being burned out on programming, he kept not only doing excellent work but bringing good work out of others, assembling teams of amazing collaborators to tackle problems lesser men would have considered intractable... Dave should have been famous, and he should have been rich. If he had a cent for every dollar of value he generated in the world he probably could have bought the entire country of Nicaragua and had enough left over to finance a space program. He joked about wanting to do the latter, and I don't think he was actually joking...

In the invisible college of people who made the Internet run, he was among the best of us. He said I inspired him, but I often thought he was a better and more selfless man than me. Ave atque vale, Dave.

Weeks before his death Täht was still active on X.com, retweeting LWN's article about "The AI scraperbot scourge", an announcement from Texas Instruments, and even a Slashdot headline.

Täht was also Slashdot reader #603,670, submitting stories about network latency, leaving comments about AI, and making announcements about the Bufferbloat project.
AI

95% of Code Will Be AI-Generated Within Five Years, Microsoft CTO Says 130

Microsoft Chief Technology Officer Kevin Scott has predicted that AI will generate 95% of code within five years. Speaking on the 20VC podcast, Scott said AI would not replace software engineers but transform their role. "It doesn't mean that the AI is doing the software engineering job.... authorship is still going to be human," Scott said.

According to Scott, developers will shift from writing code directly to guiding AI through prompts and instructions. "We go from being an input master (programming languages) to a prompt master (AI orchestrator)," he said. Scott said the current AI systems have significant memory limitations, making them "awfully transactional," but predicted improvements within the next year.
Programming

'There is No Vibe Engineering' 121

Software engineer Sergey Tselovalnikov weighs in on the new hype: The term caught on and Twitter quickly flooded with posts about how AI has radically transformed coding and will soon replace all software engineers. While AI undeniably impacts the way we write code, it hasn't fundamentally changed our role as engineers. Allow me to explain.

[...] Vibe coding is interacting with the codebase via prompts. As the implementation is hidden from the "vibe coder", all the engineering concerns will inevitably get ignored. Many of the concerns are hard to express in a prompt, and many of them are hard to verify by only inspecting the final artifact. Historically, all engineering practices have tried to shift all those concerns left -- to the earlier stages of development when they're cheap to address. Yet with vibe coding, they're shifted very far to the right -- when addressing them is expensive.

The question of whether an AI system can perform the complete engineering cycle and build and evolve software the same way a human can remains open. However, there are no signs of it being able to do so at this point, and if it one day happens, it won't have anything to do with vibe coding -- at least the way it's defined today.

[...] It is possible that there'll be a future where software is built from vibe-coded blocks, but the work of designing software able to evolve and scale doesn't go away. That's not vibe engineering -- that's just engineering, even if the coding part of it will look a bit different.
Programming

'No Longer Think You Should Learn To Code,' Says CEO of AI Coding Startup (x.com) 108

Learning to code has sort of become pointless as AI increasingly dominates programming tasks, said Replit founder and chief executive Amjad Masad. "I no longer think you should learn to code," Masad wrote on X.

The statement comes as major tech executives report significant AI inroads into software development. Google CEO Sundar Pichai recently revealed that 25% of new code at the tech giant is AI-generated, though still reviewed by engineers. Furthermore, Anthropic CEO Dario Amodei predicted AI could generate up to 90% of all code within six months.

Masad called this shift a "bittersweet realization" after spending years popularizing coding through open-source work, Codecademy, and Replit -- a platform that now uses AI to help users build apps and websites. Instead of syntax-focused programming skills, Masad recommends learning "how to think, how to break down problems... how to communicate clearly, with humans and with machines."
