Programming

Diffusion + Coding = DiffuCode. How Apple Released a Weirdly Interesting Coding Language Model (9to5mac.com) 7

"Apple quietly dropped a new AI model on Hugging Face with an interesting twist," writes 9to5Mac. "Instead of writing code like traditional LLMs generate text (left to right, top to bottom), it can also write out of order, and improve multiple chunks at once."

"The result is faster code generation, at a performance that rivals top open-source coding models." Traditionally, most LLMs have been autoregressive. This means that when you ask them something, they process your entire question, predict the first token of the answer, reprocess the entire question with the first token, predict the second token, and so on. This makes them generate text like most of us read: left to right, top to bottom... An alternative to autoregressive models is diffusion models, which have been more often used by image models like Stable Diffusion. In a nutshell, the model starts with a fuzzy, noisy image, and it iteratively removes the noise while keeping the user request in mind, steering it towards something that looks more and more like what the user requested...

Lately, some large language models have looked to the diffusion architecture to generate text, and the results have been pretty promising... This behavior is especially useful for programming, where global structure matters more than linear token prediction... [Apple] released an open-source model called DiffuCoder-7B-cpGRPO, which builds on top of a paper called DiffuCoder: Understanding and Improving Masked Diffusion Models for Code Generation, released just last month... [W]ith an extra training step called coupled-GRPO, it learned to generate higher-quality code with fewer passes. The result? Code that's faster to generate, globally coherent, and competitive with some of the best open-source programming models out there.
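
The decoding loop for a masked-diffusion model looks quite different. The toy sketch below (with a random stub in place of a real model) only shows the control flow the article describes: start fully masked, then commit the most confident tokens anywhere in the sequence over a few refinement passes:

    # Toy illustration of masked-diffusion decoding. The "model" is a random
    # stub that exists only to show the out-of-order refinement loop.
    import random

    VOCAB = ["def", "add", "(", "a", ",", "b", ")", ":", "return", "a", "+", "b"]
    MASK = "<mask>"

    def fake_model(tokens):
        # Pretend to predict (token, confidence) for every masked position.
        return {i: (random.choice(VOCAB), random.random())
                for i, t in enumerate(tokens) if t == MASK}

    def diffusion_decode(length=12, steps=4):
        tokens = [MASK] * length
        per_step = length // steps
        for _ in range(steps):
            preds = fake_model(tokens)
            # Commit the most confident predictions first, wherever they are.
            for i, (tok, _) in sorted(preds.items(), key=lambda kv: -kv[1][1])[:per_step]:
                tokens[i] = tok
            print(" ".join(tokens))
        return tokens

    diffusion_decode()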

Even more interestingly, Apple's model is built on top of Qwen2.5-7B, an open-source foundation model from Alibaba. Alibaba first fine-tuned that model for better code generation (as Qwen2.5-Coder-7B), then Apple took it and made its own adjustments. They turned it into a new model with a diffusion-based decoder, as described in the DiffuCoder paper, and then adjusted it again to better follow instructions. Once that was done, they trained yet another version of it using more than 20,000 carefully picked coding examples.

"Although DiffuCoder did better than many diffusion-based coding models (and that was before the 4.4% bump from DiffuCoder-7B-cpGRPO), it still doesn't quite reach the level of GPT-4 or Gemini Diffusion..." the article points out.

But "the bigger point is this: little by little, Apple has been laying the groundwork for its generative AI efforts with some pretty interesting and novel ideas."
AI

'Vibe Coder' Who Doesn't Know How to Code Keeps Winning Hackathons in San Francisco (sfstandard.com) 173

An anonymous reader shared this report from the San Francisco Standard: About an hour into my meeting with the undisputed hackathon king of San Francisco, Rene Turcios asked if I wanted to smoke a joint with him. I politely declined, but his offer hardly surprised me. Turcios has built a reputation as a cannabis-loving former professional Yu-Gi-Oh! player who resells Labubus out of his Tenderloin apartment when he's not busy attending nearly every hackathon happening in the city. Since 2023, Turcios, 29, has attended more than 200 events, where he's won cash, software credits, and clout. "I'm always hustling," he said.

The craziest part: he doesn't even know how to code.

"Rene is the original vibe coder," said RJ Moscardon, a friend and fellow hacker who watched Turcios win second place at his first-ever hackathon at the AGI House mansion in Hillsborough. "All the engineers with prestigious degrees scoffed at him at first. But now they're all doing exactly the same thing...." Turcios was vibe coding long before the technique had a name — and was looked down upon by longtime hackers for using AI. But as Tiger Woods once said, "Winning takes care of everything...."

Instead of vigorously coding until the deadline, he finished his projects hours early by getting AI to do the technical work for him. "I didn't write a single line of code," Turcios said of his first hackathon where he prompted ChatGPT using plain English to generate a program that can convert any song into a lo-fi version. When the organizers announced Turcios had won second place, he screamed in celebration.... "I realized that I could compete with people who have degrees and fancy jobs...."

Turcios is now known for being able to build anything quickly. Businesses reach out to him to contract out projects that would take software engineering teams weeks — and he delivers in hours. He's even started running workshops to teach non-technical groups and experienced software engineers how to get the most out of AI for coding.

"He grew up in Missouri to parents who worked in an international circus, taming bears and lions..."
Programming

How Do You Teach Computer Science in the Age of AI? (thestar.com.my) 177

"A computer science degree used to be a golden ticket to the promised land of jobs," a college senior tells the New York Times. But "That's no longer the case."

The article notes that over the last three years there's been a 65% drop in listings from companies seeking workers with two years of experience or less (according to an analysis by the technology research and education organization CompTIA), with tech companies "relying more on AI for some aspects of coding, eliminating some entry-level work."

So what do college professors teach when AI "is coming fastest and most forcefully to computer science"? Computer science programs at universities across the country are now scrambling to understand the implications of the technological transformation, grappling with what to keep teaching in the AI era. Ideas range from less emphasis on mastering programming languages to focusing on hybrid courses designed to inject computing into every profession, as educators ponder what the tech jobs of the future will look like in an AI economy... Some educators now believe the discipline could broaden to become more like a liberal arts degree, with a greater emphasis on critical thinking and communication skills.

The National Science Foundation is funding a program, Level Up AI, to bring together university and community college educators and researchers to move toward a shared vision of the essentials of AI education. The 18-month project, run by the Computing Research Association, a research and education nonprofit, in partnership with New Mexico State University, is organizing conferences and roundtables and producing white papers to share resources and best practices. The NSF-backed initiative was created because of "a sense of urgency that we need a lot more computing students — and more people — who know about AI in the workforce," said Mary Lou Maher, a computer scientist and a director of the Computing Research Association.

The future of computer science education, Maher said, is likely to focus less on coding and more on computational thinking and AI literacy. Computational thinking involves breaking down problems into smaller tasks, developing step-by-step solutions and using data to reach evidence-based conclusions. AI literacy is an understanding — at varying depths for students at different levels — of how AI works, how to use it responsibly and how it is affecting society. Nurturing informed skepticism, she said, should be a goal.
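
As a hypothetical classroom-style illustration of what "computational thinking" means in practice (not an example from the article), the fragment below breaks a question into small, testable steps and lets data drive the conclusion:

    # Decompose the question "did scores improve?" into small steps:
    # load the data, compute an average, then compare the two terms.
    def load_scores():
        # Made-up numbers, standing in for whatever data set a class collects.
        return {"fall": [72, 85, 90, 66], "spring": [78, 88, 93, 74]}

    def average(values):
        return sum(values) / len(values)

    def improved(before, after):
        return average(after) > average(before)

    scores = load_scores()
    print("Did the average rise?", improved(scores["fall"], scores["spring"]))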

The article raises other possibilities. Experts also suggest the possibility of "a burst of technology democratization as chatbot-style tools are used by people in fields from medicine to marketing to create their own programs, tailored for their industry, fed by industry-specific data sets." Stanford CS professor Alex Aiken even argues that "The growth in software engineering jobs may decline, but the total number of people involved in programming will increase."

Last year, Carnegie Mellon actually endorsed using AI for its introductory CS courses. The dean of the school's undergraduate programs believes that coursework "should include instruction in the traditional basics of computing and AI principles, followed by plenty of hands-on experience designing software using the new tools."
Programming

Microsoft Open Sources Copilot Chat for VS Code on GitHub (nerds.xyz) 18

"Microsoft has released the source code for the GitHub Copilot Chat extension for VS Code under the MIT license," reports BleepingComputer. This provides the community access to the full implementation of the chat-based coding assistant, including the implementation of "agent mode," what contextual data is sent to large language models (LLMs), and the design of system prompts. The GitHub repository hosting the code also details telemetry collection mechanisms, addressing long-standing questions about data transparency in AI-assisted coding tools...

As the VS Code team explained previously, shifts in the AI tooling landscape, such as the rapid growth of the open-source AI ecosystem and a more level playing field, have reduced the need for secrecy around prompt engineering and UI design. At the same time, increased targeting of development tools by malicious actors has made crowdsourced contributions more important for rapidly pinpointing problems and developing effective fixes. Essentially, openness is now considered superior from a security perspective.

"If you've been hesitant to adopt AI tools because you don't trust the black box behind them, this move opensources-github-copilot-chat-vscode/offers something rare these days: transparency," writes Slashdot reader BrianFagioli" Now that the extension is open source, developers can audit how agent mode actually works. You can also dig into how it manages your data, customize its behavior, or build entirely new tools on top of it. This could be especially useful in enterprise environments where compliance and control are non negotiable.

It is worth pointing out that the backend models powering Copilot remain closed source. So no, you won't be able to self-host the whole experience or train your own Copilot. But everything running locally in VS Code is now fair game. Microsoft says it is planning to eventually merge inline code completions into the same open-source package too, which would make Copilot Chat the new hub for both chat and suggestions.

AI

AI Coding Agents Are Already Commoditized (seangoedecke.com) 64

Software engineer Sean Goedecke argues that AI coding agents are already commoditized: there's no special technical advantage in the agents themselves, just increasingly capable base models underneath. He writes: All of a sudden, it's the year of AI coding agents. Anthropic released Claude Code, OpenAI released their Codex agent, GitHub released its own autonomous coding agent, and so on. I've done my fair share of writing about whether AI coding agents will replace developers, and in the meantime how best to use them in your work. Instead, I want to make what I think is now a pretty firm observation: AI coding agents have no secret sauce.

[...] The reason everyone's doing agents now is the same reason everyone's doing reinforcement learning now -- from one day to the next, the models got good enough. Claude Sonnet 3.7 is the clear frontrunner here. It's not the smartest model (in my opinion), but it is the most agentic: it can stick with a task and make good decisions over time better than other models with more raw brainpower. But other AI labs have more agentic models now as well. There is no moat.

There's also no moat to the actual agent code. It turns out that "put the model in a loop with a 'read file' and 'write file' tool" is good enough to do basically anything you want. I don't know for sure that the closed-source options operate like this, but it's an educated guess. In other words, the agent hackers in 2023 were correct, and the only reason they couldn't build Claude Code then was that they were too early to get to use the really good models.
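
As a rough sketch of the "model in a loop with file tools" pattern Goedecke describes (the LLM call is deliberately left as a placeholder; this is not how Claude Code or Codex is actually implemented):

    # Minimal agent loop: the model repeatedly picks a tool, the harness runs
    # it, and the result is fed back in, until the model says it is done.
    import json
    import pathlib

    def read_file(path):
        return pathlib.Path(path).read_text()

    def write_file(path, content):
        pathlib.Path(path).write_text(content)
        return f"wrote {len(content)} characters to {path}"

    TOOLS = {"read_file": read_file, "write_file": write_file}

    def call_llm(history):
        # Placeholder: send `history` to any chat model and have it answer with
        # JSON like {"tool": "read_file", "args": {"path": "main.py"}} or
        # {"done": "summary of what was changed"}.
        raise NotImplementedError("plug a model API in here")

    def run_agent(task, max_steps=20):
        history = [{"role": "user", "content": task}]
        for _ in range(max_steps):
            action = json.loads(call_llm(history))
            if "done" in action:
                return action["done"]
            result = TOOLS[action["tool"]](**action["args"])
            history.append({"role": "tool", "content": result})
        return "step limit reached"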

Software

The Software Engineering 'Squeeze' (manager.dev) 113

Software developer Anton Zaides argues that software engineers have had it easy for decades, and that the "best profession" on earth deserved the wake-up call. He writes: It's not just one of the hardest times; it's also one of the most exciting.

I'm hugely optimistic about the software engineering career. All those companies started by vibe-coders all around you? Many will succeed, and will need great engineers to scale up.

Some engineers understand this, and use the chance to skill up. To succeed, you'll probably need all the skills of an engineer, some of a PM, and even a bit of design taste. It's not just about shipping code anymore.

But if you work as a code monkey, getting detailed tickets and just shipping them, you've done this to yourself. You won't be needed pretty soon.

I believe there are too many mediocre engineers, but also not enough great ones.

Movies

NASA To Stream Rocket Launches and Spacewalks On Netflix (nerds.xyz) 18

BrianFagioli shares a report from NERDS.xyz: NASA is coming to Netflix. No, not a drama or sci-fi reboot. The space agency is actually bringing real rocket launches, astronaut spacewalks, and even views of Earth from space directly to your favorite streaming service. Starting this summer, NASA+ will be available on Netflix, giving the space-curious a front-row seat to live mission coverage and other programming.

The space agency is hoping this move helps it connect with a much bigger audience, and considering Netflix reaches over 700 million people, that's not a stretch. This partnership is about accessibility. NASA already offers NASA+ for free, without ads, through its app and website. But now it's going where the eyeballs are. If people won't come to the space agency, the space agency will come to them.

Security

New NSA/CISA Report Again Urges the Use of Memory-Safe Programming Language (theregister.com) 66

An anonymous reader shared this report from the tech news site The Register: The U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the National Security Agency (NSA) this week published guidance urging software developers to adopt memory-safe programming languages. "The importance of memory safety cannot be overstated," the inter-agency report says...

The CISA/NSA report revisits the rationale for greater memory safety and the government's calls to adopt memory-safe languages (MSLs) while also acknowledging the reality that not every agency can change horses mid-stream. "A balanced approach acknowledges that MSLs are not a panacea and that transitioning involves significant challenges, particularly for organizations with large existing codebases or mission-critical systems," the report says. "However, several benefits, such as increased reliability, reduced attack surface, and decreased long-term costs, make a strong case for MSL adoption."

The report cites how Google by 2024 managed to reduce memory safety vulnerabilities in Android to 24 percent of the total. It goes on to provide an overview of the various benefits of adopting MSLs and discusses adoption challenges. And it urges the tech industry to promote memory safety by, for example, advertising jobs that require MSL expertise.

It also cites various government projects to accelerate the transition to MSLs, such as the Defense Advanced Research Projects Agency (DARPA) Translating All C to Rust (TRACTOR) program, which aspires to develop an automated method to translate C code to Rust. A recent effort along these lines, dubbed Omniglot, has been proposed by researchers at Princeton, UC Berkeley, and UC San Diego. It provides a safe way for unsafe libraries to communicate with Rust code through a Foreign Function Interface....

"Memory vulnerabilities pose serious risks to national security and critical infrastructure," the report concludes. "MSLs offer the most comprehensive mitigation against this pervasive and dangerous class of vulnerability."

"Adopting memory-safe languages can accelerate modern software development and enhance security by eliminating these vulnerabilities at their root," the report concludes, calling the idea "an investment in a secure software future."

"By defining memory safety roadmaps and leading the adoption of best practices, organizations can significantly improve software resilience and help ensure a safer digital landscape."
AI

AI Improves At Improving Itself Using an Evolutionary Trick (ieee.org) 41

Technology writer Matthew Hutson (also Slashdot reader #1,467,653) looks at a new kind of self-improving AI coding system. It rewrites its own code based on empirical evidence of what's helping — as described in a recent preprint on arXiv.

From Hutson's new article in IEEE Spectrum: A Darwin Gödel Machine (or DGM) starts with a coding agent that can read, write, and execute code, leveraging an LLM for the reading and writing. Then it applies an evolutionary algorithm to create many new agents. In each iteration, the DGM picks one agent from the population and instructs the LLM to create one change to improve the agent's coding ability [by creating "a new, interesting, version of the sampled agent"]. LLMs have something like intuition about what might help, because they're trained on lots of human code. What results is guided evolution, somewhere between random mutation and provably useful enhancement. The DGM then tests the new agent on a coding benchmark, scoring its ability to solve programming challenges...
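
Schematically, the outer loop described above might look like the following sketch, where propose_edit() and run_benchmark() are placeholders for the LLM call and the SWE-bench-style evaluation (this is an illustration of the idea, not the paper's code):

    # Darwin Gödel Machine outer loop, as described: sample an agent from the
    # archive, ask an LLM for one "interesting" modification, score the child,
    # and keep it so later iterations can build on it.
    import random

    def propose_edit(agent_code):
        raise NotImplementedError("LLM call: create a new version of the agent")

    def run_benchmark(agent_code):
        raise NotImplementedError("score the agent on coding tasks, 0.0 to 1.0")

    def darwin_godel_machine(seed_agent, iterations=80):
        archive = [{"code": seed_agent, "score": run_benchmark(seed_agent)}]
        for _ in range(iterations):
            parent = random.choice(archive)              # pick one agent
            child = propose_edit(parent["code"])         # LLM-guided change
            archive.append({"code": child, "score": run_benchmark(child)})
        return max(archive, key=lambda a: a["score"])    # best agent found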

The researchers ran a DGM for 80 iterations using a coding benchmark called SWE-bench, and ran one for 80 iterations using a benchmark called Polyglot. Agents' scores improved on SWE-bench from 20 percent to 50 percent, and on Polyglot from 14 percent to 31 percent. "We were actually really surprised that the coding agent could write such complicated code by itself," said Jenny Zhang, a computer scientist at the University of British Columbia and the paper's lead author. "It could edit multiple files, create new files, and create really complicated systems."

... One concern with both evolutionary search and self-improving systems — and especially their combination, as in DGM — is safety. Agents might become uninterpretable or misaligned with human directives. So Zhang and her collaborators added guardrails. They kept the DGMs in sandboxes without access to the Internet or an operating system, and they logged and reviewed all code changes. They suggest that in the future, they could even reward AI for making itself more interpretable and aligned. (In the study, they found that agents falsely reported using certain tools, so they created a DGM that rewarded agents for not making things up, partially alleviating the problem. One agent, however, hacked the method that tracked whether it was making things up.)

As the article puts it, the agents' improvements compounded "as they improved themselves at improving themselves..."
Desktops (Apple)

After 27 Years, Engineer Discovers How To Display Secret Photo In Power Mac ROM (arstechnica.com) 12

An anonymous reader quotes a report from Ars Technica: On Tuesday, software engineer Doug Brown published his discovery of how to trigger a long-known but previously inaccessible Easter egg in the Power Mac G3's ROM: a hidden photo of the development team that nobody could figure out how to display for 27 years. While Pierre Dandumont first documented the JPEG image itself in 2014, the method to view it on the computer remained a mystery until Brown's reverse engineering work revealed that users must format a RAM disk with the text "secret ROM image."

Brown stumbled upon the image while using a hex editor tool called Hex Fiend with Eric Harmon's Mac ROM template to explore the resources stored in the beige Power Mac G3's ROM. The ROM appeared in desktop, minitower, and all-in-one G3 models from 1997 through 1999. "While I was browsing through the ROM, two things caught my eye," Brown wrote. He found both the HPOE resource containing the JPEG image of team members and a suspicious set of Pascal strings in the PowerPC-native SCSI Manager 4.3 code that included ".Edisk," "secret ROM image," and "The Team."

The strings provided the crucial clue Brown needed. After extracting and disassembling the code using Ghidra, he discovered that the SCSI Manager was checking for a RAM disk volume named "secret ROM image." When found, the code would create a file called "The Team" containing the hidden JPEG data. Brown initially shared his findings on the #mac68k IRC channel, where a user named Alex quickly figured out the activation method. The trick requires users to enable the RAM Disk in the Memory control panel, restart, select the RAM Disk icon, choose "Erase Disk" from the Special menu, and type "secret ROM image" into the format dialog. "If you double-click the file, SimpleText will open it," Brown explains on his blog just before displaying the hidden team photo that emerges after following the steps.
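
For the curious, carving the hidden JPEG straight out of a ROM dump is also possible without booting the machine. The snippet below is a generic marker-scanning approach, not Brown's tooling, and assumes a hypothetical dump file named powermac_g3.rom:

    # Scan a raw ROM dump for JPEG start-of-image (FF D8 FF) and end-of-image
    # (FF D9) markers and write the bytes in between out as a file.
    data = open("powermac_g3.rom", "rb").read()

    start = data.find(b"\xff\xd8\xff")
    if start != -1:
        end = data.find(b"\xff\xd9", start) + 2          # include the marker
        with open("the_team.jpg", "wb") as out:
            out.write(data[start:end])
        print(f"carved {end - start} bytes at offset {start:#x}")
    else:
        print("no JPEG marker found in this dump")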

Android

Apple's Swift Coding Language Is Working On Android Support (9to5google.com) 44

Apple's Swift programming language is expanding official support to Android through a new "Android Working Group" which will improve compatibility, integration, and tooling. "As it stands today, Android apps are generally coded in Kotlin, but Apple is looking to provide its Swift coding language as an alternative," notes 9to5Google. "Apple first launched its coding language back in 2014 with its own platforms in mind, but currently also supports Windows and Linux officially." From the report: A few of the key pillars the Working Group will look to accomplish include:

- Improve and maintain Android support for the official Swift distribution, eliminating the need for out-of-tree or downstream patches
- Recommend enhancements to core Swift packages such as Foundation and Dispatch to work better with Android idioms
- Work with the Platform Steering Group to officially define platform support levels generally, and then work towards achieving official support of a particular level for Android
- Determine the range of supported Android API levels and architectures for Swift integration
- Develop continuous integration for the Swift project that includes Android testing in pull request checks.
- Identify and recommend best practices for bridging between Swift and Android's Java SDK and packaging Swift libraries with Android apps
- Develop support for debugging Swift applications on Android
- Advise and assist with adding support for Android to various community Swift packages

Programming

'The Computer-Science Bubble Is Bursting' 128

theodp writes: "The job of the future might already be past its prime," writes The Atlantic's Rose Horowitch in "The Computer-Science Bubble Is Bursting." "For years, young people seeking a lucrative career were urged to go all in on computer science. From 2005 to 2023, the number of comp-sci majors in the United States quadrupled. All of which makes the latest batch of numbers so startling. This year, enrollment grew by only 0.2 percent nationally, and at many programs, it appears to already be in decline, according to interviews with professors and department chairs. At Stanford, widely considered one of the country's top programs, the number of comp-sci majors has stalled after years of blistering growth. Szymon Rusinkiewicz, the chair of Princeton's computer-science department, told me that, if current trends hold, the cohort of graduating comp-sci majors at Princeton is set to be 25 percent smaller in two years than it is today. The number of Duke students enrolled in introductory computer-science courses has dropped about 20 percent over the past year."

"But if the decline is surprising, the reason for it is fairly straightforward: Young people are responding to a grim job outlook for entry-level coders. In recent years, the tech industry has been roiled by layoffs and hiring freezes. The leading culprit for the slowdown is technology itself. Artificial intelligence has proved to be even more valuable as a writer of computer code than as a writer of words. This means it is ideally suited to replacing the very type of person who built it. A recent Pew study found that Americans think software engineers will be most affected by generative AI. Many young people aren't waiting to find out whether that's true."

Meanwhile, writing in the Communications of the ACM, Orit Hazzan and Avi Salmon ask: Should Universities Raise or Lower Admission Requirements for CS Programs in the Age of GenAI? "This debate raises a key dilemma: should universities raise admission standards for computer science programs to ensure that only highly skilled problem-solvers enter the field, lower them to fill the gaps left by those who now see computer science as obsolete due to GenAI, or restructure them to attract excellent candidates with diverse skill sets who may not have considered computer science prior to the rise of GenAI, but who now, with the intensive GenAI and vibe coding tools supporting programming tasks, may consider entering the field?"
Python

Behind the Scenes at the Python Software Foundation (python.org) 11

The Python Software Foundation ("made up of, governed, and led by the community") does more than just host Python and its documentation, the Python Package Index, and the development workflows of core CPython developers. This week the PSF released its 28-page Annual Impact Report, noting that 2024 was their first year with three CPython developers-in-residence — and "Between Lukasz, Petr, and Serhiy, over 750 pull requests were authored, and another 1,500 pull requests by other authors were reviewed and merged." Lukasz Langa co-implemented the new colorful shell included in Python 3.13, along with Pablo Galindo Salgado, Emily Morehouse-Valcarcel, and Lysandros Nikolaou.... Code-wise, some of the most interesting contributions by Petr Viktorin were around the ctypes module that allows interaction between Python and C.... These are just a few of Serhiy Storchaka's many contributions in 2024: improving error messages for strings, bytes, and bytearrays; reworking support for var-arguments in the C argument handling generator called "Argument Clinic"; fixing memory leaks in regular expressions; raising the limits for Python integers on 64-bit platforms; adding support for arbitrary code page encodings on Windows; improving complex and fraction number support...

Thanks to the investment of [the OpenSSF's security project] Alpha-Omega in 2024, our Security Developer-in-Residence, Seth Larson, continued his work improving the security posture of CPython and the ecosystem of Python packages. Python continues to be an open source security leader, as evidenced by the Linux kernel becoming a CVE Numbering Authority using our guide as well as our publication of a new implementers guide for Trusted Publishers used by Ruby, Crates.io, and NuGet. Python was also recommended as a memory-safe programming language in early 2024 by the White House and CISA following our response to the Office of the National Cyber Director's Request for Information on open source security in 2023... Due to the increasing demand for SBOMs, Seth has taken the initiative to generate SBOM documents for the CPython runtime and all its dependencies, which are now available on python.org/downloads. Seth has also started work on standardizing SBOM documents for Python packages with PEP 770, aiming to solve the "Phantom Dependency" problem and accurately represent non-Python software included in Python packages.

With the continued investment in 2024 by Amazon Web Services Open Source and Georgetown CSET for this critical role, our PyPI Safety & Security Engineer, Mike Fiedler, completed his first full calendar year at the PSF... In March 2024, Mike added a "Report project as malware" button on the website, creating more structure to inbound reports and decreasing remediation time. This new button has been used over 2,000 times! The large spike in June led to prohibiting Outlook email domains, and the spike in November was driven by a persistent attack. Mike developed the ability to place projects in quarantine pending further investigation. Thanks to a grant from Alpha-Omega, Mike will continue his work for a second year. We plan to do more work on minimizing time-on-PyPI for malware in 2025...

In 2024, PyPI saw an 84% growth in download counts and 48% growth in bandwidth, serving 526,072,569,160 downloads for the 610,131 projects hosted there, requiring 1.11 Exabytes of data transfer, or 281.6 Gbps of bandwidth 24x7x365. In 2024, 97k new projects, 1.2 million new releases, and 3.1 million new files were uploaded to the index.
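
Those bandwidth numbers are internally consistent, which is a nice sanity check; spreading 1.11 exabytes evenly over a year does come out to roughly 281.6 Gbps:

    # Back-of-the-envelope check of the PSF's figures.
    exabytes = 1.11
    bits = exabytes * 1e18 * 8              # decimal exabytes -> bits
    seconds_per_year = 365 * 24 * 3600
    print(round(bits / seconds_per_year / 1e9, 1), "Gbps")   # -> 281.6 Gbps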

Stats

RedMonk Ranks Top Programming Languages Over Time - and Considers Ditching Its 'Stack Overflow' Metric (redmonk.com) 40

The developer-focused analyst firm RedMonk releases twice-a-year rankings of programming language popularity. This week they also released a handy graph showing the movement of top 20 languages since 2012. Their current rankings for programming language popularity...

1. JavaScript
2. Python
3. Java
4. PHP
5. C#
6. TypeScript
7. CSS
8. C++
9. Ruby
10. C

The chart shows that over the years the rankings really haven't changed much (other than a surge for TypeScript and Python, plus a drop for Ruby). JavaScript has consistently been #1 (except in two early rankings, where it came in behind Java). And in 2020 Java finally slipped from #2 down to #3, falling behind... Python. Python had already overtaken PHP for the #3 spot in 2017, pushing PHP to a steady #4. C# has maintained the #5 spot since 2014 (though with close competition from both C++ and CSS). And since 2021 the next four spots have been held by Ruby, C, Swift, and R.

The only change in the current top 20 since the last ranking "is Dart dropping from a tie with Rust at 19 into sole possession of 20," writes RedMonk co-founder Stephen O'Grady. "In the decade and a half that we have been ranking these languages, this is by far the least movement within the top 20 that we have seen. While this is to some degree attributable to a general stasis that has settled over the rankings in recent years, the extraordinary lack of movement is likely also in part a manifestation of Stack Overflow's decline in query volume..." The arrival of AI has had a significant and accelerating impact on Stack Overflow, which comprises one half of the data used to both plot and rank languages twice a year... Stack Overflow's value from an observational standpoint is not what it once was, and that has a tangible impact, as we'll see....

As that long time developer site sees fewer questions, it becomes less impactful in terms of driving volatility on its half of the rankings axis, and potentially less suggestive of trends moving forward... [W]e're not yet at a point where Stack Overflow's role in our rankings has been deprecated, but the conversations at least are happening behind the scenes.

"The veracity of the Stack Overflow data is increasingly questionable," writes RedMonk's research director: When we use Stack Overflow for programming language rankings we measure how many questions are asked using specific programming language tags... While other pieces, like Matt Asay's AI didn't kill Stack Overflow are right to point out that the decline existed before the advent of AI coding assistants, it is clear that the usage dramatically decreased post 2023 when ChatGPT became widely available. The number of questions asked are now about 10% what they were at Stack Overflow's peak.
"RedMonk is continuing to evaluate the quality of this analysis," the research director concludes, arguing "there is value in long-lived data, and seeing trends move over a decade is interesting and worthwhile. On the other hand, at this point half of the data feeding the programming language rankings is increasingly stale and of questionable value on a going-forward basis, and there is as of now no replacement public data set available.

"We'll continue to watch and advise you all on what we see with Stack Overflow's data."
Microsoft

Is 'Minecraft' a Better Way to Teach Programming in the Age of AI? (edsurge.com) 58

The education-news site EdSurge published "sponsored content" from Minecraft Education this month. "Students light up when they create something meaningful," the article begins. "Self-expression fuels learning, and creativity lies at the heart of the human experience."

But they also argue that "As AI rapidly reshapes software development, computer science education must move beyond syntax drills and algorithmic repetition." Students "must also learn to think systemically..." As AI automates many of the mechanical aspects of programming, the value of CS education is shifting, from writing perfect code to shaping systems, telling stories through logic and designing ethical, human-centered solutions... [I]t's critical to offer computer science experiences that foster invention, expression and design. This isn't just an education issue — it's a workforce one. Creativity now ranks among the top skills employers seek, alongside analytical thinking and AI literacy. As automation reshapes the job market, McKinsey estimates up to 375 million workers may need to change occupations by 2030. The takeaway? We need more adaptable, creative thinkers.

Creative coding, where programming becomes a medium for self-expression and innovation, offers a promising solution to this disconnect. By positioning code as a creative tool, educators can tap into students' intrinsic motivation while simultaneously building computational thinking skills. This approach helps students see themselves as creators, not just consumers, of technology. It aligns with digital literacy frameworks that emphasize critical evaluation, meaningful contribution and not just technical skills.

One example of creative coding comes from a curriculum that introduces computer science through game design and storytelling in Minecraft... Developed by Urban Arts in collaboration with Minecraft Education, the program offers middle school teachers professional development, ongoing coaching and a 72-session curriculum built around game-based instruction. Designed for grades 6-8, the project-based program is beginner-friendly; no prior programming experience is required for teachers or students. It blends storytelling, collaborative design and foundational programming skills with a focus on creativity and equity.... Students use Minecraft to build interactive narratives and simulations, developing computational thinking and creative design... Early results are promising: 93 percent of surveyed teachers found the Creative Coders program engaging and effective, noting gains in problem-solving, storytelling and coding, as well as growth in critical thinking, creativity and resilience.

As AI tools like GitHub Copilot become standard in development workflows, the definition of programming proficiency is evolving. Skills like prompt engineering, systems thinking and ethical oversight are rising in importance, precisely what creative coding develops... As AI continues to automate routine tasks, students must be able to guide systems, understand logic and collaborate with intelligent tools. Creative coding introduces these capabilities in ways that are accessible, culturally relevant and engaging for today's learners.

Some background from long-time Slashdot reader theodp: The Urban Arts and Microsoft Creative Coders program touted by EdSurge in its advertorial was funded by a $4 million Education Innovation and Research grant that was awarded to Urban Arts in 2023 by the U.S. Education Department "to create an engaging, game-based, middle school CS course using Minecraft tools" for 3,450 middle schoolers (6th-8th grades)" in New York and California (Urban Arts credited Minecraft for helping craft the winning proposal)... New York City is a Minecraft Education believer — the Mayor's Office of Media and Entertainment recently kicked off summer with the inaugural NYC Video Game Festival, which included the annual citywide Minecraft Education Battle of the Boroughs Esports Competition in partnership with NYC Public Schools.
AI

CEOs Have Started Warning: AI is Coming For Your Job (yahoo.com) 124

It's not just Amazon's CEO predicting AI will lower their headcount. "Top executives at some of the largest American companies have a warning for their workers: Artificial intelligence is a threat to your job," reports the Washington Post — including IBM, Salesforce, and JPMorgan Chase.

But are they really just trying to impress their shareholders? Economists say there aren't yet strong signs that AI is driving widespread layoffs across industries.... CEOs are under pressure to show they are embracing new technology and getting results — incentivizing attention-grabbing predictions that can create additional uncertainty for workers. "It's a message to shareholders and board members as much as it is to employees," Molly Kinder, a Brookings Institution fellow who studies the impact of AI, said of the CEO announcements, noting that when one company makes a bold AI statement, others typically follow. "You're projecting that you're out in the future, that you're embracing and adopting this so much that the footprint [of your company] will look different."

Some CEOs fear they could be ousted from their job within two years if they don't deliver measurable AI-driven business gains, a Harris Poll survey conducted for software company Dataiku showed. Tech leaders have sounded some of the loudest warnings — in line with their interest in promoting AI's power...

IBM, which recently announced job cuts, said it replaced a couple hundred human resource workers with AI "agents" for repetitive tasks such as onboarding and scheduling interviews. In January, Meta CEO Mark Zuckerberg suggested on Joe Rogan's podcast that the company is building AI that might be able to do what some human workers do by the end of the year.... Marianne Lake, JPMorgan's CEO of consumer and community banking, told an investor meeting last month that AI could help the bank cut headcount in operations and account services by 10 percent. The CEO of BT Group Allison Kirkby suggested that advances in AI would mean deeper cuts at the British telecom company...

Despite corporate leaders' warnings, economists don't yet see broad signs that AI is driving humans out of work. "We have little evidence of layoffs so far," said Columbia Business School professor Laura Veldkamp, whose research explores how companies' use of AI affects the economy. "What I'd look for are new entrants with an AI-intensive business model, entering and putting the existing firms out of business." Some researchers suggest there is evidence AI is playing a role in the drop in openings for some specific jobs, like computer programming, where AI tools that generate code have become standard... It is still unclear what benefits companies are reaping from employees' use of AI, said Arvind Karunakaran, a faculty member of Stanford University's Center for Work, Technology, and Organization. "Usage does not necessarily translate into value," he said. "Is it just increasing productivity in terms of people doing the same task quicker or are people now doing more high value tasks as a result?"

Lynda Gratton, a professor at London Business School, said predictions of huge productivity gains from AI remain unproven. "Right now, the technology companies are predicting there will be a 30% productivity gain. We haven't yet experienced that, and it's not clear if that gain would come from cost reduction ... or because humans are more productive."

On an earnings call, Salesforce's chief operating and financial officer said AI agents helped them reduce hiring needs — and saved $50 million, according to the article. (And Ethan Mollick, co-director of Wharton School of Business' generative AI Labs, adds that if advanced tools like AI agents can prove their reliability and automate work — that could become a larger disruptor to jobs.) "A wave of disruption is going to happen," he's quoted as saying.

But while the debate continues about whether AI will eliminate or create jobs, Mollick still hedges that "the truth is probably somewhere in between."
AI

Anthropic Deploys Multiple Claude Agents for 'Research' Tool - Says Coding is Less Parallelizable (anthropic.com) 4

In April Anthropic introduced a new AI trick: multiple Claude agents combine for a "Research" feature that can "search across both your internal work context and the web" (as well as Google Workspace "and any integrations...")

But a recent Anthropic blog post notes this feature "involves an agent that plans a research process based on user queries, and then uses tools to create parallel agents that search for information simultaneously," which brings challenges "in agent coordination, evaluation, and reliability.... The model must operate autonomously for many turns, making decisions about which directions to pursue based on intermediate findings." Multi-agent systems work mainly because they help spend enough tokens to solve the problem.... This finding validates our architecture that distributes work across agents with separate context windows to add more capacity for parallel reasoning. The latest Claude models act as large efficiency multipliers on token use, as upgrading to Claude Sonnet 4 is a larger performance gain than doubling the token budget on Claude Sonnet 3.7. Multi-agent architectures effectively scale token usage for tasks that exceed the limits of single agents.

There is a downside: in practice, these architectures burn through tokens fast. In our data, agents typically use about 4× more tokens than chat interactions, and multi-agent systems use about 15× more tokens than chats. For economic viability, multi-agent systems require tasks where the value of the task is high enough to pay for the increased performance. Further, some domains that require all agents to share the same context or involve many dependencies between agents are not a good fit for multi-agent systems today.

For instance, most coding tasks involve fewer truly parallelizable tasks than research, and LLM agents are not yet great at coordinating and delegating to other agents in real time. We've found that multi-agent systems excel at valuable tasks that involve heavy parallelization, information that exceeds single context windows, and interfacing with numerous complex tools.
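
Stripped of the actual model and search tools, the fan-out pattern being described reduces to something like the sketch below, where plan() and research() are placeholders for the lead agent's planning call and each subagent's tool-assisted search:

    # Orchestrator/worker pattern: a lead agent splits the query, subagents run
    # in parallel with their own context, and the lead agent merges the results.
    import asyncio

    async def plan(query):
        # Placeholder for the lead agent's planning step.
        return [f"{query}: background", f"{query}: recent developments",
                f"{query}: open questions"]

    async def research(subquery):
        # Placeholder for a subagent searching with tools in its own context.
        await asyncio.sleep(0.1)
        return f"findings for '{subquery}'"

    async def run(query):
        subqueries = await plan(query)
        findings = await asyncio.gather(*(research(q) for q in subqueries))
        return "\n".join(findings)    # a real lead agent would synthesize these

    print(asyncio.run(run("diffusion language models")))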

Thanks to Slashdot reader ZipNada for sharing the news.
Science

Axolotl Discovery Brings Us Closer Than Ever To Regrowing Human Limbs (sciencealert.com) 41

alternative_right shares a report from ScienceAlert: A team of biologists from Northeastern University and the University of Kentucky has found one of the key molecules involved in axolotl regeneration. It's a crucial component in ensuring the body grows back the right parts in the right spot: for instance, growing a hand, from the wrist. "The cells can interpret this cue to say, 'I'm at the elbow, and then I'm going to grow back the hand' or 'I'm at the shoulder... so I'm going to then enable those cells to grow back the entire limb'," biologist James Monaghan explains.

That molecule, retinoic acid, is arranged through the axolotl body in a gradient, signaling to regenerative cells how far down the limb has been severed. Closer to the shoulder, axolotls have higher levels of retinoic acid, and lower levels of the enzyme that breaks it down. This ratio changes the further the limb extends from the body. The team found this balance between retinoic acid and the enzyme that breaks it down plays a crucial role in 'programming' the cluster of regenerative cells that form at an injury site. When they added surplus retinoic acid to the hand of an axolotl in the process of regenerating, it grew an entire arm instead.

In theory, the human body has the right molecules and cells to do this too, but our cells respond to the signals very differently, instead forming collagen-based scars at injury sites. Next, Monaghan is keen to find out what's going on inside cells -- the axolotl's, and our own -- when those retinoic acid signals are received.
The research is published in Nature Communications.
AI

How Do Olympiad Medalists Judge LLMs in Competitive Programming? 23

A new benchmark assembled by a team of International Olympiad medalists suggests the hype about large language models beating elite human coders is premature. LiveCodeBench Pro, unveiled in a 584-problem study [PDF] drawn from Codeforces, ICPC and IOI contests, shows the best frontier model clears just 53% of medium-difficulty tasks on its first attempt and none of the hard ones, while grandmaster-level humans routinely solve at least some of those highest-tier problems.

The researchers measured models and humans on the same Elo scale used by Codeforces and found that OpenAI's o4-mini-high, when stripped of terminal tools and limited to one try per task, lands at an Elo rating of 2,116 -- hundreds of points below the grandmaster cutoff and roughly the 1.5 percentile among human contestants. A granular tag-by-tag autopsy identified implementation-friendly, knowledge-heavy problems -- segment trees, graph templates, classic dynamic programming -- as the models' comfort zone; observation-driven puzzles such as game-theory endgames and trick-greedy constructs remain stubborn roadblocks.
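
For a sense of what that rating gap means, the standard Elo expected-score formula puts a 2,116-rated contestant well below even odds against opposition around the roughly 2,400-point grandmaster threshold (Codeforces ratings aren't literal head-to-head matches, so treat this only as rough intuition):

    # Elo expected score: probability-like measure of outscoring an opponent.
    def expected_score(r_a, r_b):
        return 1 / (1 + 10 ** ((r_b - r_a) / 400))

    print(round(expected_score(2116, 2400), 2))   # ~0.16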

Because the dataset is harvested in real time as contests conclude, the authors argue it minimizes training-data leakage and offers a moving target for future systems. The broader takeaway is that impressive leaderboard jumps often reflect tool use, multiple retries or easier benchmarks rather than genuine algorithmic reasoning, leaving a conspicuous gap between today's models and top human problem-solvers.
Programming

Apple Migrates Its Password Monitoring Service to Swift from Java, Gains 40% Performance Uplift (infoq.com) 109

Meta and AWS have used Rust, and Netflix uses Go, reports the programming news site InfoQ. But using another language, Apple recently "migrated its global Password Monitoring service from Java to Swift, achieving a 40% increase in throughput, and significantly reducing memory usage."

This freed up nearly 50% of their previously allocated Kubernetes capacity, according to the article, and even "improved startup time, and simplified concurrency." In a recent post, Apple engineers detailed how the rewrite helped the service scale to billions of requests per day while improving responsiveness and maintainability... "Swift allowed us to write smaller, less verbose, and more expressive codebases (close to 85% reduction in lines of code) that are highly readable while prioritizing safety and efficiency."

Apple's Password Monitoring service, part of the broader Passwords app ecosystem, is responsible for securely checking whether a user's saved credentials have appeared in known data breaches, without revealing any private information to Apple. It handles billions of requests daily, performing cryptographic comparisons using privacy-preserving protocols. This workload demands high computational throughput, tight latency bounds, and elastic scaling across regions... Apple's previous Java implementation struggled to meet the service's growing performance and scalability needs. Garbage collection caused unpredictable pause times under load, degrading latency consistency. Startup overhead from JVM initialization, class loading, and just-in-time compilation slowed the system's ability to scale in real time. Additionally, the service's memory footprint, often reaching tens of gigabytes per instance, reduced infrastructure efficiency and raised operational costs.
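
The article doesn't spell out Apple's protocol, and the sketch below is not it; it shows only the simpler hash-prefix ("k-anonymity") idea used by some other breach-check services, as a rough illustration of how membership can be checked without handing over the credential itself:

    # The client hashes its password locally, sends only a short hash prefix,
    # gets back every breached hash in that bucket, and finishes the match
    # locally, so the server never sees the full hash. Breached set is made up.
    import hashlib

    BREACHED = {hashlib.sha1(p.encode()).hexdigest().upper()
                for p in ["password123", "letmein", "hunter2"]}

    def server_bucket(prefix):
        # The server only ever learns a 5-character prefix.
        return {h for h in BREACHED if h.startswith(prefix)}

    def client_is_breached(password):
        digest = hashlib.sha1(password.encode()).hexdigest().upper()
        return digest in server_bucket(digest[:5])

    print(client_is_breached("hunter2"))                        # True
    print(client_is_breached("correct horse battery staple"))   # False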

Originally developed as a client-side language for Apple platforms, Swift has since expanded into server-side use cases.... Swift's deterministic memory management, based on reference counting rather than garbage collection (GC), eliminated latency spikes caused by GC pauses. This consistency proved critical for a low-latency system at scale. After tuning, Apple reported sub-millisecond 99.9th percentile latencies and a dramatic drop in memory usage: Swift instances consumed hundreds of megabytes, compared to tens of gigabytes with Java.

"While this isn't a sign that Java and similar languages are in decline," concludes InfoQ's article, "there is growing evidence that at the uppermost end of performance requirements, some are finding that general-purpose runtimes no longer suffice."
