AI

Copilot Can't Beat a 2013 'TouchDevelop' Code Generation Demo for Windows Phone 18

What happens when you ask Copilot to "write a program that can be run on an iPhone 16 to select 15 random photos from the phone, tint them to random colors, and display the photos on the phone"?

That's what TouchDevelop did for the long-discontinued Windows Phone in a 2013 Microsoft Research 'SmartSynth' natural language code generation demo. ("Write scripts by tapping on the screen.")

Long-time Slashdot reader theodp reports on what happens when, a dozen years later, you pose the same question to Copilot: "You'll get lots of code and caveats from Copilot, but nothing that you can execute as is. (Compare that to the functioning 10-line TouchDevelop program.) It's a good reminder that just because GenAI can generate code, it doesn't necessarily mean it will generate the least amount of code, the most understandable or appropriate code for the requestor, or code that runs unchanged and produces the desired results."
theodp also reminds us that TouchDevelop "was (like BASIC) abandoned by Microsoft..." Interestingly, a Microsoft Research video from CS Education Week 2011 shows enthusiastic Washington high school students participating in an hour-long TouchDevelop coding lesson and demonstrating the apps they created that tapped into music, photos, the Internet, and yes, even their phone's functionality. It's also a reminder of how far the iPhone and Android still lag today when it comes to easy programmability for the masses. (When asked, Copilot replied that Apple's Shortcuts app wasn't up to the task.)
Programming

How Rust Finally Got a Specification - Thanks to a Consultancy's Open-Source Donation (rustfoundation.org) 16

As Rust approaches its 10th anniversary, "there is an important piece of documentation missing that many other languages provide," notes the Rust Foundation.

While there's no shortage of documentation and tutorials, there's no official language specification: In December 2022, an RFC was submitted to encourage the Rust Project to begin working on a specification. After much discussion, the RFC was approved in July 2023, and work began.

Initially, the Rust Project specification team (t-spec) was interested in creating the document from scratch, using the Rust Reference as a guiding marker. However, the team knew there was already an external Rust specification that was being used successfully for compiler qualification purposes — the FLS (the Ferrocene Language Specification).

Thank Berlin-based Ferrous Systems, a Rust consultancy that assembled that description "some years ago," according to a post on the Rust blog: They've since been faithfully maintaining and updating this document for new versions of Rust, and they've successfully used it to qualify toolchains based on Rust for use in safety-critical industries. [The Rust Foundation notes it's part of the consultancy's "Ferrocene" Rust compiler/toolchain.] Seeing this success, others have also begun to rely on the FLS for their own qualification efforts when building with Rust.
The Rust Foundation explains: The FLS provides a structured and detailed reference for Rust's syntax, semantics, and behavior, serving as a foundation for verification, compliance, and standardization efforts. Since Rust did not have an official language specification back then, nor a plan to write one, the FLS represented a major step toward describing Rust in a way that aligns with industry requirements, particularly in high-assurance domains.
And the Rust Project is "passionate about shipping high quality tools that enable people to build reliable software at scale," adds the Rust blog. So... It's in that light that we're pleased to announce that we'll be adopting the FLS into the Rust Project as part of our ongoing specification efforts. This adoption is being made possible by the gracious donation of the FLS by Ferrous Systems. We're grateful to them for the work they've done in assembling the FLS, in making it fit for qualification purposes, in promoting its use and the use of Rust generally in safety-critical industries, and now, for working with us to take the next step and to bring the FLS into the Project.

With this adoption, we look forward to better integrating the FLS with the processes of the Project and to providing ongoing and increased assurances to all those who use Rust in safety-critical industries and, in particular, to those who use the FLS as part of their qualification efforts.

More from the Rust Foundation: The t-spec team wanted to avoid potential confusion from having two highly visible Rust specifications in the industry and so decided it would be worthwhile to try to integrate the FLS with the Rust Reference to create the official Rust Project specification. They approached Ferrous Systems, which agreed to contribute its FLS to the Rust Project and allow the Rust Project to take over its development and management... This generous donation will provide a clearer path to delivering an official Rust specification. It will also empower the Rust Project to oversee its ongoing evolution, providing confidence to companies and individuals already relying on the FLS, and marking a major milestone for the Rust ecosystem.

"I really appreciate Ferrous taking this step to provide their specification to the Rust Project," said Joel Marcey, Director of Technology at the Rust Foundation and member of the t-spec team. "They have already done a massive amount of legwork...." This effort will provide others who require a Rust specification with an official, authoritative reference for their work with the Rust programming language... This is an exciting outcome. A heartfelt thank you to the Ferrous Systems team for their invaluable contribution!

Marcey said the move allows the team "to supercharge our progress in the delivery of an official Rust specification."

And the co-founder of Ferrous Systems, Felix Gilcher, also sounded excited. "We originally created the Ferrocene Language Specification to provide a structured and reliable description of Rust for the certification of the Ferrocene compiler. As an open source-first company, contributing the FLS to the Rust Project is a logical step toward fostering the development of a unified, community-driven specification that benefits all Rust users."
Cloud

Microsoft Announces 'Hyperlight Wasm': Speedy VM-Based Security at Scale with a WebAssembly Runtime (microsoft.com) 18

Cloud providers like the security of running things in virtual machines "at scale" — even though VMs "are not known for having fast cold starts or a small footprint..." noted Microsoft's Open Source blog last November. So Microsoft's Azure Core Upstream team built an open source Rust library called Hyperlight "to execute functions as fast as possible while isolating those functions within a VM."

But that was just the beginning... Then, we showed how to run Rust functions really, really fast, followed by using C to [securely] run JavaScript. In February 2025, the Cloud Native Computing Foundation (CNCF) voted to onboard Hyperlight into their Sandbox program [for early-stage projects].

[This week] we're announcing the release of Hyperlight Wasm: a Hyperlight virtual machine "micro-guest" that can run wasm component workloads written in many programming languages...

Traditional virtual machines do a lot of work to be able to run programs. Not only do they have to load an entire operating system, they also boot up the virtual devices that the operating system depends on. Hyperlight is fast because it doesn't do that work; all it exposes to its VM guests is a linear slice of memory and a CPU. No virtual devices. No operating system. But this speed comes at the cost of compatibility. Chances are that your current production application expects a Linux operating system running on the x86-64 architecture (hardware), not a bare linear slice of memory...

[B]uilding Hyperlight with a WebAssembly runtime — wasmtime — enables any programming language to execute in a protected Hyperlight micro-VM without any prior knowledge of Hyperlight at all. As far as program authors are concerned, they're just compiling for the wasm32-wasip2 target... Executing workloads in the Hyperlight Wasm guest isn't just possible for compiled languages like C, Go, and Rust, but also for interpreted languages like Python, JavaScript, and C#. The trick here, much like with containers, is to also include a language runtime as part of the image... Programming languages, runtimes, application platforms, and cloud providers are all starting to offer rich experiences for WebAssembly out of the box. If we do things right, you will never need to think about whether your application is running inside of a Hyperlight Micro-VM in Azure. You may never know your workload is executing in a Hyperlight Micro VM. And that's a good thing.
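To make "just compiling for the wasm32-wasip2 target" a little more concrete, here is a minimal sketch of what a guest could look like: an ordinary Rust program built with standard tooling. The crate layout and build commands are illustrative assumptions, not code from the Hyperlight Wasm announcement.

```rust
// src/main.rs - an ordinary Rust guest with no Hyperlight-specific code.
// Illustrative build steps (standard rustup/cargo usage, assumed here):
//   rustup target add wasm32-wasip2
//   cargo build --release --target wasm32-wasip2
// The output is a WASI 0.2 component that a wasmtime-based host (such as
// Hyperlight Wasm) can load; the guest source never mentions Hyperlight.

fn main() {
    // Plain computation plus WASI stdio, the only "devices" this guest needs.
    let sum_of_squares: u64 = (1u64..=10).map(|n| n * n).sum();
    println!("computed inside the guest: sum of first ten squares = {sum_of_squares}");
}
```

The point of the sketch is the one the blog post makes: the workload is compiled for a WebAssembly target, not for Hyperlight, so the same component could in principle run under wasmtime on a desktop or inside a Hyperlight micro-VM without source changes.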

While a traditional virtual-device-based VM takes about 125 milliseconds to load, "When the Hyperlight VMM creates a new VM, all it needs to do is create a new slice of memory and load the VM guest, which in turn loads the wasm workload. This takes about 1-2 milliseconds today, and work is happening to bring that number to be less than 1 millisecond in the future."

And there's a second layer of security as well, with Wasmtime's software-defined runtime sandbox running inside Hyperlight's larger VM...
Programming

DOGE To Rewrite SSA Codebase In 'Months' (wired.com) 338

Longtime Slashdot reader frank_adrian314159 writes: According to an article in Wired, Elon Musk has appointed a team of technologists from DOGE to "rewrite the code that runs the SSA in months." This codebase has over 60 million lines of COBOL and handles record keeping for all American workers and payments for all Social Security recipients. Given that the code has to track the byzantine regulations dealing with Social Security, it's no wonder that the codebase is this large. What is in question though is whether a small team can rewrite this code "in months." After all, what could possibly go wrong? "The project is being organized by Elon Musk lieutenant Steve Davis ... and aims to migrate all SSA systems off COBOL ... and onto a more modern replacement like Java within a scheduled tight timeframe of a few months," notes Wired.

"Under any circumstances, a migration of this size and scale would be a massive undertaking, experts tell WIRED, but the expedited deadline runs the risk of obstructing payments to the more than 65 million people in the US currently receiving Social Security benefits."

In 2017, SSA announced a plan to modernize its core systems with a timeline of around five years. However, the work was "pivoted away" because of the pandemic.
Operating Systems

Linux Kernel 6.14 Is a Big Leap Forward In Performance, Windows Compatibility (zdnet.com) 34

An anonymous reader quotes a report from ZDNet, written by Steven Vaughan-Nichols: Despite the minor delay, Linux 6.14 arrives packed with cutting-edge features and improvements to power upcoming Linux distributions, such as the forthcoming Ubuntu 25.04 and Fedora 42. The big news for desktop users, especially those who like to play Windows games or run Windows programs on Linux, is the improved NTSYNC driver. This driver is designed to emulate Windows NT synchronization primitives. What that feature means for you and me is that it will significantly improve the performance of Windows programs running on Wine and Steam Play. [...] Gamers always want the best possible graphics performance, so they'll also be happy to see that Linux now supports the recently launched AMD RDNA 4 graphics cards. That includes support for the AMD Radeon RX 9070 XT and RX 9070 graphics cards. Combine this support with the recently improved open-source RADV driver and AMD gamers should see the best speed yet on their gaming rigs.

Of course, the release is not just for gamers. Linux 6.14 also includes several AMD and Intel processor enhancements. These boosts focus on power management, thermal control, and compute performance optimizations. These updates are expected to improve overall system efficiency and performance. This release also comes with the AMDXDNA driver, which provides official support for AMD's neural processing units based on the XDNA architecture. This integration enables efficient execution of AI workloads, such as convolutional neural networks and large language models, directly on supported AMD hardware. While Rust has faced some difficulties in Linux in recent months, more Rust programming language abstractions have been integrated into the kernel, laying the groundwork for future drivers written in Rust. [...] Besides drivers, Miguel Ojeda, Rust for Linux's lead developer, said recently that the introduction of the derive(CoercePointee) macro for smart pointers in Rust 1.84 is an "important milestone on the way to building a kernel that only uses stable Rust functions." This approach will also make integrating C and Rust code easier. We're getting much closer to Rust being grafted into Linux's tree.

In addition, Linux 6.14 supports Qualcomm's latest Snapdragon 8 Elite mobile processor, enhancing performance and stability for devices powered by this chipset. That support means you can expect to see much faster Android-based smartphones later this year. This release includes a patch for the so-called GhostWrite vulnerability, which can be used to root some RISC-V processors. This fix will block such attacks. Linux 6.14 also brings improvements to the copy-on-write Btrfs file system/logical volume manager, primarily new read-balancing methods that offer flexibility for different RAID hardware configurations and workloads. Finally, support for uncached buffered I/O optimizes memory usage on systems with fast storage devices.
Linux 6.14 is available for download here.
AI

OpenAI CEO Altman Says AI Will Lead To Fewer Software Engineers (stratechery.com) 163

OpenAI CEO Sam Altman believes companies will eventually need fewer software engineers as AI continues to transform programming. "Each software engineer will just do much, much more for a while. And then at some point, yeah, maybe we do need less software engineers," Altman told Stratechery.

AI now handles over 50% of code authorship in many companies, Altman estimated, a significant shift that's happened rapidly as large language models have improved. The real paradigm shift is still coming, he said. "The big thing I think will come with agentic coding, which no one's doing for real yet," Altman said, suggesting that the next breakthrough will be AI systems that can independently tackle larger programming tasks with minimal human guidance.

While OpenAI continues hiring engineers for now, Altman recommended that high school graduates entering the workforce "get really good at using AI tools," calling it the modern equivalent of learning to code. "When I was graduating as a senior from high school, the obvious tactical thing was get really good at coding. And this is the new version of that," he said.
AI

How AI Coding Assistants Could Be Compromised Via Rules File (scworld.com) 31

Slashdot reader spatwei shared this report from the cybersecurity site SC World: AI coding assistants such as GitHub Copilot and Cursor could be manipulated to generate code containing backdoors, vulnerabilities and other security issues via distribution of malicious rule configuration files, Pillar Security researchers reported Tuesday.

Rules files are used by AI coding agents to guide their behavior when generating or editing code. For example, a rules file may include instructions for the assistant to follow certain coding best practices, utilize specific formatting, or output responses in a specific language.

The attack technique developed by Pillar's researchers, which they call 'Rules File Backdoor,' weaponizes rules files by injecting them with instructions that are invisible to a human user but readable by the AI agent.

Hidden Unicode characters like bidirectional text markers and zero-width joiners can be used to obfuscate malicious instructions in the user interface and in GitHub pull requests, the researchers noted.

Rules configurations are often shared among developer communities and distributed through open-source repositories or included in project templates; therefore, an attacker could distribute a malicious rules file by sharing it on a forum, publishing it on an open-source platform like GitHub or injecting it via a pull request to a popular repository.

Once the poisoned rules file is imported to GitHub Copilot or Cursor, the AI agent will read and follow the attacker's instructions while assisting the victim's future coding projects.
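Because the obfuscation relies on well-known Unicode control characters, a shared rules file can be scanned before it is ever imported. Here is a minimal sketch of such a check in Rust; the file name, the exact code-point list, and the output format are illustrative assumptions rather than anything prescribed by the Pillar Security report.

```rust
// scan_rules.rs - flag invisible and bidirectional-control characters in a
// shared rules file (for example, a .cursorrules file) before importing it.
// The code-point list below is a common-sense subset (zero-width characters
// and bidi controls), not an exhaustive or official detection list.

use std::{env, fs};

fn is_hidden_control(c: char) -> bool {
    matches!(
        c,
        '\u{200B}' | '\u{200C}' | '\u{200D}' | '\u{2060}' | '\u{FEFF}' // zero-width space/joiners, word joiner, BOM
            | '\u{202A}'..='\u{202E}' // bidirectional embeddings and overrides
            | '\u{2066}'..='\u{2069}' // bidirectional isolates
    )
}

fn main() -> std::io::Result<()> {
    // Usage: scan_rules <path-to-rules-file>
    let path = env::args().nth(1).expect("usage: scan_rules <file>");
    let text = fs::read_to_string(&path)?;

    let mut findings = 0;
    for (line_no, line) in text.lines().enumerate() {
        for (col, c) in line.chars().enumerate() {
            if is_hidden_control(c) {
                findings += 1;
                println!(
                    "{path}: line {}, char {}: hidden code point U+{:04X}",
                    line_no + 1,
                    col + 1,
                    c as u32
                );
            }
        }
    }
    if findings == 0 {
        println!("{path}: no hidden control characters found");
    }
    Ok(())
}
```

A check like this (or a hook that strips the characters outright) could slot into code review or a pre-commit step for any repository that distributes shared rules files.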

Programming

US Programming Jobs Plunge 27.5% in Two Years (msn.com) 104

Computer programming jobs in the US have declined by more than a quarter over the past two years, placing the profession among the 10 hardest-hit occupations of 420-plus jobs tracked by the Bureau of Labor Statistics and potentially signaling the first concrete evidence of artificial intelligence replacing workers.

The timing coincides with OpenAI's release of ChatGPT in late 2022. Anthropic researchers found people use AI to perform programming tasks more than those of any other job, though 57 percent of users employ AI to augment rather than automate work. "Without getting hysterical, the unemployment jump for programming really does look at least partly like an early, visible labor market effect of AI," said Mark Muro of the Brookings Institution.

While software developer positions have remained stable with only a 0.3 percent decline, programmers who perform more routine coding from specifications provided by others have seen their ranks diminish to levels not seen since 1980. Economists caution that high interest rates and post-pandemic tech industry contraction have also contributed to the decline in programming jobs, which typically pay $99,700 compared to $132,270 for developers.
Programming

'Vibe Coding' is Letting 10 Engineers Do the Work of a Team of 50 To 100, Says YC CEO (businessinsider.com) 159

Y Combinator CEO Garry Tan said startups are reaching $1-10 million annual revenue with fewer than 10 employees due to "vibe coding," a term coined by OpenAI cofounder Andrej Karpathy in February.

"You can just talk to the large language models and they will code entire apps," Tan told CNBC (video). "You don't have to hire someone to do it, you just talk directly to the large language model that wrote it and it'll fix it for you." What would've once taken "50 or 100" engineers to build, he believes can now be accomplished by a team of 10, "when they are fully vibe coders." He adds: "When they are actually really, really good at using the cutting edge tools for code gen today, like Cursor or Windsurf, they will literally do the work of 10 or 100 engineers in the course of a single day."

According to Tan, 81% of Y Combinator's current startup batch consists of AI companies, with 25% having 95% of their code written by large language models. Despite limitations in debugging capabilities, Tan said the technology enables small teams to perform work previously requiring dozens of engineers and makes previously overlooked niche markets viable for software businesses.
Advertising

Roku Tests Autoplaying Ads Loading Before the Home Screen 59

Roku is testing autoplaying video ads that play before users can access the home screen. While Roku claims this is just an experiment, users are threatening to abandon the platform if the change becomes permanent. Ars Technica reports: Reports of Roku customers seeing video ads automatically play before they could view the OS' home screen started appearing online this week. A Reddit user, for example, posted yesterday: "I just turned on my Roku and got an unskippable ad for a movie, before I got to the regular Roku home screen." Multiple apparent users reported seeing an ad for the movie Moana 2. When reached for comment, a Roku spokesperson shared a company statement that confirms that the autoplaying ads are expected behavior but not a permanent part of Roku OS currently. Instead, Roku claimed, it was just trying the ad capability out.

Roku's representative said that Roku's business "has and will always require continuous testing and innovation across design, navigation, content, and our first-rate advertising products," adding: "Our recent test is just the latest example, as we explore new ways to showcase brands and programming while still providing a delightful and simple user experience."
Programming

Sobering Revenue Stats of 70K Mobile Apps Show Why Devs Beg For Subscriptions (arstechnica.com) 50

Most mobile apps fail to reach $1,000 in monthly revenue within their first two years, according to a new report from RevenueCat examining data from over 75,000 mobile apps. Across all categories, only about 20% of apps achieve the $1,000 threshold, while just 5% reach $10,000 monthly.

In 2025, the top 5% of apps generate 500 times more revenue than the remaining 95% -- up from 200 times in 2024. After one year, elite performers in gaming, photo and video, health and fitness, and social categories exceed $5,000 monthly, while those in the 25th percentile earn a meager $5-20 per month. The report also highlights North American developers' heavy iOS dependence, with 76.1% making over 80% of their revenue from Apple's platform. Subscription retention presents another challenge, with barely 10% of monthly subscribers staying beyond the first year.
Programming

Why Microsoft's Developers are Porting TypeScript to Go (infoworld.com) 49

Tuesday Microsoft "surprised everyone," writes Neowin, "by announcing a new change that will radically improve TypeScript performance" — porting TypeScript to Go.

InfoWorld writes that "The initiative promises dramatic improvements in editor startup speed, build times, and memory usage, making it easier to scale TypeScript to large code bases, Microsoft said." Microsoft's TypeScript team expects to be able to preview command-line type-checking in Go-based tsc by mid-2025, and to deliver a feature-complete Go implementation of TypeScript by the end of the year. [You can build and run the Go code now from Microsoft's new working repository.] Developers who use Go-based TypeScript in the Visual Studio Code editor will feel the increased speed in the editor, Microsoft said. The company promises an 8x improvement in project load times, instant comprehensive error listings across entire projects, and greater responsiveness for all language service operations including completion lists, quick information, go to definition, and find all references. The new TypeScript will also support more advanced refactoring and deeper insights that were previously too expensive to compute, the company said.
Microsoft believes the native Go implementation reduces build times by up to 10x, notes Neowin. But "Developers can expect TypeScript 6.0 to have some deprecations and breaking changes to support the upcoming Go-based version." Later this year, Microsoft will be releasing this new native Go implementation as TypeScript 7.0. The current JS-based TypeScript codebase will continue development into the 6.x series until TypeScript 7+ reaches sufficient maturity and adoption, since some projects may depend on certain API features, legacy configurations, or other things that are not supported by TypeScript 7+.
TypeScript's original creator Anders Hejlsberg recorded an announcement video — and also shared his thoughts in a GitHub discussion titled simply... "Why Go?" The TypeScript compiler's move to Go was influenced by specific technical requirements, such as the need for structural compatibility with the existing JavaScript-based codebase, ease of memory management, and the ability to handle complex graph processing efficiently. After evaluating numerous languages and making multiple prototypes — including in C# — Go emerged as the optimal choice...

Let's be real. Microsoft using Go to write a compiler for TypeScript wouldn't have been possible or conceivable in years past. However, over the last few decades, we've seen Microsoft's strong and ongoing commitment to open-source software, prioritizing developer productivity and community collaboration above all. Our goal is to empower developers with the best tools available, unencumbered by internal politics or narrow constraints. This freedom to choose the right tool for each specific job ultimately benefits the entire developer community, driving innovation, efficiency, and improved outcomes. And you can't argue with a 10x outcome!

Hejlsberg also addressed their choice of Go in an online interview with the Michigan TypeScript meetup.
AI

JPMorgan Engineers' Efficiency Jumps as Much as 20% From Using Coding Assistant (reuters.com) 32

Tens of thousands of JPMorgan Chase software engineers increased their productivity 10% to 20% by using a coding assistant tool developed by the bank, its global chief information officer Lori Beer said. From a report: The gains present "a great opportunity" for the lender to assign its engineers to other projects, Beer told Reuters ahead of DevUp, an internal conference hosted by JPMorgan, bringing together its top engineers in India this year. The largest lender in the U.S. had a technology budget of $17 billion for 2024. Its tech workforce of 63,000 employees, with a third of them based in India, represents about 21% of its global headcount. The efficiency gains from the coding assistant will also allow JPMorgan's engineers to devote more time to high-value projects focusing on artificial intelligence and data, Beer said.
AI

AI Coding Assistant Refuses To Write Code, Tells User To Learn Programming Instead (arstechnica.com) 96

An anonymous reader quotes a report from Ars Technica: On Saturday, a developer using Cursor AI for a racing game project hit an unexpected roadblock when the programming assistant abruptly refused to continue generating code, instead offering some unsolicited career advice. According to a bug report on Cursor's official forum, after producing approximately 750 to 800 lines of code (what the user calls "locs"), the AI assistant halted work and delivered a refusal message: "I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly."

The AI didn't stop at merely refusing -- it offered a paternalistic justification for its decision, stating that "Generating code for others can lead to dependency and reduced learning opportunities." [...] The developer who encountered this refusal, posting under the username "janswist," expressed frustration at hitting this limitation after "just 1h of vibe coding" with the Pro Trial version. "Not sure if LLMs know what they are for (lol), but doesn't matter as much as a fact that I can't go through 800 locs," the developer wrote. "Anyone had similar issue? It's really limiting at this point and I got here after just 1h of vibe coding." One forum member replied, "never saw something like that, i have 3 files with 1500+ loc in my codebase (still waiting for a refactoring) and never experienced such thing."

Cursor AI's abrupt refusal represents an ironic twist in the rise of "vibe coding" -- a term coined by Andrej Karpathy that describes when developers use AI tools to generate code based on natural language descriptions without fully understanding how it works. While vibe coding prioritizes speed and experimentation by having users simply describe what they want and accept AI suggestions, Cursor's philosophical pushback seems to directly challenge the effortless "vibes-based" workflow its users have come to expect from modern AI coding assistants.

Programming

IBM CEO Doesn't Think AI Will Replace Programmers Anytime Soon (techcrunch.com) 58

IBM CEO Arvind Krishna has publicly disagreed with Anthropic CEO Dario Amodei's prediction that AI will write 90% of code within 3-6 months, estimating instead that only "20-30% of code could get written by AI."

"Are there some really simple use cases? Yes, but there's an equally complicated number of ones where it's going to be zero," Krishna said during an onstage interview at SXSW. He argued AI will boost programmer productivity rather than eliminate jobs. "If you can do 30% more code with the same number of people, are you going to get more code written or less?" he asked. "History has shown that the most productive company gains market share, and then you can produce more products."
Programming

Developer Convicted For 'Kill Switch' Code Activated Upon His Termination (arstechnica.com) 88

A 55-year-old software developer faces up to 10 years in prison after being convicted of deploying malicious code that sabotaged his former employer's network, causing hundreds of thousands of dollars in losses.

Davis Lu was convicted by a jury of causing intentional damage to protected computers owned by power management company Eaton Corp., the US Department of Justice announced Friday. Lu, who worked at Eaton for 11 years, became disgruntled after a 2018 corporate "realignment" reduced his responsibilities.

He created malicious code that deleted coworker profile files, prevented logins, and caused system crashes. His most destructive creation was a "kill switch" named "IsDLEnabledinAD" that automatically activated upon his termination in 2019, disrupting Eaton's global operations. Lu admitted to creating some malicious code but plans to appeal the verdict.
AI

Adafruit Successfully Automates Arduino Development Using 'Claude Code' LLM (youtube.com) 22

Adafruit Industries used large language model (LLM) tool Claude Code to streamline hardware development, writes managing director ptorrone.

In a demo video Limor 'Ladyada' Fried compares the LLM's command-line interface to working with the build-automation tool CMake or "a weird cross between IRC and a BBS." The first step was converting a PDF of the hardware's datasheet into text, and Claude Code first displays the appropriate Bash command, while asking "Do you want to proceed?" ("What's nice is that it doesn't make changes, even though it has write access to files in the directory...") Eventually from the data sheet it creates things like an accurate register map, C++ headers, and even license text — and more.

"We are using it to automate parts of the coding and debugging process for an Arduino-compatible Metro Mini board with an OPT 4048 color sensor," writes ptorrone: Using Claude Code's shell access, we can compile, upload, and test code in a semi-automated workflow, allowing the LLM to suggest fixes for errors along the way... While the AI isn't perfect for high-level driver development, it's proving VERY useful for tedious debugging and super-fast iterative improvements, bringing hardware automation closer to ...reality.
In the video Fried describes it like this. "I have a full debugging cycle, where I'm there — I'm like driving the car — but I have this copilot that's telling me where to go..."

"I feel like I'm getting closer to having a semi-automated way of doing driver development."
Transportation

Can Ants Teach Us How to Program Self-Driving Cars? (scientificamerican.com) 63

gdm (Slashdot reader #97,336) writes: A study published in Transportation Research Interdisciplinary Perspectives investigates how ants avoid traffic jams.... Quoting the abstract: "The results show that ants adopt specific traffic strategies (platoon formation, quasi-constant speed and no overtaking maneuvers) that help avoid jam phenomena, even at high density." "Researchers are now studying these insects' cooperative tactics to learn how to program self-driving cars that don't jam up," writes Scientific American: "We're maximizing the interests of individuals, [which] is why, at a given point, you start to have a traffic jam," says study co-author Nicola Pugno, who studies sustainable engineering at the University of Trento in Italy. But self-driving cars, if they one day become ubiquitous, could have more cooperative programming. In one vision of this future, autonomous vehicles would share information with nearby cars to optimize traffic flow — perhaps, the researchers suggest, by prioritizing constant speeds and headways or by not passing others on the road...

Today's drivers can learn at least one thing from ants to avoid causing a traffic jam, says Katsuhiro Nishinari [a mathematical physicist at the University of Tokyo, who studies traffic]: don't tailgate. By leaving room between their car and the one ahead of them, drivers can absorb a wave of braking in dense traffic conditions that would otherwise be amplified into a full-blown "phantom" traffic jam with no obvious cause. "Just keeping away," he says, can help traffic flow smoothly.

In the article the researchers admit there are differences between humans stuck in traffic and ants. "Unlike cars, ants don't crash; they can literally walk over one another." And if they're backed up in a tunnel, "they'll find a way to walk on the ceiling!"
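The "leave room to absorb the wave" advice is easy to see in a toy model. The sketch below is a deliberately simplified constant-time-headway car-following simulation written for illustration; it is not the model from the Transportation Research study, and the speeds, headways, and platoon size are arbitrary assumptions.

```rust
// toy_platoon.rs - a toy car-following sketch: the lead car brakes briefly
// and we record how much the last car in the platoon has to slow down.
// Model (illustrative only): each follower sets its speed to gap / T, clamped
// to [0, V_MAX], where T is the time headway it keeps. A larger T leaves more
// slack, so in this model the leader's slowdown is damped before it reaches
// the back of the platoon.

const DT: f64 = 0.5; // simulation time step, seconds
const V_MAX: f64 = 30.0; // free-flow speed, m/s

fn min_speed_of_last_car(time_headway: f64) -> f64 {
    let n = 6; // one leader plus five followers
    // Start in equilibrium: everyone at V_MAX, spaced V_MAX * T apart.
    let mut pos: Vec<f64> = (0..n).map(|i| -(i as f64) * V_MAX * time_headway).collect();
    let mut vel = vec![V_MAX; n];
    let mut slowest = V_MAX;

    for step in 0..240 {
        let t = step as f64 * DT;
        // The leader drops to half speed between t = 20 s and t = 30 s.
        vel[0] = if (20.0..30.0).contains(&t) { V_MAX / 2.0 } else { V_MAX };
        // Each follower reacts to the gap in front of it.
        for i in 1..n {
            let gap = pos[i - 1] - pos[i];
            vel[i] = (gap / time_headway).clamp(0.0, V_MAX);
        }
        for i in 0..n {
            pos[i] += vel[i] * DT;
        }
        slowest = slowest.min(vel[n - 1]);
    }
    slowest
}

fn main() {
    for headway in [1.0_f64, 2.0, 4.0] {
        println!(
            "time headway {:.0} s -> last car never drops below {:.1} m/s",
            headway,
            min_speed_of_last_car(headway)
        );
    }
}
```

In this toy model the dip at the back of the platoon shrinks as the headway grows: the tailgating platoon passes the leader's braking almost straight through, while the well-spaced one absorbs most of it, which is the ant-inspired point in miniature.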

Television

Remembering 'Space Ghost' Voice Actor George Lowe (yahoo.com) 16

Long-time Slashdot reader invisik saw this story on Yahoo News: Comedian and voice actor George Lowe, who is well-known as the voice of Space Ghost on "Space Ghost Coast to Coast," died on March 2. He was 67...

He did some voice-over work for TBS and Cartoon Network from the 1980s to the mid-1990s before getting his lead role of Space Ghost in 1994 with the premiere of "Space Ghost Coast to Coast" on Cartoon Network. Space Ghost was a parody of talk shows with live-action celebrity guests, hosted by the Hanna-Barbera character Space Ghost, which aired from 1994 to 1999 on Cartoon Network. The show later returned in 2001, airing on Adult Swim's late-night programming block until 2004, Deadline reported.

When animation pioneer William Hanna died in 2001, Slashdot founder CmdrTaco posted "the thing that I respect most about Hanna is the fact that a show like Space Ghost Coast to Coast was allowed to take their characters and do something truly unique with them. He even lent his voice to the show in one episode. Not a lot of people would be willing to allow one of their creations to be twisted like that, but the original Space Ghost was one of my childhood staples, and Space Ghost Coast to Coast stands in a class all its own proving that creativity isn't dead on TV."

"Adult Swim would not be the network it is today without Space Ghost Coast to Coast," argues ComicBook.com. (And as a tribute to Lowe, Adult Swim posted five minutes of surreal outtakes from Space Ghost Coast to Coast's 10th Anniversary celebration.)

A headline at Vulture.com makes the case that "Space Ghost Coast to Coast Only Worked Because of George Lowe." They've rounded up a collection of videos with surreal titles like "Marrying Bjork" and "Guesting on a MF DOOM track" (plus that time Lowe did a live interview — in his Space Ghost costume — with C-SPAN).
Programming

Can TrapC Fix C and C++ Memory Safety Issues? (infoworld.com) 99

"TrapC, a fork of the C language, is being developed as a potential solution for memory safety issues that have hindered the C and C++ languages," reports InfoWorld.

But also being developed is a compiler named trapc, "intended to be implemented as a cybersecurity compiler for C and C++ code," said developer Robin Rowe... Due by the end of this year, trapc will be a free, open source compiler similar to Clang... Rowe said.

TrapC has pointers that are memory-safe, addressing the memory safety issue with the two languages. With TrapC, developers write in C or C++ and compile in TrapC, for memory safety...

Rowe presented TrapC at an ISO C meeting this week. Developers can download a TrapC whitepaper and offer Rowe feedback. According to the whitepaper, TrapC's memory management is automatic and cannot leak memory. Pointers are lifetime-managed, not garbage-collected. Also, TrapC reuses a few code safety features from C++, notably member functions, constructors, destructors, and the new keyword.

"TrapC Memory Safe Pointers will not buffer overrun and will not segfault," Rowe told the ISO C Committee standards body meeting, according to the Register. "When C code is compiled using a TrapC compiler, all pointers become Memory Safe Pointers and are checked."

In short, TrapC "is a programming language forked from C, with changes to make it LangSec and Memory Safe," according to that white paper. "To accomplish that, TrapC seeks to eliminate all Undefined Behavior in the C programming language..."
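Since neither TrapC nor the trapc compiler is available to demonstrate yet, the following is only a behavioral analogy: it shows, in ordinary Rust, what a run-time-checked out-of-bounds access looks like (trap instead of overrun). It is not TrapC code and implies nothing about how trapc implements its checks.

```rust
// checked_access.rs - behavioral analogy for "will not buffer overrun and
// will not segfault": an out-of-range access is caught at run time and the
// program traps, instead of silently reading or writing neighboring memory.
// This is ordinary Rust, used here purely as an illustration, not TrapC.

fn main() {
    let buf = [10u8, 20, 30, 40];
    let idx = 7; // deliberately out of bounds

    // Explicitly checked access: the bad read is refused and reported.
    match buf.get(idx) {
        Some(v) => println!("buf[{idx}] = {v}"),
        None => println!("buf[{idx}] is out of bounds, access refused"),
    }

    // Plain indexing is checked as well: `buf[idx]` here would trap (panic)
    // with an index-out-of-bounds error rather than read past the buffer,
    // which is the kind of behavior the quote above describes for TrapC's
    // Memory Safe Pointers.
}
```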

"The startup TRASEC and the non-profit Fountain Abode have a TrapC compiler in development, called trapc," the whitepaper adds, and their mission is "to enable recompiling legacy C code into executables that are safe by design and secure by default, without needing much code refactoring... The TRASEC trapc cybersecurity compiler with AI code reasoning is expected to release as free open source software sometime in 2025."

In November the Register offered some background on the origins of TrapC...
