AI

ChatGPT-Powered 'Scalene' Offers Efficiency Suggestions for Python Programmers (itbrew.com) 36

The tech site IT Brew looks at an open-source tool that "uses AI to offer efficiency-minded suggestions to Python coders." Known as "Scalene," the profiler — a kind of debugger for performance issues — has been downloaded more than 900,000 times on GitHub. "It's awesome in general, and amazing for an academic project," UMass professor Emery Berger, who worked with PhD students Sam Stern and Juan Altmayer Pizzorno on the open-source tool, told IT Brew...

Scalene measures how much time and memory is spent on each line of code — both on average and at peak, [and] how much time is spent in efficient libraries and how much is spent in Python... By selecting a lightning-bolt icon, a user can "leverage the engine that powers ChatGPT to get an optimization" suggestion, Berger said. In one demo he showed IT Brew, an output recommended a less-memory-intensive move to reduce a very large array created by the code...

"If your Python code already runs fast enough, then you don't need a profiler. But if it's running slow, I think it's a very convenient profiler to reach for," Berger said.
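Scalene's line-level attribution is easiest to see on a script with an obvious hot spot. Below is a minimal sketch (the script and its file name are illustrative, not from Berger's demo); assuming Scalene is installed via `pip install scalene`, it would be profiled with `scalene inefficient.py`:

```python
# inefficient.py -- a toy script with an obvious hot spot.
# Usage (assuming Scalene is installed): scalene inefficient.py

def build_slow(n):
    # Repeated string concatenation re-copies the buffer, roughly O(n^2).
    # Scalene's line-level profile attributes the time to this loop, and
    # an optimization suggestion would typically propose the join() form.
    s = ""
    for i in range(n):
        s += str(i % 10)
    return s

def build_fast(n):
    # Single pass with join(): O(n).
    return "".join(str(i % 10) for i in range(n))

if __name__ == "__main__":
    assert build_slow(20_000) == build_fast(20_000)
```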

Link via Dev News.
Oracle

Oracle's Plans for Java in 2024 (infoworld.com) 75

"Oracle's plans to evolve Java in 2024 involve OpenJDK projects," writes InfoWorld, citing a recent video by Oracle Java developer relations representative Nicolai Parlog. (Though many improvements may not be usable until 2025 or later...) - For Project Babylon, Parlog cited plans for code reflection, expanding the reflection API, and allowing transformation of Java code inside a method. The goal is to allow developers to write Java code that libraries then can interpret as a mathematical function, for example. The Babylon team in coming weeks plans to publish work on use cases such as auto-differentiating, C# LINQ emulation, and GPU programming.

- In Project Leyden, which is aimed at improving startup times, plans for 2024 involve refining the concept of condensers and working toward the production-readiness of prototype condensers.

- In Project Amber, current features in preview include string templates, a simplified main method, and statements before this() and super(). "I expect all three to finalize in 2024," said Parlog. Under exploration are capabilities such as primitive types in patterns and with expressions.

- In Project Valhalla, work will focus on value classes and objects, which provide class instances that have only final instance fields and lack object identity [to] significantly reduce the run time overhead of boxed Integer, Double, and Byte objects...

- In Project Lilliput, aimed at downsizing Java object headers in the HotSpot JVM and reducing Java's memory footprint, work now centers on polishing a fast-locking scheme.

- Project Panama, for interconnecting JVM and native C code, "has three irons in the fire," Parlog said.

Businesses

Netflix Buys Rights To WWE's 'Raw,' Its First Big Live Event (bloomberg.com) 80

Netflix has acquired the exclusive rights to Raw as well as other programming from World Wrestling Entertainment, marking the streaming service's first big move into live events. From a report: Raw will air on Netflix in the US, Canada, Latin America and other international markets beginning in January 2025, after the expiration of the WWE's domestic deal with Comcast. The company will also become the exclusive home outside the US for all WWE shows and specials, including Smackdown and NXT, as well as pay-per-view live events like Wrestlemania, SummerSlam and Royal Rumble. The pay-per-view events will be included at no additional cost for Netflix customers.

After attracting more than 200 million customers by offering films and TV shows on-demand, Netflix has now committed to offering three hours of live wrestling a week starting next year. The company hopes the deal will bring in millions of loyal WWE viewers and provide a boost for its fledgling advertising-supported plan. Netflix has been dabbling in live events for the last year, airing a live comedy special, as well as a golf match, but this is the first long-term rights deal. The WWE is the latest major live event to shift from cable TV to streaming. Ultimate Fighting Championship, which like WWE is owned by TKO Group Holdings, offers many of its matches on ESPN+, while the National Football League sold Amazon the rights to Thursday Night Football. A playoff game on Comcast's Peacock just delivered the largest streaming audience for any professional sports event in the US.
The 10-year deal is valued at more than $5 billion, CNBC reported.
Programming

NPM Users Download 2.1B Deprecated Packages Weekly, Say Security Researchers (scmagazine.com) 28

The cybersecurity site SC Media reports that NPM registry users "download deprecated packages an estimated 2.1 billion times weekly, according to a statistical analysis of the top 50,000 most-downloaded packages in the registry." Deprecated, archived and "orphaned" NPM packages can contain unpatched and/or unreported vulnerabilities that pose a risk to the projects that depend on them, warned the researchers from Aqua Security's Team Nautilus, who published their findings in a blog post on Sunday... In conjunction with their research, Aqua Nautilus has released an open-source tool that can help developers identify deprecated dependencies in their projects.

Open-source software may stop receiving updates for a variety of reasons, and it is up to developers/maintainers to communicate this maintenance status to users. As the researchers pointed out, not all developers are transparent about potential risks to users who download or depend on their outdated NPM packages. Aqua Nautilus researchers kicked off their analysis after finding that one open-source software maintainer responded to a report about a vulnerability Nautilus discovered by archiving the vulnerable repository the same day. By archiving the repository without fixing the security flaw or assigning it a CVE, the owner leaves developers of dependent projects in the dark about the risks, the researchers said...

Taking into consideration both deprecated packages and active packages that have a direct dependency on deprecated projects, the researchers found about 4,100 (8.2%) of the top 50,000 most-downloaded NPM packages fell under the category of "official" deprecation. However, adding archived repositories to the definition of "deprecated" increased the number of packages affected by deprecation and deprecated dependencies to 6,400 (12.8%)... Including packages with linked repositories that are shown as unavailable (404 error) on GitHub increases the deprecation rate to 15% (7,500 packages), according to the Nautilus analysis. Encompassing packages without any linked repository brings the final number of deprecated packages to 10,600, or 21.2% of the top 50,000. Team Nautilus estimated that under this broader understanding of package deprecation, about 2.1 billion downloads of deprecated packages are made on the NPM registry weekly.
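The registry itself exposes the signal behind "official" deprecation: releases deprecated via `npm deprecate` carry a `deprecated` message in their version metadata, served from `https://registry.npmjs.org/<package>`. A minimal sketch of reading that field, using a canned response so it stays self-contained (the package name and message below are made up for illustration):

```python
import json

# The npm registry marks deprecated releases with a "deprecated" message in
# each affected version's metadata. A real check would fetch
# https://registry.npmjs.org/<package>; here we parse a canned response.
SAMPLE = json.loads("""
{
  "name": "left-padding-example",
  "dist-tags": {"latest": "2.0.0"},
  "versions": {
    "1.0.0": {"version": "1.0.0", "deprecated": "no longer maintained"},
    "2.0.0": {"version": "2.0.0", "deprecated": "no longer maintained"}
  }
}
""")

def deprecated_message(doc, version=None):
    """Return the deprecation notice for a version (default: latest), or None."""
    version = version or doc["dist-tags"]["latest"]
    return doc["versions"].get(version, {}).get("deprecated")

print(deprecated_message(SAMPLE))  # prints: no longer maintained
```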

Programming

Rust-Written Linux Scheduler Continues Showing Promising Results For Gaming (phoronix.com) 40

"A Canonical engineer has been experimenting with implementing a Linux scheduler within the Rust programming language..." Phoronix reported Monday, "that works via sched_ext for implementing a scheduler using eBPF that can be loaded during run-time."

The project was started "just for fun" over Christmas, according to a post on X by Canonical-based Linux kernel engineer Andrea Righi, adding "I'm pretty shocked to see that it doesn't just work, but it can even outperform the default Linux scheduler (EEVDF) with certain workloads (i.e., gaming)." Phoronix notes that a YouTube video accompanying the tweet shows "a game with the scx_rustland scheduler outperforming the default Linux kernel scheduler while running a parallel kernel build in the background."

"For sure the build takes longer," Righi acknowledged in a later post. "This scheduler doesn't magically makes everything run faster, it simply prioritizes more the interactive workloads vs CPU-intensive background jobs." Righi followed up by adding "And the whole point of this demo was to prove that, despite the overhead of running a scheduler in user-space, we can still achieve interesting performance, while having the advantages of being in user-space (ease of experimentation/testing, reboot-less updates, etc.)"
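The policy Righi describes can be modeled in a few lines. This is a toy illustration of the idea only, not scx_rustland's actual logic (the real scheduler runs in user space via sched_ext/eBPF and tracks far more state): tasks that sleep often are treated as interactive and sorted ahead of CPU-hungry background jobs.

```python
# Toy model of "prioritize interactive workloads over CPU-intensive
# background jobs". All numbers and field names here are invented.

class Task:
    def __init__(self, name, used_ns, voluntary_sleeps):
        self.name = name
        self.used_ns = used_ns                    # CPU time consumed so far
        self.voluntary_sleeps = voluntary_sleeps  # proxy for interactivity

    def vruntime(self):
        # Charge less virtual runtime to tasks that sleep frequently,
        # so they sort ahead of background jobs in the run queue.
        return self.used_ns / (1 + self.voluntary_sleeps)

def pick_next(run_queue):
    # Dispatch the task with the lowest virtual runtime.
    return min(run_queue, key=Task.vruntime)

game = Task("game", used_ns=5_000_000, voluntary_sleeps=900)
build = Task("kernel-build", used_ns=5_000_000, voluntary_sleeps=2)
assert pick_next([game, build]).name == "game"
```

The background build still runs, but only when the interactive task isn't runnable, which is why the build "takes longer" while the game stays smooth.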

Wednesday Righi added some improvements, posting that "Only 19 lines of code (comments included) for ~2x performance improvement on SMT isn't bad... and I spent my lunch break playing Counter Strike 2 to test this patch..."

And work seems to be continuing, judging by a fresh post from Righi on Thursday. "I fixed virtme-ng to run inside Docker and used it to create a github CI workflow for sched-ext that clones the latest kernel, builds it and runs multiple VMs to test all the scx schedulers. And it does that in only ~20min. I'm pretty happy about virtme-ng now."
AI

Bill Gates Interviews Sam Altman, Who Predicts Fastest Tech Revolution 'By Far' (gatesnotes.com) 106

This week on his podcast, Bill Gates asked Sam Altman how his team was doing after his (temporary) ouster. Altman replied: "a lot of people have remarked on the fact that the team has never felt more productive or more optimistic or better. So, I guess that's like a silver lining of all of this. In some sense, this was like a real moment of growing up for us, we are very motivated to become better, and sort of to become a company ready for the challenges in front of us."

The rest of their conversation was pre-ouster — but gave fascinating glimpses at the possible future of AI — including the prospect of very speedy improvements. Altman suggests it will be easier to understand how a creative work gets "encoded" in an AI than it would be in a human brain. "There has been some very good work on interpretability, and I think there will be more over time... The little bits we do understand have, as you'd expect, been very helpful in improving these things. We're all motivated to really understand them, scientific curiosity aside, but the scale of these is so vast...."

BILL GATES: I'm pretty sure, within the next five years, we'll understand it. In terms of both training efficiency and accuracy, that understanding would let us do far better than we're able to do today.

SAM ALTMAN: A hundred percent. You see this in a lot of the history of technology where someone makes an empirical discovery. They have no idea what's going on, but it clearly works. Then, as the scientific understanding deepens, they can make it so much better.

BILL GATES: Yes, in physics, biology, it's sometimes just messing around, and it's like, whoa — how does this actually come together...? When you look at the next two years, what do you think some of the key milestones will be?

SAM ALTMAN: Multimodality will definitely be important.

BILL GATES: Which means speech in, speech out?

SAM ALTMAN: Speech in, speech out. Images. Eventually video. Clearly, people really want that.... [B]ut maybe the most important areas of progress will be around reasoning ability. Right now, GPT-4 can reason in only extremely limited ways. Also reliability. If you ask GPT-4 most questions 10,000 times, one of those 10,000 is probably pretty good, but it doesn't always know which one, and you'd like to get the best response of 10,000 each time, and so that increase in reliability will be important.

Customizability and personalization will also be very important. People want very different things out of GPT-4: different styles, different sets of assumptions. We'll make all that possible, and then also the ability to have it use your own data. The ability to know about you, your email, your calendar, how you like appointments booked, connected to other outside data sources, all of that. Those will be some of the most important areas of improvement.
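Altman's "best response of 10,000" is essentially best-of-n sampling: draw many candidates and keep the one a scorer ranks highest. A hedged sketch of the pattern, where `generate` and `score` are hypothetical stand-ins (a real system would call a model and a reward model or verifier):

```python
import random

def generate(prompt, seed):
    # Stand-in for a model call returning one candidate answer. We simulate
    # variable-quality outputs with a seeded RNG so runs are deterministic.
    rng = random.Random(seed)
    return {"text": f"candidate-{seed}", "quality": rng.random()}

def score(candidate):
    # Stand-in for a reward model / verifier that ranks candidates.
    return candidate["quality"]

def best_of_n(prompt, n=10):
    # Best-of-n sampling: draw n candidates, keep the highest-scoring one.
    candidates = [generate(prompt, seed) for seed in range(n)]
    return max(candidates, key=score)

best = best_of_n("What is 2+2?", n=10)
```

The reliability gain Altman describes comes from making the selection step automatic, so the user sees something close to the best of the n samples every time.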

Areas where Altman sees potential are healthcare, education, and especially computer programming. "If you make a programmer three times more effective, it's not just that they can do three times more stuff, it's that they can — at that higher level of abstraction, using more of their brainpower — they can now think of totally different things. It's like, going from punch cards to higher level languages didn't just let us program a little faster — it let us do these qualitatively new things. And we're really seeing that...

"I think it's worth always putting it in context of this technology that, at least for the next five or ten years, will be on a very steep improvement curve. These are the stupidest the models will ever be."

He predicts the fastest technology revolution "by far," worrying about "the speed with which society is going to have to adapt, and that the labor market will change." But soon he adds that "We started investing a little bit in robotics companies. On the physical hardware side, there's finally, for the first time that I've ever seen, really exciting new platforms being built there."

And at some point Altman tells Gates he's optimistic that AI could contribute to helping humans get along with each other.
Programming

A 2024 Discussion Whether To Convert The Linux Kernel From C To Modern C++ (phoronix.com) 139

serviscope_minor shares a Phoronix post: A six year old Linux kernel mailing list discussion has been reignited over the prospects of converting the Linux kernel to supporting modern C++ code. The Linux kernel is predominantly made up of C code with various hand-written Assembly plus the growing work around supporting Rust within the Linux kernel. While it's not clear yet if there's sufficient weight to make it a reality, a Linux kernel mailing list discussion has been restarted over potentially seeing the Linux kernel C code converted to C++ in the future.

Back on 1 April 2018, Red Hat engineer David Howells posted a set of 45 patches to begin converting the kernel to C++. This would allow the mainline kernel to make use of inline template functions, inline overloaded functions, class inheritance, and other features not currently supported by the Linux kernel with its C code. Given the April Fools' Day timing, it was a bit hard to have serious discussions that day, and ultimately the patches resided on the Linux kernel mailing list for six years without much discussion.
serviscope_minor adds: It is notable that the current discussion is somewhat different from the infamous discussions in the past.
Programming

Brave Search Can Now Deliver Results For Programming Queries (techcrunch.com) 4

Brave has introduced CodeLLM, an AI-powered tool integrated into its search engine that offers results for programming queries. TechCrunch reports: The new AI-powered CodeLLM provides code snippets with step-by-step explanations and citations. CodeLLM is free and now integrated into Brave Search so users don't have to switch apps to access it. CodeLLM is available to all Brave Search users on desktop and mobile. If Brave Search is your default search engine then all you need to do to access CodeLLM is start a search in your browser's address bar. If Brave Search isn't your default search engine, then you need to head to search.brave.com to conduct your search. "CodeLLM automatically detects programming-related queries, so there's no need to generate a special search," Brave explained in the blog post. "On top of the search results, if an answer is possible there will be a widget to trigger the CodeLLM response. The detection of programming queries happens outside of the LLM, by other search components (similar to the ones able to detect queries about the weather, queries that lend themselves well to be summarized, queries about stock prices, etc)."
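Brave's point that query detection happens "outside of the LLM" can be illustrated with a cheap classifier run before any model call. The heuristic below is purely a toy of my own construction, not Brave's actual detector, which is not public:

```python
import re

# Toy pre-LLM router: flag queries that mention a programming language or
# contain code-like punctuation / error vocabulary. Illustrative only.
LANGS = {"python", "rust", "javascript", "typescript", "java", "c++", "go"}
CODE_HINTS = re.compile(r"(::|=>|\(\)|\[\]|\{\}|segfault|traceback|undefined)")

def looks_like_programming_query(q):
    words = set(q.lower().split())
    return bool(words & LANGS) or bool(CODE_HINTS.search(q.lower()))

assert looks_like_programming_query("python list comprehension syntax")
assert not looks_like_programming_query("weather in boston")
```

Running a classifier like this first means the expensive LLM widget is only offered when the query plausibly concerns code, mirroring how search engines already special-case weather or stock queries.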
Programming

Can AI-Generated Proofs Bring Bug-Free Software One Step Closer? (umass.edu) 61

The University of Massachusetts Amherst has an announcement. A team of computer scientists "recently announced a new method for automatically generating whole proofs that can be used to prevent software bugs and verify that the underlying code is correct." It leverages the AI power of Large Language Models, and the new method, called Baldur, "yields unprecedented efficacy of nearly 66%."

The idea behind the machine-checking technique was "to generate a mathematical proof showing that the code does what it is expected to do," according to the announcement, "and then use a theorem prover to make sure that the proof is also correct." But manually writing these proofs is incredibly time-consuming and requires extensive expertise. "These proofs can be many times longer than the software code itself," says Emily First, the paper's lead author who completed this research as part of her doctoral dissertation at UMass Amherst... First, whose team performed its work at Google, used Minerva, an LLM trained on a large corpus of natural-language text, and then fine-tuned it on 118GB of mathematical scientific papers and webpages containing mathematical expressions. Next, she further fine-tuned the LLM on a language called Isabelle/HOL, in which the mathematical proofs are written. Baldur then generated an entire proof and worked in tandem with the theorem prover to check its work. When the theorem prover caught an error, it fed the proof, as well as information about the error, back into the LLM, so that it could learn from its mistake and generate a new and hopefully error-free proof.

This process yields a remarkable increase in accuracy. The state-of-the-art tool for automatically generating proofs is called Thor, which can generate proofs 57% of the time. When Baldur (Thor's brother, according to Norse mythology) is paired with Thor, the two can generate proofs 65.7% of the time. Though there is still a large degree of error, Baldur is by far the most effective and efficient way yet devised to verify software correctness, and as the capabilities of AI are increasingly extended and refined, so should Baldur's effectiveness grow.
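The generate/check/repair loop described above can be sketched as follows. `ask_llm` and `isabelle_check` are hypothetical stand-ins with canned behavior; Baldur actually drives a fine-tuned Minerva model and the Isabelle/HOL theorem prover:

```python
# Sketch of the Baldur-style loop: propose a proof, verify it with a
# theorem prover, and on failure feed the error back for a repair attempt.
# Both helpers below are fakes that model one failure then one success.

def ask_llm(theorem, error=None):
    # Stand-in for the LLM: first attempt fails; error feedback fixes it.
    if error is None:
        return "by simp"   # initial candidate proof script
    return "by auto"       # repaired attempt informed by the error context

def isabelle_check(theorem, proof):
    # Stand-in for the prover: accepts only the repaired proof.
    if proof == "by auto":
        return True, None
    return False, "Failed to apply simp"

def prove(theorem, max_rounds=3):
    error = None
    for _ in range(max_rounds):
        proof = ask_llm(theorem, error)             # generate (or repair)
        ok, error = isabelle_check(theorem, proof)  # machine-check it
        if ok:
            return proof
    return None  # give up after max_rounds failed attempts

assert prove("rev (rev xs) = xs") == "by auto"
```

The key property is that the prover, not the LLM, is the arbiter of correctness, so a hallucinated proof can never be accepted; the LLM only proposes candidates.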

In addition to First and Brun, the team includes Markus Rabe, who was employed by Google at the time, and Talia Ringer, an assistant professor at the University of Illinois Urbana-Champaign. This work was performed at Google and supported by the Defense Advanced Research Projects Agency and the National Science Foundation.

AI

AI-Assisted Bug Reports Are Seriously Annoying For Developers (theregister.com) 29

Generative AI models like Google Bard and GitHub Copilot are increasingly being used in various industries, but users often overlook their limitations, leading to serious errors and inefficiencies. Daniel Stenberg of curl and libcurl highlights a specific problem of AI-generated security reports: when reports are made to look better and to appear to have a point, it takes a longer time to research and eventually discard it. "Every security report has to have a human spend time to look at it and assess what it means," adds Stenberg. "The better the crap, the longer time and the more energy we have to spend on the report until we close it." The Register reports: The curl project offers a bug bounty to security researchers who find and report legitimate vulnerabilities. According to Stenberg, the program has paid out over $70,000 in rewards to date. Of 415 vulnerability reports received, 64 have been confirmed as security flaws and 77 have been deemed informative -- bugs without obvious security implications. So about 66 percent of the reports have been invalid. The issue for Stenberg is that these reports still need to be investigated and that takes developer time. And while those submitting bug reports have begun using AI tools to accelerate the process of finding supposed bugs and writing up reports, those reviewing bug reports still rely on human review. The result of this asymmetry is more plausible-sounding reports, because chatbot models can produce detailed, readable text without regard to accuracy.

As Stenberg puts it, AI produces better crap. "A crap report does not help the project at all. It instead takes away developer time and energy from something productive. Partly because security work is considered one of the most important areas so it tends to trump almost everything else." As examples, he cites two reports submitted to HackerOne, a vulnerability reporting community. One claimed to describe Curl CVE-2023-38545 prior to actual disclosure. But Stenberg had to post to the forum to make clear that the bug report was bogus. He said that the report, produced with the help of Google Bard, "reeks of typical AI style hallucinations: it mixes and matches facts and details from old security issues, creating and making up something new that has no connection with reality." [...]

Stenberg readily acknowledges that AI assistance can be genuinely helpful. But he argues that having a human in the loop makes the use and outcome of AI tools much better. Even so, he expects the ease and utility of these tools, coupled with the financial incentive of bug bounties, will lead to more shoddy LLM-generated security reports, to the detriment of those on the receiving end.

AMD

AMD Proposes An FPGA Subsystem User-Space Interface For Linux (phoronix.com) 27

Michael Larabel reports via Phoronix: AMD engineers are proposing an FPGA Subsystem User-Space Interface to overcome current limitations of the Linux kernel's FPGA manager subsystem. AMD-Xilinx engineers are proposing a new sysfs interface for the FPGA subsystem that allows for more user-space control over FPGAs. The suggested interface would handle FPGA configuration, driver probe/remove, bridges, Device Tree Overlay file support for re-programming an FPGA while the operating system is running, and other capabilities for user-space not currently presented by the mainline kernel. [...] This proposal from AMD hopes to standardize the FPGA subsystem user-space interface in a manner that is suitable for upstreaming into the mainline Linux kernel.
Education

UCLA Will Transform Dead Westside Mall Into Major Science Innovation Center (latimes.com) 23

An anonymous reader quotes a report from the Los Angeles Times: The former Westside Pavilion, a long shuttered indoor mall, will be transformed into a UCLA biomedical research center aimed at tackling such towering challenges as curing cancer and preventing global pandemics, officials announced Wednesday. The sprawling three-story structure will be known as the UCLA Research Park and will house two multidisciplinary centers focusing on immunology and immunotherapy as well as quantum science and engineering. Establishment of the public-private research center is a coup for Southern California that "will cement California's global, economic, scientific and technical dominance into the 22nd century and beyond," said Gov. Gavin Newsom.

The former owners of the mall, Hudson Pacific Properties Inc. and Macerich, said Wednesday that they sold the property to the Regents of the University of California for $700 million. By purchasing the former shopping center, UCLA saved several years of potential toil to build such a facility on campus. UCLA is the most-applied-to university in the nation, but its Westwood home is among the smallest of the nine UC undergraduate campuses, leaving it limited room for growth. The former mall sits on prime real estate in the heart of the Westside at Pico Boulevard and Overland Avenue, about two miles from the UCLA campus. The mall was owned by commercial developers who spent hundreds of millions of dollars to dramatically remake the old shopping center into an office complex intended to appeal to technology firms, which signed some of the biggest office leases in L.A.'s Silicon Beach before the pandemic.

Google agreed to become the sole tenant and began paying rent last year yet never moved in. The interior is mostly unfinished, but is ready for UCLA to build out to its specifications in a process Newsom said would take about 40 months. The UCLA Research Park "will serve as a state of the art hub of research and innovation that will bring together academics, corporate partners, government agencies and startups to explore new areas of inquiry and achieve breakthroughs that serve the common good," UCLA Chancellor Gene Block said. In addition to flexible work areas, the former mall's 12-screen multiplex movie theater may be converted into lecture halls or performance spaces offering programming across the arts, humanities, sciences and social sciences, the chancellor's office said. One tenant of the research park will be the new California Institute for Immunology and Immunotherapy.

Programming

Niklaus Wirth, Inventor of Pascal, Dies At 89 (twitter.com) 96

New submitter axlash writes: It has been reported on X that Niklaus Wirth, inventor and co-inventor of several languages including Pascal, Euler and Oberon, died on January 1, 2024, at the age of 89. "We lost a titan of programming languages, programming methodology, software engineering and hardware design," writes software engineer Bertrand Meyer in a post on X. "Niklaus Wirth passed away on the first of January. We mourn a pioneer, colleague, mentor and friend."

Niklaus Wirth, born on February 15, 1934, in Switzerland, was a renowned computer scientist known for his significant contributions to the field of computer science and software engineering. He was best known for developing several programming languages, including ALGOL W, Pascal, and Modula-2, which have had a profound impact on the design and development of modern computer software. Wirth's work emphasized simplicity, clarity, and efficiency in programming languages, which greatly influenced subsequent language design and the development of structured programming techniques. His legacy also includes the development of the Oberon programming language and the design of the Oberon operating system. Wirth's dedication to elegant and efficient software design continues to inspire computer scientists and software engineers worldwide, making him a highly respected figure in the history of computing.

You can learn more about Wirth via his A.M. Turing Award profile, Britannica, and the Computer History Museum.
Stats

What Were Slashdot's Top 10 Stories of 2023? 22

Slashdot's 10 most-visited stories of 2023 seemed to touch on all the themes of the year, with a story about AI, two about electric cars, two stories about Linux, and two about the Rust programming language.

And at the top of this list, the #1 story of the year drew over 100,000 views...

Interestingly, a story that ran on New Year's Eve of 2022 attracted so much traffic, it would've been the second-most visited story for all of 2023 — if it had run just a few hours later. That story?

Systemd's Growth Over 2022.

Software

Since the Demise of Atom, 'Pulsar' Offers an Alternative Code Editor (pulsar-edit.dev) 24

On December 15, GitHub declared end-of-life for its "hackable text editor" Atom. But long-time Slashdot reader BrendaEM wants to remind everyone that after the announcement of Atom's sunset, "the community came together to keep Atom alive."

First there was the longstanding fork Atom-Community. But "due to differences in long-term goals for the editor, a new version was born: Pulsar."

From the Pulsar web site: Pulsar [sometimes referred to as Pulsar-Edit] aims to not only reach feature parity with the original Atom, but to bring Pulsar into the 21st century by updating the underlying architecture, and supporting modern features.

With many new features on the roadmap, once Pulsar is stable, it will be a true, Community-Based, Hackable, Text Editor.

"Of course, the user interface is much of the same," writes the blog It's FOSS, and it's cross-platform (supporting Linux, macOS, and Windows).

"The essentials seem to be there with the documentation, packages, and features like the ability to install packages from Git repositories..."
Programming

Code.org Sues WhiteHat Jr. For $3 Million 8

theodp writes: Back in May 2021, tech-backed nonprofit Code.org touted the signing of a licensing agreement with WhiteHat Jr., allowing the edtech company with a controversial past (Whitehat Jr. was bought for $300M in 2020 by Byju's, an edtech firm that received a $50M investment from Mark Zuckerberg's venture firm) to integrate Code.org's free-to-educators-and-organizations content and tools into their online tutoring service. Code.org did not reveal what it was charging Byju's to use its "free curriculum and open source technology" for commercial purposes, but Code.org's 2021 IRS 990 filing reported $1M in royalties from an unspecified source after earlier years reported $0. Coincidentally, Whitehat Jr. is represented by Aaron Kornblum, who once worked at Microsoft for now-President Brad Smith, who left Code.org's Board just before the lawsuit was filed.

Fast forward to 2023 and the bloom is off the rose, as Court records show that Code.org earlier this month sued Whitehat Education Technology, LLC (Exhibits A and B) in what is called "a civil action for breach of contract arising from Whitehat's failure to pay Code.org the agreed-upon charges for its use of Code.org's platform and licensed content and its ongoing, unauthorized use of that platform and content." According to the filing, "Whitehat agreed [in April 2022] to pay to Code.org licensing fees totaling $4,000,000 pursuant to a four-year schedule" and "made its first four scheduled payments, totaling $1,000,000," but "about a year after the Agreement was signed, Whitehat informed Code.org that it would be unable to make the remaining scheduled license payments." While the original agreement was amended to backload Whitehat's license fee payment obligations, "Whitehat has not paid anything at all beyond the $1,000,000 that it paid pursuant to the 2022 invoices before the Agreement was amended" and "has continued to access Code.org's platform and content."

That Byju's Whitehat Jr. stiffed Code.org is hardly shocking. In June 2023, Reuters reported that Byju's auditor Deloitte cut ties with the troubled Indian Edtech startup that was once an investor darling and valued at $22 billion, adding that a Byju's Board member representing the Chan-Zuckerberg Initiative had resigned with two other Board members. The BBC reported in July that Byju's was guilty of overexpanding during the pandemic (not unlike Zuck's Facebook). Ironically, the lawsuit Exhibits include screenshots showing Mark Zuckerberg teaching Code.org lessons. Zuckerberg and Facebook were once among the biggest backers of Code.org, although it's unclear whether that relationship soured after court documents were released that revealed Code.org's co-founders talking smack about Zuck and Facebook's business practices to lawyers for Six4Three, which was suing Facebook.

Code.org's curriculum is also used by the Amazon Future Engineer (AFE) initiative, but it is unclear what royalties -- if any -- Amazon pays to Code.org for the use of Code.org curriculum. While the AFE site boldly says, "we provide free computer science curriculum," the AFE fine print further explains that "our partners at Code.org and ProjectSTEM offer a wide array of introductory and advance curriculum options and teacher training." It's unclear what kind of organization Amazon's AFE ("Computer Science Learning Childhood to Career") exactly is -- an IRS Tax Exempt Organization Search failed to find any hits for "Amazon Future Engineer" -- making it hard to guess whether Code.org might consider AFE's use of Code.org software 'commercial use.' Would providing a California school district with free K-12 CS curriculum that Amazon boasts of cultivating into its "vocal champion" count as "commercial use"? How about providing free K-12 CS curriculum to children who live where Amazon is seeking incentives? Or if Amazon CEO Jeff Bezos testifies Amazon "funds computer science coursework" for schools as he attempts to counter a Congressional antitrust inquiry? These seem to be some of the kinds of distinctions Richard Stallman anticipated more than a decade ago as he argued against a restriction against commercial use of otherwise free software.
Software

The Beauty of Finished Software (josem.co) 174

Programmer and writer Jose Gilgado writes about WordStar 4.0, a popular word processor from the early '80s that continues to work reliably well. Famously, author George R.R. Martin used the application to write "A Song of Ice and Fire." "It does everything I want a word processing program to do and it doesn't do anything else. I don't want any help. I hate some of these modern systems where you type up a lowercase letter and it becomes a capital. I don't want a capital, if I'd wanted a capital, I would have typed the capital," Martin said earlier, as we previously covered.

Gilgado argues that WordStar 4.0 embodies the concept of finished software -- software you can use forever with no unneeded changes. He adds: Sometimes, a software upgrade is a step backward: less usable, less stable, with new bugs. Even if it's genuinely better, there's the learning curve. You were efficient with the old version, but now your most-used button is on the other side of the screen under a hidden menu. In a world where constant change is the norm, finished software provides a breath of fresh air. It's a reminder that reliability, consistency, and user satisfaction can coexist in the realm of software development. So the next time you find yourself yearning for the latest update, remember that sometimes, the best software is the one that doesn't change at all.
Youtube

Your Kid Prefers YouTube To Netflix. That's a Problem for Streamers. (wsj.com) 52

Major streaming services test releasing children's content on YouTube and cut back on fare for kids. From a report: Netflix's share of U.S. streaming viewership by 2- to 11-year-olds fell to 21% in September from 25% two years earlier, according to Nielsen. Meanwhile, YouTube's share jumped to 33% from 29.4% over the same period. That reality is changing major streaming services' approach to children's entertainment, from what shows and movies they make to where they release them. Many are pulling back on investments in children's content, and some streamers have started releasing content for young viewers on platforms such as Google-owned YouTube and Roblox.

[...] Netflix has also slimmed down its slate of animated children's originals, opting instead to rely more on third parties such as Skydance Animation, with which it just signed a multiyear deal to do animated films. Now, Netflix is focusing its youth programming resources on bigger swings, such as the animated film "Leo," starring Adam Sandler, its biggest animated debut ever in terms of views. The eight largest U.S. streamers, including Netflix, Warner's Max and Amazon Prime Video, added 53 originals catering to children and families in the first half of the year, down from 135 for the first half of 2022, according to Ampere. That represents a decrease of 61%, compared with a 31% decrease in overall originals across these streamers for the same period.

Programming

Quantum Computing Gets a 'Hard, Cold Reality Check' (ieee.org) 67

A Canadian cybersecurity firm has warned that as soon as 2025, quantum computers could make current encryption methods useless.

But now Slashdot reader christoban shares a "reality check" — an IEEE Spectrum takedown with the tagline "Hype is everywhere, skeptics say, and practical applications are still far away." The quantum computer revolution may be further off and more limited than many have been led to believe. That's the message coming from a small but vocal set of prominent skeptics in and around the emerging quantum computing industry... [T]here's growing pushback against what many see as unrealistic expectations for the technology. Meta's head of AI research Yann LeCun recently made headlines after pouring cold water on the prospect of quantum computers making a meaningful contribution in the near future.

Speaking at a media event celebrating the 10-year anniversary of Meta's Fundamental AI Research team he said the technology is "a fascinating scientific topic," but that he was less convinced of "the possibility of actually fabricating quantum computers that are actually useful." While LeCun is not an expert in quantum computing, leading figures in the field are also sounding a note of caution. Oskar Painter, head of quantum hardware for Amazon Web Services, says there is a "tremendous amount of hype" in the industry at the minute and "it can be difficult to filter the optimistic from the completely unrealistic."

A fundamental challenge for today's quantum computers is that they are very prone to errors. Some have suggested that these so-called "noisy intermediate-scale quantum" (NISQ) processors could still be put to useful work. But Painter says there's growing recognition that this is unlikely and quantum error-correction schemes will be key to achieving practical quantum computers. The leading proposal involves spreading information over many physical qubits to create "logical qubits" that are more robust, but this could require as many as 1,000 physical qubits for each logical one. Some have suggested that quantum error correction could even be fundamentally impossible, though that is not a mainstream view. Either way, realizing these schemes at the scale and speeds required remains a distant goal, Painter says... "I would estimate at least a decade out," he says.
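As a loose classical analogue of the overhead Painter describes, a simple repetition code shows how spreading one logical bit across many physical bits suppresses errors (real quantum schemes such as the surface code cannot simply copy qubits and instead rely on syndrome measurements, so this is only an illustrative sketch of the redundancy trade-off, not actual quantum error correction; the function names and parameters here are our own):

```python
import random

def encode(bit, n=3):
    """Encode one logical bit as n identical physical bits (repetition code)."""
    return [bit] * n

def apply_noise(bits, p):
    """Flip each physical bit independently with probability p."""
    return [b ^ 1 if random.random() < p else b for b in bits]

def decode(bits):
    """Majority vote recovers the logical bit if fewer than half the bits flipped."""
    return 1 if sum(bits) > len(bits) // 2 else 0

random.seed(0)
p = 0.05           # per-bit physical error rate
n = 9              # physical bits per logical bit
trials = 100_000

logical_errors = sum(
    decode(apply_noise(encode(0, n), p)) != 0 for _ in range(trials)
)
print(f"physical error rate: {p}")
print(f"logical error rate:  {logical_errors / trials:.5f}")
```

With 9 physical bits per logical bit, the logical error rate drops far below the 5% physical rate; the quantum case demands similar (but much larger) redundancy, hence the ~1,000-to-1 overhead figure quoted above.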

A Microsoft technical fellow believes there are fewer applications where quantum computers can really provide a meaningful advantage, since operating a qubit is orders of magnitude slower than simply flipping a transistor, which also makes the data throughput rate thousands or even millions of times slower.

"We found out over the last 10 years that many things that people have proposed don't work," he says. "And then we found some very simple reasons for that."
Christmas Cheer

30 Years of Donald Knuth's 'Christmas Lectures' Are Online - Including 2023's (thenewstack.io) 29

"It's like visiting an old friend for the holidays," according to this article: Approaching his 86th birthday, Donald Knuth — Stanford's beloved computer science guru — honored what's become a long-standing tradition. He gave a December "Christmas lecture" that's also streamed online for all of his fans...

More than 60 years ago, back in 1962, a 24-year-old Donald Knuth first started writing The Art of Computer Programming — a comprehensive analysis of algorithms which, here in 2023, he's still trying to finish. And 30 years ago Knuth also began making rare live appearances each December in front of audiences of Stanford students...

Recently Stanford uploaded several decades of Knuth's past Christmas lectures, along with a series of 22 videos of Knuth from 1985 titled "the 'Aha' Sessions'" (courses in mathematical problem-solving). There are also two different sets of five videos from 1981 showing Knuth introducing his newly-created typesetting system TeX. There are even 12 videos from 1982 of what Knuth calls "an intensive course about the internal details."

And on Dec. 6, wearing his traditional brown holiday sweater, Knuth gave yet another live demonstration of the beautifully clear precision that's made him famous.
