Python

How Mojo Hopes to Revamp Python for an AI World (acm.org) 28

Python "come with downsides," argues a new article in Communications of the ACM. "Its programs tend to run slowly, and because it is inefficient at running processes in parallel, it is not well suited to some of the latest AI programming."

"Hoping to overcome those difficulties, computer scientist Chris Lattner set out to create a new language, Mojo, which offers the ease of use of Python, but the performance of more complex languages such as C++ or Rust." Lattner tells the site "we don't want to break Python, we want to make Python better," while software architect Doug Meil says Mojo is essentially "Python for AI... and it's going to be way faster in scale across multiple hardware platforms." Lattner teamed up with Tim Davis, whom he had met when they both worked for Google, to form Modular in January 2022. The company, where Lattner is chief executive officer and Davis chief product officer, provides support for companies working on AI and is developing Mojo.

A modern AI programming stack generally has Python on top, Lattner says, but because that is an inefficient language, it has C++ underneath to handle the implementation. The C++ then must communicate with performance accelerators or GPUs, so developers add a platform such as Compute Unified Device Architecture (CUDA) to make efficient use of those GPUs. "Mojo came from the need to unify these three different parts of the stack so that we could build a unified solution that can scale up and down," Lattner says. The result is a language with the same syntax as Python, so people used to programming in Python can adopt it with little difficulty, but which, by some measures, can run up to 35,000 times faster. For AI, Mojo is especially fast at performing the matrix multiplications used in many neural networks because it compiles the multiplication code to run directly on the GPU, bypassing CUDA...

"Increasingly, code is not being written by computer programmers. It's being written by doctors and journalists and chemists and gamers," says Jeremy Howard, an honorary professor of computer science at the University of Queensland, Australia, and a co-founder of fast.ai, a. "All data scientists write code, but very few data scientists would consider themselves professional computer programmers." Mojo attempts to fill that need by being a superset of Python. A program written in Python can be copied into Mojo and will immediately run faster, the company says. The speedup comes from a variety of factors. For instance, Mojo, like other modern languages, enables threads, small tasks that can be run simultaneously, rather than in sequence. Instead of using an interpreter to execute code as Python does, Mojo uses a compiler to turn the code into assembly language.

Mojo also gives developers the option of using static typing, which defines data elements and reduces the number of errors... "Static behavior is good because it leads to performance," Lattner says. "Static behavior is also good because it leads to more correctness and safety guarantees."
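Python already has optional type annotations, which give a feel for the static typing the article describes; the difference is that Python treats them as hints for external checkers such as mypy, while Mojo (per the article) can also exploit such declarations for compile-time performance. A minimal Python sketch:

```python
# Optional static typing in Python: the annotations document the contract and
# let a checker like mypy catch type errors before the code runs.
def scale(values: list[float], factor: float) -> list[float]:
    return [v * factor for v in values]

print(scale([1.0, 2.0, 3.0], 2.0))  # [2.0, 4.0, 6.0]
```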

Python creator Guido van Rossum "says he is interested to watch how Mojo develops and whether it can hit the lofty goals Lattner is setting for it," according to the article, "but he emphasizes that the language is in its early stages and, as of July 2023, Mojo had not yet been made available for download."


In June, Lattner did an hour-long interview with the TWIML AI podcast. And in 2017 Chris Lattner answered questions from Slashdot's readers.
The Internet

The World's Oldest Active Torrent Turns 20 Years Old (torrentfreak.com) 33

Twenty years ago, a group of friends shot a Matrix fan film on a limited budget. Sharing their creation with the rest of the world initially appeared to be too expensive, but then they discovered a new technology called BitTorrent. Fast forward two decades and their "Fanimatrix" release is the oldest active torrent that's still widely shared today. Ernesto Van der Sar writes via TorrentFreak: The oldest surviving torrent we have seen is a copy of the Matrix fan film "The Fanimatrix." The torrent was created in September 2003 and will turn 20 years old in a few days. A truly remarkable achievement. The film was shot by a group of New Zealand friends. With a limited budget of just $800, nearly half of which was spent on a leather jacket, they managed to complete the project in nine days. While shooting the film was possible with these financial constraints, finding a distribution channel proved to be a major hurdle. Free video-sharing services didn't exist yet and server bandwidth was still very costly. Technically the team could host their own server, but that would cost thousands of dollars, which wasn't an option. Luckily, however, the group's IT guy, Sebastian Kai Frost, went looking for alternatives.

Frost had a bit part in the film and did some other work as well, but the true breakthrough came when he stumbled upon a new technology called BitTorrent. This appeared to be exactly what they were looking for. "It looked promising because it scaled such that the more popular the file became, the more the bandwidth load was shared. It seemed like the perfect solution," Frost told us earlier. After convincing the crew that BitTorrent was the right choice, Frost created a torrent on September 28, 2003. He also compiled a tracker on his own Linux box and made sure everything was running correctly. Today, more than twenty years have passed and the torrent is still up and running with more than a hundred seeders. As far as we know, it's the oldest active torrent on the Internet, one that deserves to be in the history books.
"I never expected to become the world's oldest torrent but now it's definitely become a thing I'd love to keep carrying on. So I'll be keeping this active as long as I physically can," Frost tells TorrentFreak. "It's really heartening seeing the community pull together around this torrent, despite its usually low transfer count, and work together to keep it alive and kicking. It warms my heart on the daily."

"We're super pumped that it's still going and that people still take an interest in it. Looking forward to the 25th and having something special to share with the world," Frost concludes.
Games

Meet the Guy Preserving the New History of PC Games, One Linux Port At a Time (404media.co) 21

An anonymous reader quotes a report from 404 Media: Historically, video game preservation efforts usually cover two types of games. The most common are very old or "retro" games from the 16-bit era or earlier, which are trapped on cartridges until they're liberated via downloadable ROMs. The other are games that rely on a live service, like Enter the Matrix's now unplugged servers or whatever games you can only get by downloading them via Nintendo's Wii Shop Channel, which shut down in 2019. But time keeps marching on and a more recent era of games now needs to be attended to if we still want those games to be accessible: indies from the late aughts to mid twenty-teens. That's right. Fez, an icon of the era and indie games scene, is now more than a decade old. And while we don't think of this type of work until we need it, Fez, which most PC players booted on Windows 7 when it first came out, is not going to magically run on your Windows 11 machine today without some maintenance.

The person doing that maintenance, as well as making sure that about 70 of the best known indie games from the same era keep running, is Ethan Lee. He's not as well known as Fez's developer Phil Fish, who was also the subject of the documentary Indie Game: The Movie, but this week Lee started publicly marketing the service he's been quietly providing for over 11 years: maintenance of older games. "The way that I've been pitching it is more of like, the boring infrastructure," he said. "Let's make sure the current build works, whereas a lot of times, people feel like the only way to bring a game into a new generation is to do a big remaster. That's cool, but wouldn't it have been cool if Quake II had just continued to work between 1997 and now without all the weird stuff in between? That's sort of why I've been very particular about the word maintenance, because it's a continuous process that starts pretty much from the moment that you ship it."

As he explains in his pitch to game developers: "the PC catalog alone has grown very large within the last 15 years, and even small independent studios now have an extensive back catalog of titles that players can technically still buy and play today! This does come at a cost, however: The longer a studio exists, the larger their catalog grows, and as a result, the maintenance burden also grows." Just a few of the other indie games Lee ported include Super Hexagon, Proteus, Rogue Legacy, Dust: An Elysian Tail, TowerFall Ascension, VVVVVV, Transistor, Wizorb, Mercenary Kings, Hacknet, Shenzhen I/O, and Bastion. [...] With the PC, people assume that once a game is on Windows, it can live on forever with future versions of Windows. "In reality, what makes a PC so weird is that there's this big stack of stuff. You have an x86 processor, the current-ish era of like modern graphics processors, and then you have the operating system running on top of that and its various drivers," Lee said. A change to any one of those layers can make a game run badly, or not at all.

Programming

Does the New 'Mojo' Programming Language Offer a Faster Superset of Python? (infoworld.com) 71

InfoWorld explores how the new Mojo programming language "resembles Python, how it's different, and what it has to offer." The newly unveiled Mojo language is being promoted as the best of multiple worlds: the ease of use and clear syntax of Python, with the speed and memory safety of Rust. Those are bold claims, and since Mojo is still in the very early stages of development, it will be some time before users can see for themselves how the language lives up to them. But Mojo's originator — a company named Modular — has provided early access [through a limited-enrollment preview program] to an online playground: a Jupyter Notebook environment where users can run Mojo code and learn about the language's features and behavior...

Mojo can be described as a "superset" of Python. Programs written in Python are valid Mojo programs, although some Python behaviors haven't yet been implemented... It's also possible to use the actual Python runtime for working with existing Python modules, although there is a performance cost. When Mojo introduces new syntax, it's for system-level programming features, chiefly manual memory handling. In other words, you can write Python code (or something almost exactly like it) for casual use cases, then use Mojo for more advanced, performance-intensive programming scenarios... Mojo's other big difference from Python is that Mojo's not interpreted through a runtime, as Python is. Mojo is compiled ahead-of-time to machine-native code, using the LLVM toolchain. To that end, the best performance comes from using features specific to Mojo. Python features are likely to come at the cost of emulating Python's dynamic behaviors, which are inherently slow — or again, by just using the Python runtime.

Many of Mojo's native language features do one of two things. They're either entirely new features not found in Python at all, or expansions of a Python feature that make it more performant, although with less of Python's dynamism.

For example, Mojo has its own fn keyword which defines a function with explicitly-typed and immutable-by-default arguments, and its own struct keyword which is less like a Python class and more like its C/C++ and Rust counterpart "with fixed layouts determined at compile time but optimized for machine-native speed."
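Mojo code won't run outside Modular's playground, but Python's frozen dataclasses are a rough analogue for the fixed, typed, immutable layout the article attributes to Mojo's struct — with the caveat that Python still resolves everything dynamically at runtime, which is exactly the overhead a compile-time layout removes:

```python
# A frozen dataclass: a fixed set of typed, immutable fields. This is only an
# analogue of Mojo's struct -- Python lays the object out dynamically at
# runtime rather than at compile time.
from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)
class Point:
    x: float
    y: float

p = Point(3.0, 4.0)
print((p.x ** 2 + p.y ** 2) ** 0.5)  # 5.0

try:
    p.x = 0.0  # mutation is rejected, as with immutable-by-default Mojo values
except FrozenInstanceError:
    print("immutable")
```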

But "At a glance, the code closely resembles Python. Even the new Mojo-specific keywords integrate well with existing Python syntax, so you can run your eye down the code and get a general idea of what's happening." And then there's the speed... The notebook demos also give examples of how Mojo code can be accelerated via parallelism, vectorizing, and "tiling" (increasing cache locality for operations). One of the demos, a 128x128 matrix multiplication demo, yielded a claimed 17-times speedup over Python (using the Python runtime in the Mojo playground) by simply running as-is with no special modification. Mojo added 1866x speedup by adding type annotations, 8500x speedup by adding vectorized operations, and 15000x speedup by adding parallelization.
AI

The Problem with the Matrix Theory of AI-Assisted Human Learning (nytimes.com) 28

In an opinion piece for the New York Times, Vox co-founder Ezra Klein worries that early AI systems "will do more to distract and entertain than to focus." (Since they tend to "hallucinate" inaccuracies, and may first be relegated to areas "where reliability isn't a concern" like videogames, song mash-ups, children's shows, and "bespoke" images.)

"The problem is that those are the areas that matter most for economic growth..." One lesson of the digital age is that more is not always better... The magic of a large language model is that it can produce a document of almost any length in almost any style, with a minimum of user effort. Few have thought through the costs that will impose on those who are supposed to respond to all this new text. One of my favorite examples of this comes from The Economist, which imagined NIMBYs — but really, pick your interest group — using GPT-4 to rapidly produce a 1,000-page complaint opposing a new development. Someone, of course, will then have to respond to that complaint. Will that really speed up our ability to build housing?

You might counter that A.I. will solve this problem by quickly summarizing complaints for overwhelmed policymakers, much as the increase in spam is (sometimes, somewhat) countered by more advanced spam filters. Jonathan Frankle, the chief scientist at MosaicML and a computer scientist at Harvard, described this to me as the "boring apocalypse" scenario for A.I., in which "we use ChatGPT to generate long emails and documents, and then the person who received it uses ChatGPT to summarize it back down to a few bullet points, and there is tons of information changing hands, but all of it is just fluff. We're just inflating and compressing content generated by A.I."

But there's another worry: that the increased efficiency "would come at the cost of new ideas and deeper insights." Our societywide obsession with speed and efficiency has given us a flawed model of human cognition that I've come to think of as the Matrix theory of knowledge. Many of us wish we could use the little jack from "The Matrix" to download the knowledge of a book (or, to use the movie's example, a kung fu master) into our heads, and then we'd have it, instantly. But that misses much of what's really happening when we spend nine hours reading a biography. It's the time inside that book spent drawing connections to what we know ... that matters...

The analogy to office work is not perfect — there are many dull tasks worth automating so people can spend their time on more creative pursuits — but the dangers of overautomating cognitive and creative processes are real... To make good on its promise, artificial intelligence needs to deepen human intelligence. And that means human beings need to build A.I., and build the workflows and office environments around it, in ways that don't overwhelm and distract and diminish us.

We failed that test with the internet. Let's not fail it with A.I.

AI

Ask Slashdot: Why Should I Be Afraid of Artificial Intelligence? 275

"I keep reading and hearing about calls for regulations on artificial intelligence," writes long-time Slashdot reader bartoku , "and it pisses me off."

"I want more so called artificial intelligence, not less, and I do not want it to be regulated, filtered, restricted in anyway." I love that Deep Fakes are now available to the masses, and I stopped believing anything is real in 1997 after Hoffman and De Niro scared me in " Wag the Dog".

I love automation and I want more of it; robots please take my job. I want robots to go fight wars for me instead of our sons.

Surveillance is already terrifying, adding "Artificial Intelligence" does not really make it that much more scary; we all need to just starve the system of our personal data anyway. All the other arguments like crashing economic systems and discrimination just seemed to be based on stupid "Artificial Intelligence" hooked up to something it should not be...

Please scare me, or vote on your favorite sci-fi "Artificial Intelligence" scenario. I will be boring and hope we can have a "good" Matrix; one where I am rich and sexy.

The original submission notes that they posed this question to ChatGPT — and to Google — but "I did not get a single compelling answer."

So share your own thoughts in the comments: why should this Slashdot user be afraid of AI?

NOTE: Though they didn't feel it conveyed the right tone, they also submitted their original post to Microsoft's Bing AI, which delivered this rewrite:

What are the real dangers of artificial intelligence? I am not convinced by the common arguments against it, such as regulation, deep fakes, automation, war, surveillance, economic disruption, or discrimination. I think these are either exaggerated or solvable problems. I actually want more artificial intelligence in my life, not less. Can you give me some compelling reasons why I should be afraid of artificial intelligence? Or what are some sci-fi scenarios that you find plausible or interesting? Personally, I would like a Matrix-like simulation where I can live out my fantasies.
Businesses

Apple Is Bigger Than Almost Any Stock Market In the World (cnbc.com) 79

"My friend Ben Carlson pointed out that Apple's current market capitalization of about $2.7 trillion this week exceeds the entire market capitalization of the United Kingdom, the third biggest stock market in the world," writes CNBC's Bob Pisani. From the report: Dimensional's Matrix Book is an annual review of global returns that highlight the power of compound investing. It's a fascinating document: you can look up the compounded growth rate of the S&P 500 for every year going back to 1926. Buried on page 74 is a chapter on "World Equity Market Capitalization," listing the market capitalization of most of the world, country by country. No surprise, the U.S. is the global leader in stock market value. The $40 trillion in stock market wealth in the U.S. is almost 60% of the value of all the equities in the world.

Here's where it gets fun. [...] Not only is Apple bigger than all 595 companies that list in the United Kingdom, it's bigger than all the companies in France (235 companies), and India (1,242 companies). Apple is twice the size of Germany's entire stock market, with 255 companies.

Open Source

Linux Kernel 6.3 Released (zdnet.com) 16

An anonymous reader quotes a report from ZDNet, written by Steven Vaughan-Nichols: The latest Linux kernel is out with a slew of new features -- and, for once, this release has been nice and easy. [...] Speaking of Rust, everyone's favorite memory-safe language, the new kernel comes with user-mode Linux support for Rust code. Miguel Ojeda, the Linux kernel developer who has led the effort to bring Rust to Linux, said the additions mean we're "getting closer to a point where the first Rust modules can be upstreamed."

Other features in the Linux 6.3 kernel include support and enablement for upcoming and yet-to-be-released Intel and AMD CPUs and graphics hardware. While these updates will primarily benefit future hardware, several changes in this release directly impact today's users' day-to-day experience. The kernel now supports AMD's automatic Indirect Branch Restricted Speculation (IBRS) feature for Spectre mitigation, providing a less performance-intensive alternative to the retpoline speculative-execution mitigation.

Linux 6.3 also includes new power management drivers for ARM and RISC-V architectures. RISC-V has gained support for accelerated string functions via the Zbb bit manipulation extension, while ARM received support for Scalable Matrix Extension 2 instructions. For filesystems, Linux 6.3 brings AES-SHA2-based encryption support for NFS, optimizations for EXT4 direct I/O performance, low-latency decompression for EROFS, and a faster Btrfs file-system driver. Bottom line: many file operations will be a bit more secure and faster.

For gamers, the new kernel provides a native Steam Deck controller interface in HID. It also includes compatibility for the Logitech G923 Xbox edition racing wheel and improvements to the 8BitDo Pro 2 wired game controllers. Who says you can't game on Linux? Single-board computers, such as the Banana Pi R3, BPI-M2 Pro, and Orange Pi R1 Plus, also benefit from updated drivers in this release. There's also support for more Wi-Fi adapters and chipsets. These include: Realtek RTL8188EU Wi-Fi adapter support; Qualcomm Wi-Fi 7 wireless chipset support; and Ethernet support for NVIDIA BlueField 3 DPU. For users dealing with complex networks that mix old-school and modern addressing, the new kernel's multi-path TCP support can now handle mixed IPv4 and IPv6 flows.
Linux 6.3 is available from kernel.org. You can learn how to compile the Linux kernel yourself here.
Education

American IQ Scores Have Rapidly Dropped, Proving the 'Reverse Flynn Effect' (popularmechanics.com) 391

An anonymous reader quotes a report from Popular Mechanics: Americans' IQ scores are trending in a downward direction. In fact, they've been falling for over a decade. According to a press release, in studying intelligence testing data from 2006 to 2018, Northwestern University researchers noticed that test scores in three out of four "cognitive domains" were going down. This is the first time we've seen a consistent negative slope for these testing categories, providing tangible evidence of what is known as the "Reverse Flynn Effect."

In a 1984 study, James Flynn noticed that intelligence test scores had steadily increased since the early 1930s. We call that steady rise the Flynn Effect. Considering that overall intelligence seemed to be increasing faster than could be explained by evolution, the reason for the increase became a source of debate, with many attributing the change to various environmental factors. But now, it seems that a Reverse Flynn Effect is, well, in effect.

The study, published in the journal Intelligence, used an online, survey-style personality test called the Synthetic Aperture Personality Assessment Project to analyze nearly 400,000 Americans. The researchers recorded responses from 2006 and 2018, in order to examine if and how cognitive ability scores were changing over time within the country. The data showed drops in logic and vocabulary (known as verbal reasoning), visual problem solving and analogies (known as matrix reasoning), and computational and mathematical abilities (known as letter and number series).
Not every domain is going down though, notes the report. "[S]cores in spatial reasoning (known as 3D rotation) followed the opposite pattern, trending upward over the 12-year period."

"If all the scores were going in the same direction, you could make a nice little narrative about it, but that's not the case," says Elizabeth Dworak, a research assistant professor at Northwestern University and one of the authors on the study. "We need to do more to dig into it." She adds: "It doesn't mean their mental ability is lower or higher; it's just a difference in scores that are favoring older or newer samples. It could just be that they're getting worse at taking tests or specifically worse at taking these kinds of tests."
AI

AI-Generated Viral Videos are Already Here (newyorker.com) 23

AI now "automates creative impulses," writes New Yorker staff writer Kyle Chayka — then wonders where that will lead. Chayka's first example is a Berlin-based photographer using AI tools to create a viral video showing Harry Potter characters as fashion models for the upscale French label Balenciaga: A.I. tools were involved in each step of Alexander Niklass's process, and in each element of the video. He created the basic static images with Midjourney, evoking the Harry Potter actors and outfits through text prompts such as "male model, grotesque, balenciaga commercial." Then he used ElevenLabs — a "voice-cloning" tool — to create models of the actors' voices based on previously recorded audio. Finally, he fed the images into a service called D-ID, which is used to make "avatar videos" — subtly animated portraits, not so far off from those that appear in the newspapers of the Potter world. D-ID added the signature lip synchs and head nods, which Niklass explained were a reference to fashion models tilting their chins for the cameras.

The combination of child-friendly film and adult luxury fashion held no particular symbolism nor expressed an artistic intent. It's "entertainment," Niklass said. Yet the video's most compelling aspect might be its vacuity, a meaningless collision of cultural symbols. The nonsense is the point.

The article also cites a song where the French group AllttA performs with an AI-generated simulation of Jay-Z. Chayka marvels at a world where "The A.I. content has the appearance of realism, without actual reality — reality solely as a style.... it seems that a Rubicon has been crossed: It doesn't matter that these artifacts are generated by A.I.; we can just enjoy them for what they are. It happened faster than I thought possible, but now that A.I.-generated pop culture has entered the mainstream, it seems unlikely that we'll ever get rid of it."

Chayka asked ChatGPT how AI-generated imagery is changing our perceptions, and "It responded that there has been a 'blurring of the lines between real and artificial.'"

The article ultimately ponders the possible implications of "a world in which every style, every idea, and every possible remix is generated as fast and frictionlessly as possible, and the successful ones stick and get attention." But at the same time, Chayka believes the final output's quality still depends on the humans involved (arguing that the Harry Potter fashion video was still more "appealingly odd" than later AI-generated videos copying the idea, like "Matrix by Gucci," "Star Wars by Balenciaga," and "The Office by Balenciaga".) A.I. tools may have been able to replicate actors' faces and generate fashionable outfits, but only Niklass could have come up with the concept, which required keen observation of both high fashion and the wizarding world — and also a very specific, extremely online sense of humor. With tools like Midjourney publicly available to anyone online, "everybody can create something visually appealing now," he said. "But A.I. can't generate taste yet," he continued....

To put it another way, execution may have been democratized by generative A.I., but ideas have not. The human is still the originator, editor, and curator of A.I.'s effects.

United States

US-Backed VCs Are Funding China's Answer To OpenAI (theinformation.com) 40

A boom in artificial intelligence startup funding sparked by OpenAI has spilled over to China, the world's second-biggest venture capital market. Now American institutional investors are indirectly financing a rash of Chinese AI startups aspiring to be China's answer to OpenAI. From a report: The American investors, including U.S. endowments, back key Chinese VC firms such as Sequoia Capital China, Matrix Partners China, Qiming Venture Partners and Hillhouse Capital Management that are striking local AI startup deals, which haven't been previously reported. U.S. government officials have grown increasingly wary of such investments in Chinese AI as well as semiconductors because they could aid a geopolitical rival. For instance, Sequoia China, the Chinese affiliate of the Silicon Valley VC stalwart, recently made a U.S.-dollar investment in a brand-new AI venture created by Yang Zhilin, a young assistant professor at Beijing's prestigious Tsinghua University, which is sometimes described as China's equivalent of the Massachusetts Institute of Technology, according to a person with direct knowledge of the deal. Yang, who got his doctorate from the School of Computer Science, Carnegie Mellon University, in 2019, is considered one of China's top AI researchers. He previously co-founded another startup Sequoia China backed, Recurrent AI, which develops tools for salespeople, according to the company's website. Matrix and Qiming, meanwhile, recently funded another Beijing-based AI startup, Frontis, which has compared its product to ChatGPT. It was founded in 2021 by Zhou Bowen, a Tsinghua professor who once led JD.com's AI research lab, according to the company's website. The deal gave the startup a paper valuation of hundreds of millions of U.S. dollars, the company said.
Twitter

Jack Dorsey Says He Will Give $1 Million Per Year To Signal App 73

Twitter co-founder Jack Dorsey said in a blog post on Tuesday that he will give a grant of $1 million per year to encrypted messaging app Signal, the first in a series of grants he plans to make to support "open internet development." Reuters reports: Social media should not be "owned by a single company or group of companies," and needs to be "resilient to corporate and government influence," Dorsey wrote in a post on Revue, a newsletter service owned by Twitter. [Editor's note: The post has been moved to Pastebin since Revue is shutting down early next year.] TechCrunch adds: Dorsey said that his hope to build a Twitter according to his wishes died in 2020 with the entrance of an unnamed activist investor. "I planned my exit at that moment knowing I was no longer right for the company," he wrote. The principles he had hoped to build on -- resilience to corporate and government control, user-controlled content with no exceptions and algorithmic moderation -- are not present in today's Twitter, nor in the one he led, he admitted. Even so, he wrote that, contrary to the insinuations accompanying the so-called Twitter Files, "there was no ill intent or hidden agendas, and everyone acted according to the best information we had at the time."

As to actual solutions, Dorsey is of course hard at work (or at least present) at Bluesky, but he calls out Mastodon and Matrix as other worthwhile avenues for development: "There will be many more. One will have a chance at becoming a standard like HTTP or SMTP. This isn't about a 'decentralized Twitter.' This is a focused and urgent push for a foundational core technology standard to make social media a native part of the internet."
Programming

How GitHub Copilot Could Steer Microsoft Into a Copyright Storm (theregister.com) 83

An anonymous reader quotes a report from the Register: GitHub Copilot -- a programming auto-suggestion tool trained from public source code on the internet -- has been caught generating what appears to be copyrighted code, prompting an attorney to look into a possible copyright infringement claim. On Monday, Matthew Butterick, a lawyer, designer, and developer, announced he is working with Joseph Saveri Law Firm to investigate the possibility of filing a copyright claim against GitHub. There are two potential lines of attack here: is GitHub improperly training Copilot on open source code, and is the tool improperly emitting other people's copyrighted work -- pulled from the training data -- to suggest code snippets to users?

Butterick has been critical of Copilot since its launch. In June he published a blog post arguing that "any code generated by Copilot may contain lurking license or IP violations," and thus should be avoided. That same month, Denver Gingerich and Bradley Kuhn of the Software Freedom Conservancy (SFC) said their organization would stop using GitHub, largely as a result of Microsoft and GitHub releasing Copilot without addressing concerns about how the machine-learning model dealt with different open source licensing requirements.

Copilot's capacity to copy code verbatim, or nearly so, surfaced last week when Tim Davis, a professor of computer science and engineering at Texas A&M University, found that Copilot, when prompted, would reproduce his copyrighted sparse matrix transposition code. Asked to comment, Davis said he would prefer to wait until he has heard back from GitHub and its parent Microsoft about his concerns. In an email to The Register, Butterick indicated there's been a strong response to news of his investigation. "Clearly, many developers have been worried about what Copilot means for open source," he wrote. "We're hearing lots of stories. Our experience with Copilot has been similar to what others have found -- that it's not difficult to induce Copilot to emit verbatim code from identifiable open source repositories. As we expand our investigation, we expect to see more examples." "But keep in mind that verbatim copying is just one of many issues presented by Copilot. For instance, a software author's copyright in their code can be violated without verbatim copying. Also, most open-source code is covered by a license, which imposes additional legal requirements. Has Copilot met these requirements? We're looking at all these issues."
GitHub's documentation for Copilot warns that the output may contain "undesirable patterns" and puts the onus of intellectual property infringement on the user of Copilot, notes the report.

Bradley Kuhn of the Software Freedom Conservancy is less willing to set aside how Copilot deals with software licenses. "What Microsoft's GitHub has done in this process is absolutely unconscionable," he said. "Without discussion, consent, or engagement with the FOSS community, they have declared that they know better than the courts and our laws about what is or is not permissible under a FOSS license. They have completely ignored the attribution clauses of all FOSS licenses, and, more importantly, the more freedom-protecting requirements of copyleft licenses."

Brett Becker, assistant professor at University College Dublin in Ireland, told The Register in an email, "AI-assisted programming tools are not going to go away and will continue to evolve. Where these tools fit into the current landscape of programming practices, law, and community norms is only just beginning to be explored and will also continue to evolve." He added: "An interesting question is: what will emerge as the main drivers of this evolution? Will these tools fundamentally alter future practices, law, and community norms -- or will our practices, law and community norms prove resilient and drive the evolution of these tools?"
Math

DeepMind Breaks 50-Year Math Record Using AI; New Record Falls a Week Later (arstechnica.com) 30

Last week, DeepMind announced it discovered a more efficient way to perform matrix multiplication, conquering a 50-year-old record. This week, two Austrian researchers at Johannes Kepler University Linz claim they have bested that new record by one step. Ars Technica reports: In 1969, a German mathematician named Volker Strassen discovered the previous-best algorithm for multiplying 4x4 matrices, which reduces the number of steps necessary to perform a matrix calculation. For example, multiplying two 4x4 matrices together using a traditional schoolroom method would take 64 multiplications, while Strassen's algorithm can perform the same feat in 49 multiplications. Using a neural network called AlphaTensor, DeepMind discovered a way to reduce that count to 47 multiplications, and its researchers published a paper about the achievement in Nature last week.
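To make the 49 concrete: Strassen's scheme multiplies 2x2 blocks using seven products instead of eight, and applying it recursively to a 4x4 matrix costs 7 × 7 = 49 scalar multiplications. Below is a minimal Python sketch of the standard textbook formulation (not DeepMind's or Strassen's actual code) that counts the multiplications as it goes:

```python
def strassen(a, b):
    """Strassen multiply for n x n matrices (n a power of 2).
    Returns (product, number of scalar multiplications used)."""
    n = len(a)
    if n == 1:  # base case: one scalar multiplication
        return [[a[0][0] * b[0][0]]], 1
    h = n // 2

    def quad(m):  # split a matrix into four h x h blocks
        return ([row[:h] for row in m[:h]], [row[h:] for row in m[:h]],
                [row[:h] for row in m[h:]], [row[h:] for row in m[h:]])

    def add(x, y):
        return [[xi + yi for xi, yi in zip(rx, ry)] for rx, ry in zip(x, y)]

    def sub(x, y):
        return [[xi - yi for xi, yi in zip(rx, ry)] for rx, ry in zip(x, y)]

    a11, a12, a21, a22 = quad(a)
    b11, b12, b21, b22 = quad(b)
    total = 0

    def mul(x, y):  # recurse and accumulate the multiplication count
        nonlocal total
        p, m = strassen(x, y)
        total += m
        return p

    # Strassen's seven block products (vs. eight for the schoolbook method)
    m1 = mul(add(a11, a22), add(b11, b22))
    m2 = mul(add(a21, a22), b11)
    m3 = mul(a11, sub(b12, b22))
    m4 = mul(a22, sub(b21, b11))
    m5 = mul(add(a11, a12), b22)
    m6 = mul(sub(a21, a11), add(b11, b12))
    m7 = mul(sub(a12, a22), add(b21, b22))

    c11 = add(sub(add(m1, m4), m5), m7)
    c12 = add(m3, m5)
    c21 = add(m2, m4)
    c22 = add(sub(add(m1, m3), m2), m6)
    # Reassemble the four result blocks into one matrix
    c = [c11[i] + c12[i] for i in range(h)] + [c21[i] + c22[i] for i in range(h)]
    return c, total

c, mults = strassen([[1] * 4] * 4, [[2] * 4] * 4)
print(mults)  # 49 scalar multiplications for 4x4
```

Additions (extra matrix entries added or subtracted) are deliberately not counted: in this line of research only the multiplication count matters, because for large block sizes multiplications dominate the cost.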

To discover more efficient matrix math algorithms, DeepMind set up the problem like a single-player game. The company wrote about the process in more detail in a blog post last week. DeepMind then trained AlphaTensor using reinforcement learning to play this fictional math game -- similar to how AlphaGo learned to play Go -- and it gradually improved over time. Eventually, it rediscovered the work of Strassen and other human mathematicians, then surpassed them, according to DeepMind. In a more complicated example, AlphaTensor discovered a new way to perform 5x5 matrix multiplication in 96 steps (versus 98 for the older method).

This week, Manuel Kauers and Jakob Moosbauer of Johannes Kepler University in Linz, Austria, published a paper claiming they have reduced that count by one, down to 95 multiplications. It's no coincidence that this apparently record-breaking new algorithm came so quickly because it built off of DeepMind's work. In their paper, Kauers and Moosbauer write, "This solution was obtained from the scheme of [DeepMind's researchers] by applying a sequence of transformations leading to a scheme from which one multiplication could be eliminated."

AI

DeepMind's Game-Playing AI Has Beaten a 50-Year-Old Record In Computer Science (technologyreview.com) 91

An anonymous reader quotes a report from MIT Technology Review: DeepMind has used its board-game playing AI AlphaZero to discover a faster way to solve a fundamental math problem in computer science, beating a record that has stood for more than 50 years. The problem, matrix multiplication, is a crucial type of calculation at the heart of many different applications, from displaying images on a screen to simulating complex physics. It is also fundamental to machine learning itself. Speeding up this calculation could have a big impact on thousands of everyday computer tasks, cutting costs and saving energy.

Despite the calculation's ubiquity, it is still not well understood. A matrix is simply a grid of numbers, representing anything you want. Multiplying two matrices together typically involves multiplying the rows of one with the columns of the other. The basic technique for solving the problem is taught in high school. But things get complicated when you try to find a faster method. This is because there are more ways to multiply two matrices together than there are atoms in the universe (10 to the power of 33, for some of the cases the researchers looked at).
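As a concrete illustration of the counts involved (not from the article): the schoolbook row-times-column method for two 4x4 matrices performs 4 × 4 × 4 = 64 scalar multiplications, one for each of the 16 output entries times the 4 terms in its dot product. A minimal Python sketch that counts them:

```python
def naive_matmul(a, b):
    """Schoolbook matrix multiplication: rows of a times columns of b.
    Returns (product, number of scalar multiplications used)."""
    n = len(a)
    mults = 0
    c = [[0] * n for _ in range(n)]
    for i in range(n):          # each row of a
        for j in range(n):      # each column of b
            for k in range(n):  # dot product of row i and column j
                c[i][j] += a[i][k] * b[k][j]
                mults += 1
    return c, mults

a = [[1, 2, 3, 4]] * 4
b = [[5, 6, 7, 8]] * 4
c, mults = naive_matmul(a, b)
print(mults)  # 64 scalar multiplications for 4x4
```

Strassen's 1969 method brings that 64 down to 49, and AlphaTensor's newly discovered scheme to 47; the search space of possible reduction schemes is what the article's "more ways than atoms in the universe" figure refers to.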

The trick was to turn the problem into a kind of three-dimensional board game, called TensorGame. The board represents the multiplication problem to be solved, and each move represents the next step in solving that problem. The series of moves made in a game therefore represents an algorithm. The researchers trained a new version of AlphaZero, called AlphaTensor, to play this game. Instead of learning the best series of moves to make in Go or chess, AlphaTensor learned the best series of steps to make when multiplying matrices. It was rewarded for winning the game in as few moves as possible. [...] The researchers describe their work in a paper published in Nature today. The headline result is that AlphaTensor discovered a way to multiply together two four-by-four matrices that is faster than a method devised in 1969 by the German mathematician Volker Strassen, which nobody had been able to improve on since. The basic high school method takes 64 steps; Strassen's takes 49 steps. AlphaTensor found a way to do it in 47 steps.
"Overall, AlphaTensor beat the best existing algorithms for more than 70 different sizes of matrix," concludes the report. "It reduced the number of steps needed to multiply two nine-by-nine matrices from 511 to 498, and the number required for multiplying two 11-by-11 matrices from 919 to 896. In many other cases, AlphaTensor rediscovered the best existing algorithm."
Operating Systems

Can a Fork Save Cutefish OS (or Its Desktop)? (debugpoint.com) 109

In April ZDNet called its beta "the cutest Linux distro you'll ever use," praising the polished "incredible elegance" of Debian-based Cutefish OS, with its uncluttered, MacOS-like "Cutefish DE" desktop.

But now CutefishOS.com times out, with at least one Reddit user complaining "their email is not responding" and seeking contributors for a fork.

But meanwhile, the technology site DebugPoint.com shares another update: It looks like the OpenMandriva project is already continuing with the development of the Cutefish DE (not the OS) for its own OS. For more details, visit the Matrix discussion page.

It's also worth mentioning that Arch Linux already has the Cutefish desktop packages in its community repo. You can even install it as a standalone desktop environment in Arch Linux in a few easy steps. As you can see, it is easier to continue development of the desktop environment on its own, because the packaging structure is already out there.

I have tested and reviewed hundreds of distros over the years, and Cutefish OS is a promising one with its stunning desktop environment. It was written from the ground up with QML and C++ and takes advantage of KWin. It would have been an attractive desktop as a separate component and could have been another great option besides KDE Plasma or GNOME.

Many open-source projects are born and die every year, and it's unfortunate to see the situation of Cutefish OS. I hope an official fork comes up soon, and we all can contribute to it.

Software

Thunderbird 102 Released (thunderbird.net) 35

slack_justyb writes: Thunderbird 102 has been released with some new UI improvements and new features. There has been a change in the icons, the layout of the address book has been upgraded to feature a more modern UI, and a new UI feature known as the spaces toolbar makes it easier to get around Thunderbird. New features include an updated import and export wizard, a UI for editing email header settings, and built-in client support for Matrix, a messaging system that runs over HTTPS and is similar to Discord, if you've used that.

Finally, the Thunderbird Twitter account released the first screenshot of the new UI that is being targeted for the 114 release. For those wondering what the Thunderbird team has done and is doing, you can always head over to the planning section of the developer site. The roadmap covers what they're working on for the current release, and the backlog is what they're working towards.

Displays

Samsung To Close LCD Business (koreatimes.co.kr) 44

An anonymous reader quotes a report from The Korea Times: Samsung Display has decided to close its liquid-crystal display (LCD) business in June, hobbled by a declining global competitive edge due to cheaper products made by its Chinese and Taiwanese counterparts, according to the industry, Sunday. No investment plan details have since been announced. The decision by the display affiliate of Samsung Group came six months sooner than expected, due in large part to rapid losses from falling LCD prices. According to Display Supply Chain Consultants (DSCC), a U.S. market research firm, the average price index of LCD panels, measured against 100 in January 2014, will fall to 36.6 in September of this year. That figure is down further from the record low of 41.5 in April of this year, and is 58 percent below the record high of 87 in June 2021.

Samsung Display will no longer produce LCDs used for large TV screens and focus instead on manufacturing organic light-emitting diode (OLED) and quantum dot (QD) displays. The employees of the LCD businesses are expected to be transferred to the QD businesses. The display affiliate was first formed in 1991, as an LCD business arm under Samsung Electronics. It formally launched in 2012 as Samsung Display and has since merged with three local and Japanese makers of active matrix organic light-emitting diodes (AMOLED), for the production of advanced types of displays.

Movies

As Far as China Is Concerned, Keanu Reeves No Longer Exists (msn.com) 149

"It's no longer possible to watch any content starring Keanu Reeves in China," reports PC Magazine, "and searching for his name returns no results from search engines."

The AV Club explains: Earlier this year, about a month after the release of The Matrix Resurrections, Reeves was announced as a performer at the 35th annual Tibet House Benefit Concert. The concert was organized by Tibet House, a nonprofit founded by supporters of the Dalai Lama that Chinese authorities have labeled "a separatist organization advocating for Tibetan independence," according to The Hollywood Reporter....

Now, after his appearance at the show, it's being reported by the Los Angeles Times that the Matrix movies, Speed, Bill & Ted's Excellent Adventure, The Lake House, and more films from the actor's catalog can no longer be streamed on platforms such as Tencent Video, Youku, and Migu Video.... The one Reeves picture that is still up and available to stream in the country is Toy Story 4 — but that's because the film's credits feature the dubbing cast, not the original cast from the American release.

But it's more than that, notes PC Magazine: As Reuters reports, the Chinese authorities have seemingly wiped the actor's existence from servers across the country.... And with the internet being so restricted and controlled there, it's relatively simple for those in power to digitally disappear someone. So far, Tencent and iQiyi have removed at least 19 of the actor's movies from their streaming platforms, and performing a search for either his English name or its Chinese translation will return zero results from search engines, apparently.
The Los Angeles Times supplies some context: The development emerged just after his latest film "The Matrix: Resurrections" became the first blockbuster to hit Chinese theaters in over two months, ending an unusually prolonged drought of censorship approvals on U.S. titles in a year of rising geopolitical tensions and a further cooling of relations with Hollywood.... "It's a curious case that's worth following. We tend to think of the censorship machine in China as this really coordinated monster, but the fact that we're seeing these conflicting signals [between the online and theatrical markets] suggests that some of these measures come from different places," said Alex Yu, a researcher at China Digital Times, a U.S.-based news organization that translates and archives content censored in China.

It's unclear who ordered the deletions, China's regulatory agencies or platforms acting proactively to remove potentially troublesome content, Yu said.... "Why all of a sudden did they decide to take this measure at this exact moment? It's a question we as outsiders might never be able to answer," Yu said. "The system is so opaque that it's pretty much impossible to pinpoint which agency or person is responsible...."

The ban on Reeves' past works bodes poorly for the China prospects of his upcoming projects. These include animation "DC League of Super-Pets," starring Chinese fan favorite Dwayne Johnson, and the pandemic-delayed sequel "John Wick: Chapter 4," which appears to target mainland viewers with its top billing of Donnie Yen, the Hong Kong action star known for his expressions of loyalty to China's ruling Communist Party....

Despite the original trilogy's popularity, "The Matrix: Resurrections" was a flop in China even before it faced nationalist backlash, grossing only $13.6 million and notching just 5.7 out of 10 on the taste-making ratings platform Douban.

United States

The US Will Finally Allow Adaptive Beam Headlights on New Cars (arstechnica.com) 177

The National Highway Traffic Safety Administration is finally poised to legalize adaptive beam headlights in the US. From a report: The NHTSA announced that it has issued a final rule that will update the Federal Motor Vehicle Safety Standards, which currently only allow for "dumb" high- and low-beam lights. Adaptive beam lights use a matrix of projectors, some of which can be turned off to shape the beam so the lights illuminate the road but don't shine at an oncoming driver. (These are an advancement over the auto-high beam technology that you may have fitted to your current car.) The technology has been around for nearly two decades in Europe and Japan.
