Transportation

17-Year-Old Student Builds 3D-printed Drone In Garage, Interests DoD and MIT (yahoo.com)

"Cooper Taylor is only 17 years old, but he's already trying to revolutionize the drone industry," writes Business Insider: His design makes the drone more efficient, customizable, and less expensive to construct, he says. He's built six prototypes, 3D printing every piece of hardware, programming the software, and even soldering the control circuit board. He says building his drone cost one-fifth of the price of buying a comparable machine, which sells for several thousand dollars. Taylor told Business Insider he hopes that "if you're a first responder or a researcher or an everyday problem solver, you can have access to this type of drone."

His innovation won him an $8,000 scholarship in April at the Junior Science and Humanities Symposium, funded by the Defense Department. Then, on May 16, he received an even bigger scholarship of $15,000 from the US Navy, which he won after presenting his research at the Regeneron International Science and Engineering Fair...

It all started when Taylor's little sister got a drone, and he was disappointed to see that it could fly for only about 30 minutes before running out of power. He did some research and found that a vertical take-off and landing, or VTOL, drone would last longer. This type of drone combines the multi-rotor helicopter style with the fixed wings of an airplane, making it extremely versatile. It lifts off as a helicopter, then transitions into plane mode. That way, it can fly farther than rotors alone could take it, which was the drawback to Taylor's sister's drone. Unlike a plane-style drone, though, it doesn't need a runway, and it can hover with its helicopter rotors.

Taylor designed a motor "that could start out helicopter-style for liftoff, then tilt back to become an airplane-style motor," according to the article.

And now this summer he'll be "working on a different drone project through a program with the Reliable Autonomous Systems Lab at the Massachusetts Institute of Technology."

Thanks to Slashdot reader Agnapot for sharing the news.
Python

Python Creator Guido van Rossum Asks: Is 'Worse is Better' Still True for Programming Languages? (blogspot.com)

In 1989 computer scientist Richard Gabriel argued that more functionality in software actually lowers usability and practicality — leading to the counterintuitive proposition that "worse is better". But is that still true?

Python's original creator Guido van Rossum addressed the question last month in a lightning talk at the annual Python Language Summit 2025. Guido started by recounting earlier periods of Python development from 35 years ago, when he used UNIX "almost exclusively" and thus "Python was greatly influenced by UNIX's 'worse is better' philosophy"... "The fact that [Python] wasn't perfect encouraged many people to start contributing. All of the code was straightforward, there were no thoughts of optimization... These early contributors also now had a stake in the language; [Python] was also their baby"...

Guido contrasted early development to how Python is developed now: "features that take years to produce from teams of software developers paid by big tech companies. The static type system requires an academic-level understanding of esoteric type system features." And this isn't just Python the language, "third-party projects like numpy are maintained by folks who are paid full-time to do so.... Now we have a huge community, but very few people, relatively speaking, are contributing meaningfully."

Guido asked whether the expectation for Python contributors going forward would be that "you had to write a perfect PEP or create a perfect prototype that can be turned into production-ready code?" Guido pined for the "old days" when feature development could skip performance or feature-completeness to get something into the hands of the community to "start kicking the tires". "Do we have to abandon 'worse is better' as a philosophy and try to make everything as perfect as possible?" Guido thought doing so "would be a shame", but that he "wasn't sure how to change it", acknowledging that core developers wouldn't want to create features and then break users with future releases.

Guido referenced David Hewitt's PyO3 talk about Rust and Python, and that development "was using worse is better," where there is a core feature set that works, and plenty of work to be done and open questions. "That sounds a lot more fun than working on core CPython", Guido paused, "...not that I'd ever personally learn Rust. Maybe I should give it a try after," which garnered laughter from core developers.

"Maybe we should do more of that: allowing contributors in the community to have a stake and care".

AI

Salesforce Blocks AI Rivals From Using Slack Data (theinformation.com)

An anonymous reader shares a report: Slack, an instant-messaging service popular with businesses, recently blocked other software firms from searching or storing Slack messages even if their customers permit them to do so, according to a public disclosure from Slack's owner, Salesforce.

The move, which hasn't previously been reported, could hamper fast-growing artificial intelligence startups that have used such access to power their services, such as Glean. Since the Salesforce change, Glean and other applications can no longer index, copy or store the data they access via the Slack application programming interface on a long-term basis, according to the disclosure. Salesforce will continue allowing such firms to temporarily use and store their customers' Slack data, but they must delete the data, the company said.

Python

New Code.org Curriculum Aims To Make Schoolkids Python-Literate and AI-Ready

Longtime Slashdot reader theodp writes: The old Code.org curriculum page for middle and high school students has been changed to include a new Python Lab in the tech-backed nonprofit's K-12 offerings. Elsewhere on the site, a Computer Science and AI Foundations curriculum is described that includes units on 'Foundations of AI Programming [in Python]' and 'Insights from Data and AI [aka Data Science].' A more-detailed AI Foundations Syllabus 25-26 document promises a second semester of material is coming soon: "This semester offers an innovative approach to teaching programming by integrating learning with and about artificial intelligence (AI). Using Python as the primary language, students build foundational programming skills while leveraging AI tools to enhance computational thinking and problem-solving. The curriculum also introduces students to the basics of creating AI-powered programs, exploring machine learning, and applying data science principles."

Newly-posted videos on Code.org's YouTube channel appear to be intended to support the new Python-based CS & AI course. "Python is extremely versatile," explains a Walmart data scientist to open the video for Data Science: Using Python. "So, first of all, Python is one of the very few languages that can handle numbers very, very well." A researcher at the Univ. of Washington's Institute for Health Metrics and Evaluation (IHME) adds, "Python is the gold standard and what people expect data scientists to know [...] Key to us being able to handle really big data sets is our use of Python and cluster computing." Adding to the Python love, an IHME data analyst explains, "Python is a great choice for large databases because there's a lot of support for Python libraries."
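
For a sense of what that curriculum points students toward, here is a toy example of the kind of beginner data-science snippet a Python course might use (the numbers are invented for illustration; real coursework would graduate to libraries like pandas and NumPy, which is what the quoted data scientists are describing):

# A beginner-level taste of Python data science: summary statistics
# over a small dataset. The figures are made up for illustration.
import statistics

daily_visitors = [112, 98, 135, 120, 87, 141, 103]

print("mean:", statistics.mean(daily_visitors))      # average per day
print("median:", statistics.median(daily_visitors))  # middle value
print("stdev:", round(statistics.stdev(daily_visitors), 1))  # spread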

Code.org is currently recruiting teachers to attend its CS and AI Foundations Professional Learning program this summer, which is being taught by Code.org's national network of university and nonprofit regional partners (teachers who sign up have a chance to win $250 in DonorsChoose credits for their classrooms). A flyer for a five-day Michigan Professional Development program to prepare teachers for a pilot of the Code.org CS & AI course touts the new curriculum as "an alternative to the AP [Computer Science] pathway" (teachers are offered scholarships covering registration, lodging, meals, and workshop materials).

Interestingly, Code.org's embrace of Python and Data Science comes as the nonprofit changes its mission to 'make CS and AI a core part of K-12 education' and launches a new national campaign with tech leaders to make CS and AI a graduation requirement. Prior to AI changing the education conversation, Code.org in 2021 boasted that it had lined up a consortium of tech giants, politicians, and educators to push its new $15 million Amazon-bankrolled Java AP CS A curriculum into K-12 classrooms. Just three years later, however, Amazon CEO Andy Jassy was boasting to investors that Amazon had turned to AI to automatically do Java coding that he claimed would have otherwise taken human coders 4,500 developer-years to complete.
AI

Apple Lets Developers Tap Into Its Offline AI Models (techcrunch.com)

An anonymous reader quotes a report from TechCrunch: Apple is launching what it calls the Foundation Models framework, which the company says will let developers tap into its AI models in an offline, on-device fashion. Onstage at WWDC 2025 on Monday, Apple VP of software engineering Craig Federighi said that the Foundation Models framework will let apps use on-device AI models created by Apple to drive experiences. These models ship as a part of Apple Intelligence, Apple's family of models that power a number of iOS features and capabilities.

"For example, if you're getting ready for an exam, an app like Kahoot can create a personalized quiz from your notes to make studying more engaging," Federighi said. "And because it happens using on-device models, this happens without cloud API costs [] We couldn't be more excited about how developers can build on Apple intelligence to bring you new experiences that are smart, available when you're offline, and that protect your privacy."

In a blog post, Apple says that the Foundation Models framework has native support for Swift, Apple's programming language for building apps for its various platforms. The company claims developers can access Apple Intelligence models with as few as three lines of code. Guided generation, tool calling, and more are all built into the Foundation Models framework, according to Apple. Automattic is already using the framework in its Day One journaling app, Apple says, while mapping app AllTrails is tapping the framework to recommend different hiking routes.

The Courts

OpenAI Slams Court Order To Save All ChatGPT Logs, Including Deleted Chats (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: OpenAI is now fighting a court order (PDF) to preserve all ChatGPT user logs -- including deleted chats and sensitive chats logged through its API business offering -- after news organizations suing over copyright claims accused the AI company of destroying evidence. "Before OpenAI had an opportunity to respond to those unfounded accusations, the court ordered OpenAI to 'preserve and segregate all output log data that would otherwise be deleted on a going forward basis until further order of the Court (in essence, the output log data that OpenAI has been destroying),'" OpenAI explained in a court filing (PDF) demanding oral arguments in a bid to block the controversial order.

In the filing, OpenAI alleged that the court rushed the order based only on a hunch raised by The New York Times and other news plaintiffs. And now, without "any just cause," OpenAI argued, the order "continues to prevent OpenAI from respecting its users' privacy decisions." That risk extended to users of ChatGPT Free, Plus, and Pro, as well as users of OpenAI's application programming interface (API), OpenAI said. The court order came after news organizations expressed concern that people using ChatGPT to skirt paywalls "might be more likely to 'delete all [their] searches' to cover their tracks," OpenAI explained. Evidence to support that claim, news plaintiffs argued, was missing from the record because so far, OpenAI had only shared samples of chat logs that users had agreed that the company could retain. Sharing the news plaintiffs' concerns, the judge, Ona Wang, ultimately agreed that OpenAI likely would never stop deleting that alleged evidence absent a court order, granting news plaintiffs' request to preserve all chats.

OpenAI argued the May 13 order was premature and should be vacated until, "at a minimum," news organizations can establish a substantial need for OpenAI to preserve all chat logs. It warned that the privacy of hundreds of millions of ChatGPT users globally is at risk every day that the "sweeping, unprecedented" order continues to be enforced. "As a result, OpenAI is forced to jettison its commitment to allow users to control when and how their ChatGPT conversation data is used, and whether it is retained," OpenAI argued. Meanwhile, there is no evidence beyond speculation yet supporting claims that "OpenAI had intentionally deleted data," OpenAI alleged. And supposedly there is not "a single piece of evidence supporting" claims that copyright-infringing ChatGPT users are more likely to delete their chats. "OpenAI did not 'destroy' any data, and certainly did not delete any data in response to litigation events," OpenAI argued. "The Order appears to have incorrectly assumed the contrary."
One tech worker on LinkedIn suggested the order created "a serious breach of contract for every company that uses OpenAI," while privacy advocates on X warned, "every single AI service 'powered by' OpenAI should be concerned."

Also on LinkedIn, a consultant rushed to warn clients to be "extra careful" sharing sensitive data "with ChatGPT or through OpenAI's API for now," warning, "your outputs could eventually be read by others, even if you opted out of training data sharing or used 'temporary chat'!"
Programming

Morgan Stanley Says Its AI Tool Processed 9 Million Lines of Legacy Code This Year And Saved 280,000 Developer Hours (msn.com)

Morgan Stanley has deployed an in-house AI tool called DevGen.AI that has reviewed nine million lines of legacy code this year, saving the investment bank's developers an estimated 280,000 hours by translating outdated programming languages into plain English specifications that can be rewritten in modern code.

The tool, built on OpenAI's GPT models and launched in January, addresses what Mike Pizzi, the company's global head of technology and operations, calls one of enterprise software's biggest pain points -- modernizing decades-old code that weakens security and slows new technology adoption. While commercial AI coding tools excel at writing new code, they lack expertise in older or company-specific programming languages like Cobol, prompting Morgan Stanley to train its own system on its proprietary codebase.

The tool's primary strength, the bank said, lies in creating English specifications that map what legacy code does, enabling any of the company's 15,000 developers worldwide to rewrite it in modern programming languages rather than relying on a dwindling pool of specialists familiar with antiquated coding systems.
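
DevGen.AI itself is proprietary, but the general pattern the bank describes (prompting a GPT model to turn legacy code into an English specification a developer can reimplement) can be sketched with OpenAI's public Python SDK. The model choice, prompt, and COBOL fragment below are illustrative assumptions, not Morgan Stanley's actual pipeline:

# A minimal sketch of legacy-code-to-English-spec translation using
# OpenAI's Python SDK. This is NOT DevGen.AI; the model name, prompt,
# and COBOL sample are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

legacy_code = """\
       IDENTIFICATION DIVISION.
       PROGRAM-ID. INTCALC.
       PROCEDURE DIVISION.
           COMPUTE WS-INTEREST = WS-PRINCIPAL * WS-RATE / 100.
           DISPLAY WS-INTEREST.
"""

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model choice
    messages=[
        {"role": "system",
         "content": "You write plain-English specifications of legacy "
                    "code so a developer can rewrite it in a modern "
                    "language without seeing the original."},
        {"role": "user",
         "content": "Specify what this COBOL program does:\n" + legacy_code},
    ],
)
print(response.choices[0].message.content)

The bank's system presumably adds training on its proprietary codebase and human review of the resulting specs, neither of which is shown in this sketch.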
Programming

AI Startups Revolutionize Coding Industry, Leading To Sky-High Valuations

Code generation startups are attracting extraordinary investor interest two years after ChatGPT's launch, with companies like Cursor raising $900 million at a $10 billion valuation despite operating with negative gross margins. OpenAI is reportedly in talks to acquire Windsurf, maker of the Codeium coding tool, for $3 billion, while the startup generates $50 million in annualized revenue from a product launched just seven months ago.

These "vibe coding" platforms allow users to write software using plain English commands, attempting to fundamentally change how code gets written. Cursor went from zero to $100 million in recurring revenue in under two years with just 60 employees, though both major startups spend more money than they generate, Reuters reports, citing investor sources familiar with their operations.

The surge comes as major technology giants report significant portions of their code now being AI-generated -- Google claims over 30% while Microsoft reports 20-30%. Meanwhile, entry-level programming positions have declined 24% as companies increasingly rely on AI tools to handle basic coding tasks previously assigned to junior developers.
Programming

How Stack Overflow's Reputation System Led To Its Own Downfall (infoworld.com)

A new analysis argues that Stack Overflow's decline began years before AI tools delivered the "final blow" to the once-dominant programming forum. Monthly questions on the site, which peaked at around 200,000, collapsed in earnest after ChatGPT's launch in late 2022, but usage had been declining since 2014, according to data cited in the InfoWorld analysis.

The platform's remarkable reputation system initially elevated it above competitors by allowing users to earn points and badges for helpful contributions, but that same system eventually became its downfall, the piece argues. As Stack Overflow evolved into a self-governing platform where high-reputation users gained moderation powers, the community transformed from a welcoming space for developer interaction into what the author compares to a "Stanford Prison Experiment" where moderators systematically culled interactions they deemed irrelevant.
Programming

Amid Turmoil, Stack Overflow Asks About AI, Salary, Remote Work in 15th Annual Developer Survey (stackoverflow.blog)

Stack Overflow remains in the midst of big changes to counter an AI-fueled drop in engagement. So "We're wondering what kind of online communities Stack Overflow users continue to support in the age of AI," writes their senior analyst, "and whether AI is becoming a closer companion than ever before."

For the 15th year of their annual reader survey, this means "we're not just collecting data; we're reflecting on the last year of questions, answers, hallucinations, job changes, tech stacks, memory allocations, models, systems and agents — together..." Is it an AI agent revolution yet? Are you building or utilizing AI agents? We want to know how these intelligent assistants are changing your daily workflow and if developers are really using them as much as these keynote speeches assume. We're asking if you are using these tools and where humans are still needed for common developer tasks.

Career shifts: We're keen to understand if you've considered a career change or transitioned roles and if AI is impacting your approach to learning or using existing tools. Did we make up the difference in salaries globally for tech workers...?

They're also revisiting a key finding from recent surveys: "80% of developers reported being unhappy or complacent in their jobs." This raised questions about changing office (and return-to-office) culture and the pressures of the industry, along with whether there were any insights into what could help developers feel more satisfied at work. Prior research confirmed that flexibility at work used to contribute more than salary to job satisfaction, but 2024's results show us that remote work is not more impactful than salary when it comes to overall satisfaction... [For some positions job satisfaction stayed consistent regardless of salary, though it increased with salary for other positions. And embedded developers said their happiness increased when they worked with top-quality hardware, while desktop developers cited "contributing to open source" and engineering managers were happier when "driving strategy".]

In 2024, our data showed that many developers experienced a pay cut in various roles and programming specialties. In an industry often seen as highly lucrative, this was a notable shift of around 7% lower salaries across the top ten reporting countries for the same roles. This year, we're interested in whether this trend has continued, reversed, or stabilized. Salary dynamics is an indicator for job satisfaction in recent surveys of Stack Overflow users and understanding trends for these roles can perhaps improve the process for finding the most useful factors contributing to role satisfaction outside of salary.

And of course they're asking about AI — while noting last year's survey uncovered this paradox. "While AI usage is growing (70% in 2023 vs. 76% in 2024 planning to or currently using AI tools), developer sentiment isn't necessarily following suit, as 77% of all respondents in 2023 are favorable or very favorable of AI tools for development compared to 72% of all respondents in 2024." Concerns about accuracy and misinformation were prevalent among some key groups. More developers learning to code are using or are interested in using AI tools than professional developers (84% vs. 77%)... Developers with 10-19 years of experience were most likely (84%) to name "increase in productivity" as a benefit of AI tools, higher than developers with less experience (<80%)...

AI

Does Anthropic's Success Prove Businesses are Ready to Adopt AI? (reuters.com)

AI company Anthropic (founded in 2021 by a team that left OpenAI) is now making about $3 billion a year in revenue, reports Reuters (citing "two sources familiar with the matter"). The sources said December's projections had been for just $1 billion a year, but annualized revenue climbed to $2 billion by the end of March (and now to $3 billion) — a spectacular growth rate that one VC says "has never happened." A key driver is code generation. The San Francisco-based startup, backed by Google parent Alphabet and Amazon, is famous for AI that excels at computer programming. Products in the so-called codegen space have experienced major growth and adoption in recent months, often drawing on Anthropic's models.
Anthropic sells AI models as a service to other companies, according to the article, and Reuters calls Anthropic's success "an early validation of generative AI use in the business world" — and a long-awaited indicator that it's growing. (Their rival OpenAI earns more than half its revenue from ChatGPT subscriptions and "is shaping up to be a consumer-oriented company," according to their article, with "a number of enterprises" limiting their rollout of ChatGPT to "experimentation.")

Then again, in February OpenAI's chief operating officer said they had 2 million paying enterprise users, roughly doubling from September, according to CNBC. The latest figures from Reuters...
  • Anthropic's valuation: $61.4 billion.
  • OpenAI's valuation: $300 billion.

AI

Will 'Vibe Coding' Transform Programming? (npr.org)

A 21-year-old's startup got a $500,000 investment from Y Combinator — after building their web site and prototype mostly with "vibe coding".

NPR explores vibe coding with Tom Blomfield, a Y Combinator group partner: "It really caught on, this idea that people are no longer checking line by line the code that AI is producing, but just kind of telling it what to do and accepting the responses in a very trusting way," Blomfield said. And so Blomfield, who knows how to code, also tried his hand at vibe coding — both to rejig his blog and to create from scratch a website called Recipe Ninja. It has a library of recipes, and cooks can talk to it, asking the AI-driven site to concoct new recipes for them. "It's probably like 30,000 lines of code. That would have taken me, I don't know, maybe a year to build," he said. "It wasn't overnight, but I probably spent 100 hours on that."

Blomfield said he expects AI coding to radically change the software industry. "Instead of having coding assistance, we're going to have actual AI coders and then an AI project manager, an AI designer and, over time, an AI manager of all of this. And we're going to have swarms of these things," he said. Where people fit into this, he said, "is the question we're all grappling with." In 2021, Blomfield said in a podcast that would-be start-up founders should, first and foremost, learn to code. Today, he's not sure he'd give that advice because he thinks coders and software engineers could eventually be out of a job. "Coders feel like they are tending, kind of, organic gardens by hand," he said. "But we are producing these superhuman agents that are going to be as good as the best coders in the world, like very, very soon."

The article includes an alternate opinion from Adam Resnick, a research manager at tech consultancy IDC. "The vast majority of developers are using AI tools in some way. And what we also see is that a reasonably high percentage of the code output from those tools needs further curation by people, by experienced people."

NPR ends their article by noting that this further curation is "a job that AI can't do, he said. At least not yet."
AI

Stack Overflow's Radical New Plan To Fight AI-Induced Death Spiral (thenewstack.io)

DevNull127 writes: Stack Overflow will test paying experts to answer questions. That's one of many radical experiments they're now trying to stave off an AI-induced death spiral. Questions and answers to the site have plummeted more than 90% since April of 2020. So here's what Stack Overflow will try next.

1. They're bringing back Chat, according to their CEO (to foster "even more connections between our community members" in "an increasingly AI-driven world").

2. They're building a "new Stack Overflow" meant to feel like a personalized portal. "It might collect videos, blogs, Q&A, war stories, jokes, educational materials, jobs... and fold them together into one personalized destination."

3. They're proposing areas more open to discussion, described as "more flexible Stack Exchanges... where users can explore ideas or share opinions."

4. They're also licensing Stack Overflow content to AI companies for training their models.

5. Again, they will test paying experts to answer questions.

AI

At Amazon, Some Coders Say Their Jobs Have Begun To Resemble Warehouse Work (nytimes.com)

Amazon software engineers are reporting that AI tools are transforming their jobs into something resembling the company's warehouse work, with managers pushing faster output and tighter deadlines while teams shrink in size, according to the New York Times.

Three Amazon engineers told the New York Times that the company has raised productivity goals over the past year and expects developers to use AI assistants that suggest code snippets or generate entire program sections. One engineer said his team was cut roughly in half but still expected to produce the same amount of code by relying on AI tools.

The shift mirrors historical workplace changes during industrialization, the Times argues, where technology didn't eliminate jobs but made them more routine and fast-paced. Engineers describe feeling like "bystanders in their own jobs" as they spend more time reviewing AI-generated code rather than writing it themselves. Tasks that once took weeks now must be completed in days, with less time for meetings and collaborative problem-solving, according to the engineers.
Programming

Is AI Turning Coders Into Bystanders in Their Own Jobs? (msn.com)

"AI's downside for software engineers for now seems to be a change in the quality of their work," reports the New York Times. "Some say it is becoming more routine, less thoughtful and, crucially, much faster paced... The new approach to coding at many companies has, in effect, eliminated much of the time the developer spends reflecting on his or her work."

And Amazon CEO Andy Jassy even recently told shareholders Amazon would "change the norms" for programming by how they used AI. Those changing norms have not always been eagerly embraced. Three Amazon engineers said managers had increasingly pushed them to use AI in their work over the past year. The engineers said the company had raised output goals [which affect performance reviews] and had become less forgiving about deadlines. It has even encouraged coders to gin up new AI productivity tools at an upcoming hackathon, an internal coding competition. One Amazon engineer said his team was roughly half the size it was last year, but it was expected to produce roughly the same amount of code by using AI.

Other tech companies are moving in the same direction. In a memo to employees in April, the CEO of Shopify, a company that helps entrepreneurs build and manage e-commerce websites, announced that "AI usage is now a baseline expectation" and that the company would "add AI usage questions" to performance reviews. Google recently told employees that it would soon hold a companywide hackathon in which one category would be creating AI tools that could "enhance their overall daily productivity," according to an internal announcement. Winning teams will receive $10,000.

The shift has not been all negative for workers. At Amazon and other companies, managers argue that AI can relieve employees of tedious tasks and enable them to perform more interesting work. Jassy wrote last year that the company had saved "the equivalent of 4,500 developer-years" by using AI to do the thankless work of upgrading old software... As at Microsoft, many Amazon engineers use an AI assistant that suggests lines of code. But the company has more recently rolled out AI tools that can generate large portions of a program on their own. One engineer called the tools "scarily good." The engineers said that many colleagues have been reluctant to use these new tools because they require a lot of double-checking and because the engineers want more control.

"It's more fun to write code than to read code," said Simon Willison, an AI fan who is a longtime programmer and blogger, channelling the objections of other programmers. "If you're told you have to do a code review, it's never a fun part of the job. When you're working with these tools, it's most of the job."

"This shift from writing to reading code can make engineers feel like bystanders in their own jobs," the article points out (adding "The automation of coding has special resonance for Amazon engineers, who have watched their blue-collar counterparts undergo a similar transition..."

"While there is no rush to form a union for coders at Amazon, such a move would not be unheard of. When General Motors workers went on strike in 1936 to demand recognition of their union, the United Auto Workers, it was the dreaded speedup that spurred them on."
Programming

Python Can Now Call Code Written in Chris Lattner's Mojo (modular.com)

Mojo (the programming language) reached a milestone today.

The story so far... Chris Lattner created the Swift programming language (and answered questions from Slashdot readers in 2017 on his way to new jobs at Tesla, Google, and SiFive). But in 2023 he created a new programming language called Mojo — a superset of Python with added functionality for high-performance code that takes advantage of modern accelerators — as part of his work at AI infrastructure company Modular.

And today Modular's product manager Brad Larson announced Python users can now call Mojo code from Python. (Watch for it in Mojo's latest nightly builds...) The Python interoperability section of the Mojo manual has been expanded and now includes a dedicated document on calling Mojo from Python. We've also added a couple of new examples to the modular GitHub repository: a "hello world" that shows how to round-trip from Python to Mojo and back, and one that shows how even Mojo code that uses the GPU can be called from Python. This is usable through any of the ways of installing MAX [their Modular Accelerated Xecution platform, an integrated suite of AI compute tools] and the Mojo compiler: via pip install modular / pip install max, or with Conda via Magic / Pixi.

One of our goals has been the progressive introduction of MAX and Mojo into the massive Python codebases out in the world today. We feel that enabling selective migration of performance bottlenecks in Python code to fast Mojo (especially Mojo running on accelerators) will unlock entirely new applications. I'm really excited for how this will expand the reach of the Mojo code many of you have been writing...

It has taken months of deep technical work to get to this point, and this is just the first step in the roll-out of this new language feature. I strongly recommend reading the list of current known limitations to understand what may not work just yet, both to avoid potential frustration and to prevent the filing of duplicate issues for known areas that we're working on.

"We are really interested in what you'll build with this new functionality, as well as hearing your feedback about how this could be made even better," the post concludes.

Mojo's licensing makes it free on any device for any research, hobby, or learning project, as well as on x86 or ARM CPUs or NVIDIA GPUs.
Java

Java Turns 30 (theregister.com)

Richard Speed writes via The Register: It was 30 years ago when the first public release of the Java programming language introduced the world to Write Once, Run Anywhere -- and showed devs something cuddlier than C and C++. Originally called "Oak," Java was designed in the early 1990s by James Gosling at Sun Microsystems. Initially aimed at digital devices, its focus soon shifted to another platform that was pretty new at the time -- the World Wide Web.

The language, which has some similarities to C and C++, usually compiles to a bytecode that can, in theory, run on any Java Virtual Machine (JVM). The intention was to allow programmers to Write Once Run Anywhere (WORA) although subtle differences in JVM implementations meant that dream didn't always play out in reality. This reporter once worked with a witty colleague who described the system as Write Once Test Everywhere, as yet another unexpected wrinkle in a JVM caused their application to behave unpredictably. However, the language soon became wildly popular, rapidly becoming the backbone of many enterprises. [...]

However, the platform's ubiquity has meant that alternatives exist to Oracle Java, and the language's popularity is undiminished by so-called "predatory licensing tactics." Over 30 years, Java has moved from an upstart new language to something enterprises have come to depend on. Yes, it may not have the shiny baubles demanded by the AI applications of today, but it continues to be the foundation for much of today's modern software development. A thriving ecosystem and a vast community of enthusiasts mean that Java remains more than relevant as it heads into its fourth decade.

Microsoft

The Information: Microsoft Engineers Forced To Dig Their Own AI Graves

Longtime Slashdot reader theodp writes: In what reads a bit like a Sopranos plot, The Information suggests some of those in the recent batch of terminated Microsoft engineers may have in effect been forced to dig their own AI graves.

The (paywalled) story begins: "Jeff Hulse, a Microsoft vice president who oversees roughly 400 software engineers, told the team in recent months to use the company's artificial intelligence chatbot, powered by OpenAI, to generate half the computer code they write, according to a person who heard the remarks. That would represent an increase from the 20% to 30% of code AI currently produces at the company, and shows how rapidly Microsoft is moving to incorporate such technology. Then on Tuesday, Microsoft laid off more than a dozen engineers on Hulse's team as part of a broader layoff of 6,000 people across the company that appeared to hit engineers harder than other types of roles, this person said."

The report comes as tech company CEOs have taken to boasting in earnings calls, tech conferences, and public statements that their AI is responsible for an ever-increasing share of the code written at their organizations. Microsoft's recent job cuts hit coders the hardest. So how much credence should one place on CEOs' claims of AI programming productivity gains -- which researchers have struggled to measure for 50+ years -- if engineers are forced to increase their use of AI, boosting the numbers their far-removed-from-programming CEOs are presenting to Wall Street?
Businesses

Tech Job Market Is Shrinking as AI Reshapes Industry Requirements (msn.com)

The US tech sector shed 214,000 jobs in April amid continuing economic uncertainty, according to CompTIA analysis of Bureau of Labor Statistics data. Companies are extending hiring timelines to two or three times longer than last year while significantly raising skill requirements, particularly for AI competencies.

"It's the great hesitation," said George Denlinger of Robert Half, noting employers now demand 10-12 skills instead of 6-7 previously. Entry-level programming positions are disappearing as AI assumes those functions, with Janco Associates CEO Victor Janulaitis observing that "a job that has been eliminated from almost all IT departments is an entry-level IT programmer."
AI

Why We're Unlikely to Get Artificial General Intelligence Any Time Soon (msn.com)

OpenAI CEO Sam Altman believes Artificial General Intelligence could arrive within the next few years. But the speculations of some technologists "are getting ahead of reality," writes the New York Times, adding that many scientists "say no one will reach AGI without a new idea — something beyond the powerful neural networks that merely find patterns in data. That new idea could arrive tomorrow. But even then, the industry would need years to develop it." "The technology we're building today is not sufficient to get there," said Nick Frosst, a founder of the AI startup Cohere who previously worked as a researcher at Google and studied under the most revered AI researcher of the last 50 years. "What we are building now are things that take in words and predict the next most likely word, or they take in pixels and predict the next most likely pixel. That's very different from what you and I do." In a recent survey of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society that includes some of the most respected researchers in the field, more than three-quarters of respondents said the methods used to build today's technology were unlikely to lead to AGI.
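
Frosst's "predict the next most likely word" description is easy to see first-hand. Here is a small sketch using the Hugging Face transformers library and the freely available GPT-2 model (any causal language model behaves the same way at this level; the prompt is arbitrary):

# Inspect a language model's next-token distribution, the core
# operation Frosst describes. Uses the small open GPT-2 model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Softmax over the last position yields a probability for every token
# in the vocabulary; text generation just keeps sampling from this.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {float(p):.3f}")

Whether stacking and scaling this mechanism can ever amount to general intelligence is precisely what the researchers quoted here dispute.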

Opinions differ in part because scientists cannot even agree on a way of defining human intelligence, arguing endlessly over the merits and flaws of IQ tests and other benchmarks. Comparing our own brains to machines is even more subjective. This means that identifying AGI is essentially a matter of opinion.... And scientists have no hard evidence that today's technologies are capable of performing even some of the simpler things the brain can do, like recognizing irony or feeling empathy. Claims of AGI's imminent arrival are based on statistical extrapolations — and wishful thinking. According to various benchmark tests, today's technologies are improving at a consistent rate in some notable areas, like math and computer programming. But these tests describe only a small part of what people can do.

Humans know how to deal with a chaotic and constantly changing world. Machines struggle to master the unexpected — the challenges, small and large, that do not look like what has happened in the past. Humans can dream up ideas that the world has never seen. Machines typically repeat or enhance what they have seen before. That is why Frosst and other sceptics say pushing machines to human-level intelligence will require at least one big idea that the world's technologists have not yet dreamed up. There is no way of knowing how long that will take. "A system that's better than humans in one way will not necessarily be better in other ways," Harvard University cognitive scientist Steven Pinker said. "There's just no such thing as an automatic, omniscient, omnipotent solver of every problem, including ones we haven't even thought of yet. There's a temptation to engage in a kind of magical thinking. But these systems are not miracles. They are very impressive gadgets."

While Google's AlphaGo could beat humans in a game with "a small, limited set of rules," the article points out that the real world "is bounded only by the laws of physics. Modelling the entirety of the real world is well beyond today's machines, so how can anyone be sure that AGI — let alone superintelligence — is just around the corner?" And they offer this alternative perspective from Matteo Pasquinelli, a professor of the philosophy of science at Ca' Foscari University in Venice, Italy.

"AI needs us: living beings, producing constantly, feeding the machine. It needs the originality of our ideas and our lives."
