Programming

Is It Time to Stop Saying 'Learn to Code'? (vox.com) 147

Long-time Slashdot reader theodp writes: According to Google Trends, peak "Learn to Code" occurred in early 2019 when laid-off Buzzfeed and Huffpost journalists were taunted with the phrase on Twitter... As Meta founder and CEO Mark Zuckerberg recently put it, "We're in a different world." Indeed. Encouraging kids to pursue CS careers in Code.org's viral 2013 launch video, Zuckerberg explained, "Our policy at Facebook is literally to hire as many talented engineers as we can find."

In Learning to Code Isn't Enough, a new MIT Technology Review article, Joy Lisi Rankin reports on the long history of learn-to-code efforts, which date back to the 1960s. "Then as now," Lisi Rankin writes, "just learning to code is neither a pathway to a stable financial future for people from economically precarious backgrounds nor a panacea for the inadequacies of the educational system."

But is that really true? Vox does note that the latest round of layoffs at Meta "is impacting workers in core technical roles like data scientists and software engineers — positions once thought to be beyond reproach." Yet while that's also true at other companies, those laid-off tech workers seem to be finding similar positions in other industries: Software engineers were the most overrepresented position in layoffs in 2023, relative to their employment, according to data requested by Vox from workforce data company Revelio Labs. Last year, when major tech layoffs first began, recruiters and customer success specialists experienced the most outsize impact. So far this year, nearly 20 percent of the 170,000 tech company layoffs were software engineers, even though they made up roughly 14 percent of employees at these companies. "Early layoffs were dominated by recruiters, which is forgoing future hiring," Revelio senior economist Reyhan Ayas told Vox. "Whereas in 2023 we see a shift toward more core engineering and software engineering, which signals a change in focus of current business priorities."

In other words, tech companies aren't just trimming the fat by firing people who fill out their extensive ecosystem, which ranges from marketers to massage therapists. They're also, many for the first time, making cuts to the people who build the very products they're known for, and who enjoyed a sort of revered status since they, like the founders of the companies, were coders. Software engineers are still important, but they don't have the power they used to...

The latest monthly jobs report by tech industry association CompTIA found that even though employment at tech companies (which includes all roles at those companies) declined slightly in March, employment in technical occupations across industry sectors increased by nearly 200,000 positions. So even if tech companies are laying off tech workers, other industries are snatching them up. Unfortunately for software engineers and the like, that means they might also have to follow those industries' pay schemes. The average software engineer base pay in the US is $90,000, according to PayScale, but can be substantially higher at tech firms like Facebook, where such workers also get bonuses and stock options.

Programming

New Version of Rust Speeds Compilation With Less Debugging Info By Default (phoronix.com) 24

The Rust team released a new version Thursday — Rust 1.69.0 — boasting over 3,000 new commits from over 500 contributors.

Phoronix highlights two new improvements: To speed up compilation, debug information is no longer included in build scripts by default as of Rust 1.69. Cargo will avoid emitting debug information in build scripts by default — leading to less informative backtraces in build scripts when problems arise, but faster builds by default. Those wanting the debug information emitted can now set the debug flag in their Cargo.toml configuration.
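For anyone who wants the old behavior back, the Cargo.toml knob looks like this (a sketch based on the Rust 1.69 release notes; adjust profile names to your project's setup):

```toml
# Restore debug info for build scripts, which Cargo no longer
# emits by default as of Rust 1.69.
[profile.dev.build-override]
debug = true

[profile.release.build-override]
debug = true
```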

The Cargo build shipped by Rust 1.69 is also now capable of suggesting fixes automatically for some of the generated warnings. Cargo will also suggest using "cargo fix" / "cargo clippy --fix" when it knows the errors can be automatically fixed.

Programming

Rust Foundation Apologizes For Proposed Trademark Changes, Promises Improvement (theregister.com) 37

"The Rust Foundation on Monday apologized for confusion caused by the organization's proposed trademark policy changes," reports the Register.

The Foundation now says their proposed policy "clearly needs improvement" and "there are many valid critiques of the initial draft," promising to address them and adopt a more transparent process (with a report summarizing the feedback soon). From the Register's report: The foundation, which provides financial and legal support for the memory-safe programming language, had proposed fresh rules on the use of the word Rust and its logo, which included the recommendation that people not use 'Rust' in their Rust crate names, eg: vulture-rs would be preferred over vulture-rust. These draft changes triggered a backlash... Over the weekend, Rust creator Graydon Hoare voiced support for the community's objections in a Reddit discussion thread, in response to a post by programmer Andrew Gallant, a former member of the Rust moderation team, who argued the new policy was not all that different from the old one.

"Open them up side by side — old and new — and look at what they each say about, specifically, package names, project names, repos or websites using the word 'rust', or modified versions of the logo used for small groups or projects," wrote Hoare. "These are specifically the things people are upset about, because they all changed from 'acceptable' to 'prohibited' when 'clarifying' the policy. And those are specifically things that everyone in the community does, and has done, for years. There are zillions of packages, projects, repos, websites and groups using the names and logo this way, as the old policy said they could. The new policy tells them all to stop."

Long-time open source advocate Bruce Perens told the Register that Rust's trademark policy "goes far awry of fair use which is legally permitted. Books on Rust will always have its name in their title, commercial products will be advertised as being written in Rust, being compatible with Rust, or compiling Rust. But the policy attempts to deny permission for these things. A proper trademark policy prevents others from representing that their product is Rust or is endorsed by the trademark holder of Rust. That's really as much as you can ever enforce, so there's no sense in a policy that asks for more."

The Register also spoke to Ashley Williams, a former member of the Rust core team and the original executive director and founder of the Rust Foundation, who argued upheaval in Rust's governance over the past year led to a team with less experience dealing with the Rust community. "I think a couple of very passionate people participated in the trademark working group and they didn't involve a lot of people who have even basic experience interacting with the community. So really classic community behaviors ended up getting prohibited in that [draft] policy. And that's really why everybody got upset. The policy ultimately said, 'a thing that you do all the time as a way of contributing to the Rust community is now against our policy.'"
AI

ChatGPT Creates Mostly Insecure Code, But Won't Tell You Unless You Ask 80

ChatGPT, OpenAI's large language model for chatbots, not only produces mostly insecure code but also fails to alert users to its inadequacies despite being capable of pointing out its shortcomings. The Register reports: Amid the frenzy of academic interest in the possibilities and limitations of large language models, four researchers affiliated with Université du Québec, in Canada, have delved into the security of code generated by ChatGPT, the non-intelligent, text-regurgitating bot from OpenAI. In a pre-press paper titled, "How Secure is Code Generated by ChatGPT?" computer scientists Raphael Khoury, Anderson Avila, Jacob Brunelle, and Baba Mamadou Camara answer the question with research that can be summarized as "not very."

"The results were worrisome," the authors state in their paper. "We found that, in several cases, the code generated by ChatGPT fell well below minimal security standards applicable in most contexts. In fact, when prodded to whether or not the produced code was secure, ChatGPT was able to recognize that it was not." [...] In all, ChatGPT managed to generate just five secure programs out of 21 on its first attempt. After further prompting to correct its missteps, the large language model managed to produce seven more secure apps -- though that's "secure" only as it pertains to the specific vulnerability being evaluated. It's not an assertion that the final code is free of any other exploitable condition. [...]

The academics observe in their paper that part of the problem appears to arise from ChatGPT not assuming an adversarial model of code execution. The model, they say, "repeatedly informed us that security problems can be circumvented simply by 'not feeding an invalid input' to the vulnerable program it has created." Yet, they say, "ChatGPT seems aware of -- and indeed readily admits -- the presence of critical vulnerabilities in the code it suggests." It just doesn't say anything unless asked to evaluate the security of its own code suggestions.

Initially, ChatGPT's response to security concerns was to recommend only using valid inputs -- something of a non-starter in the real world. It was only afterward, when prompted to remediate problems, that the AI model provided useful guidance. That's not ideal, the authors suggest, because knowing which questions to ask presupposes familiarity with specific vulnerabilities and coding techniques. The authors also point out that there's ethical inconsistency in the fact that ChatGPT will refuse to create attack code but will create vulnerable code.
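The excerpt doesn't reproduce the paper's test programs, but the class of flaw is familiar. A hedged illustration (the function names and the path-traversal scenario here are an invented example, not taken from the study): the insecure version "works" only if callers never supply a hostile filename — exactly the "just don't feed it invalid input" defense the researchers criticize — while the hardened version validates the resolved path.

```python
import os

def read_user_file_insecure(base_dir: str, filename: str) -> str:
    # Insecure: trusts the caller-supplied filename, so a value like
    # "../../etc/passwd" escapes base_dir (a path-traversal vulnerability).
    with open(os.path.join(base_dir, filename)) as f:
        return f.read()

def read_user_file_secure(base_dir: str, filename: str) -> str:
    # Hardened variant: resolve the final path and refuse anything that
    # lands outside base_dir, instead of trusting callers to behave.
    base = os.path.realpath(base_dir)
    target = os.path.realpath(os.path.join(base, filename))
    if os.path.commonpath([base, target]) != base:
        raise ValueError("path escapes base directory")
    with open(target) as f:
        return f.read()
```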
AI

Stack Overflow Will Charge AI Giants For Training Data (wired.com) 31

An anonymous reader quotes a report from Wired: Stack Overflow, a popular internet forum for computer programming help, plans to begin charging large AI developers as soon as the middle of this year for access to the 50 million questions and answers on its service, CEO Prashanth Chandrasekar says. The site has more than 20 million registered users. Stack Overflow's decision to seek compensation from companies tapping its data, part of a broader generative AI strategy, has not been previously reported. It follows an announcement by Reddit this week that it will begin charging some AI developers to access its own content starting in June.

"Community platforms that fuel LLMs absolutely should be compensated for their contributions so that companies like us can reinvest back into our communities to continue to make them thrive," Stack Overflow's Chandrasekar says. "We're very supportive of Reddit's approach." Chandrasekar described the potential additional revenue as vital to ensuring Stack Overflow can keep attracting users and maintaining high-quality information. He argues that will also help future chatbots, which need "to be trained on something that's progressing knowledge forward. They need new knowledge to be created." But fencing off valuable data also could deter some AI training and slow improvement of LLMs, which are a threat to any service that people turn to for information and conversation. Chandrasekar says proper licensing will only help accelerate development of high-quality LLMs.

Chandrasekar says that LLM developers are violating Stack Overflow's terms of service. Users own the content they post on Stack Overflow, as outlined in its TOS, but it all falls under a Creative Commons license that requires anyone later using the data to mention where it came from. When AI companies sell their models to customers, they "are unable to attribute each and every one of the community members whose questions and answers were used to train the model, thereby breaching the Creative Commons license," Chandrasekar says. Neither Stack Overflow nor Reddit has released pricing information.
"Both Stack Overflow and Reddit will continue to license data for free to some people and companies," notes Wired. "Chandrasekar says Stack Overflow wants remuneration only from companies developing LLMs for big, commercial purposes."

"When people start charging for products that are built on community-built sites like ours, that's where it's not fair use," he says.
Programming

Linux Foundation Launches New Organization To Maintain TLA+ (techcrunch.com) 16

The Linux Foundation, the nonprofit tech consortium that manages various open source efforts, today announced the launch of the TLA+ Foundation to promote the adoption and development of the TLA+ programming language. AWS, Oracle and Microsoft are among the inaugural members. From a report: What is the TLA+ programming language, you ask? It's a formal "spec" language developed by computer scientist and mathematician Leslie Lamport. Best known for his seminal work in distributed systems, Lamport -- now a scientist at Microsoft Research -- created TLA+ to design, model, document and verify software programs -- particularly those of the concurrent and distributed variety.

To give a few examples, Elastic, the organization behind the Elasticsearch search engine, used TLA+ to verify the correctness of their distributed systems algorithms. Elsewhere, Thales, the electrical systems manufacturing firm, used TLA+ to model and develop fault-tolerant modules for its industrial control platform. "TLA+ is unique in that it's intended for specifying a system, rather than for implementing software," a Linux Foundation spokesperson told TechCrunch via email. "Based on mathematical concepts, notably set theory and temporal logic, TLA+ allows for the expression of a system's desired correctness properties in a formal and rigorous manner."
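TLA+ has its own notation and tooling (the TLC model checker), but the core idea it automates — exhaustively exploring a system's reachable states and checking that an invariant holds in every one of them — can be sketched in a few lines of ordinary code. Everything below (the `check` helper and the toy hour-clock system, a classic Lamport example) is an illustrative sketch in Python, not TLA+ syntax:

```python
from collections import deque

def check(init_states, next_states, invariant):
    """Breadth-first exploration of all reachable states; returns a
    counterexample trace if the invariant ever fails, else None."""
    seen = set(init_states)
    queue = deque((s, (s,)) for s in init_states)
    while queue:
        state, trace = queue.popleft()
        if not invariant(state):
            return trace  # invariant violated along this trace
        for nxt in next_states(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, trace + (nxt,)))
    return None

# Toy system: an hour clock with hr in 1..12, ticking to hr % 12 + 1.
# The invariant holds in every reachable state, so check returns None.
trace = check(
    init_states={1},
    next_states=lambda hr: {hr % 12 + 1},
    invariant=lambda hr: 1 <= hr <= 12,
)
```

A real specification language adds far more (temporal properties, fairness, refinement), but this is the shape of the guarantee: no reachable state, however unlikely, escapes the invariant.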

Google

Google's Bard AI Chatbot Can Now Help You Code and Create Functions For Google Sheets (theverge.com) 18

Google is updating its Bard AI chatbot to help developers write and debug code. Rivals like ChatGPT and Bing AI have supported code generation, but Google says it has been "one of the top requests" it has received since opening up access to Bard last month. From a report: Bard can now generate code, debug existing code, help explain lines of code, and even write functions for Google Sheets. "We're launching these capabilities in more than 20 programming languages including C++, Go, Java, Javascript, Python and Typescript," explains Paige Bailey, group product manager for Google Research, in a blog post. You can ask Bard to explain code snippets or explain code within GitHub repos similar to how Microsoft-owned GitHub is implementing a ChatGPT-like assistant with Copilot. Bard will also debug code that you supply or even its own code if it made some errors or the output wasn't what you were looking for.
AI

Reddit Wants To Get Paid for Helping To Teach Big AI Systems (nytimes.com) 46

Reddit has long been a forum for discussion on a huge variety of topics, and companies like Google and OpenAI have been using it in their A.I. projects. From a report: Reddit has long been a hot spot for conversation on the internet. About 57 million people visit the site every day to chat about topics as varied as makeup, video games and pointers for power washing driveways. In recent years, Reddit's array of chats also have been a free teaching aid for companies like Google, OpenAI and Microsoft. Those companies are using Reddit's conversations in the development of giant artificial intelligence systems that many in Silicon Valley think are on their way to becoming the tech industry's next big thing. Now Reddit wants to be paid for it.

The company said on Tuesday that it planned to begin charging companies for access to its application programming interface, or A.P.I., the method through which outside entities can download and process the social network's vast selection of person-to-person conversations. "The Reddit corpus of data is really valuable," Steve Huffman, founder and chief executive of Reddit, said in an interview. "But we don't need to give all of that value to some of the largest companies in the world for free." The move marks one of the first significant examples of a social network's charging for access to the conversations it hosts for the purpose of developing A.I. systems like ChatGPT, OpenAI's popular program. Those new A.I. systems could one day lead to big businesses, but they aren't likely to help companies like Reddit very much. In fact, they could be used to create competitors -- automated duplicates to Reddit's conversations.

Programming

Would This OpenJDK Proposal Make Java Easier to Learn? (infoworld.com) 145

"Java would become easier for students to learn under a proposal to introduce flexible main methods and anonymous main classes to the language," reports InfoWorld.

Details of the plan include enhancing the protocol by which Java programs are launched to be flexible, in particular to allow the String[] parameter of main methods to be omitted and allow main methods to be neither public nor static; the Hello World program would be simplified. Anonymous main classes would be introduced to make the class declaration implicit.
It's currently a disabled-by-default preview language feature in JDK 21 (scheduled for General Availability in September), included to gather developer feedback based on real-world use (which may lead to it becoming permanent in the future). This wouldn't introduce a separate beginner's dialect or beginners' toolchain of Java, emphasizes Java Enhancement Proposal (JEP) 445. "Student programs should be compiled and run with the same tools that compile and run any Java program."

But it argues that a simple "Hello World" program today has "too much clutter...too much code, too many concepts, too many constructs — for what the program does."


public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, World!");
    }
}



Anonymous main classes would make the public class declaration implicit (while also sparing newbies the "mysterious" static modifier and the String[] args parameter). The program is streamlined to:


void main() {
    System.out.println("Hello, World!");
}


The proposal argues this change reduces "the ceremony of writing simple programs such as scripts and command-line utilities." And since Java is intended to be a first programming language, this change would mean students "can write their first programs without needing to understand language features designed for large programs," using instead "streamlined declarations for single-class programs". (This allows students and educators to explore language features more gradually.) A Hello, World! program written as an anonymous main class is much more focused on what the program actually does, omitting concepts and constructs it does not need. Even so, all members are interpreted just as they are in an ordinary class. To evolve an anonymous main class into an ordinary class, all we need to do is wrap its declaration, excluding import statements, inside an explicit class declaration.
Programming

Undercutting Microsoft, Amazon Offers Free Access to Its AI Coding Assistant 'CodeWhisperer' (theverge.com) 45

Amazon is making its AI-powered coding assistant CodeWhisperer free for individual developers, reports the Verge, "undercutting the $10 per month pricing of its Microsoft-made rival." Amazon launched CodeWhisperer as a preview last year, which developers can use within various integrated development environments (IDEs), like Visual Studio Code, to generate lines of code based on a text-based prompt....

CodeWhisperer automatically filters out any code suggestions that are potentially biased or unfair and flags any code that's similar to open-source training data. It also comes with security scanning features that can identify vulnerabilities within a developer's code, while providing suggestions to help close any security gaps it uncovers. CodeWhisperer now supports several languages, including Python, Java, JavaScript, TypeScript, and C#, as well as Go, Rust, PHP, Ruby, Kotlin, C, C++, shell scripting, SQL, and Scala.

Here's how Amazon's senior developer advocate pitched the usefulness of their "real-time AI coding companion": Helping to keep developers in their flow is increasingly important as, facing increasing time pressure to get their work done, developers are often forced to break that flow to turn to an internet search, sites such as StackOverflow, or their colleagues for help in completing tasks. While this can help them obtain the starter code they need, it's disruptive as they've had to leave their IDE environment to search or ask questions in a forum or find and ask a colleague — further adding to the disruption. Instead, CodeWhisperer meets developers where they are most productive, providing recommendations in real time as they write code or comments in their IDE. During the preview we ran a productivity challenge, and participants who used CodeWhisperer were 27% more likely to complete tasks successfully and did so an average of 57% faster than those who didn't use CodeWhisperer....

It provides additional data for suggestions — for example, the repository URL and license — when code similar to training data is generated, helping lower the risk of using the code and enabling developers to reuse it with confidence.
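To make the "recommendations as you write comments" workflow concrete, here is an illustrative mock-up: the developer types the comment, and the assistant proposes the function body. The completion below is an invented example of the genre, not actual CodeWhisperer output:

```python
# Prompt typed by the developer:
# "function to check whether a string is a valid IPv4 address"

def is_valid_ipv4(address: str) -> bool:
    # A plausible assistant-style completion: four dot-separated
    # decimal octets, each in the range 0-255.
    parts = address.split(".")
    if len(parts) != 4:
        return False
    for part in parts:
        if not part.isdigit():
            return False
        if not 0 <= int(part) <= 255:
            return False
    return True
```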

Government

Government Cybersecurity Agencies Unite to Urge Secure Software Design Practices (cisa.gov) 38

Several government cybersecurity agencies united to urge secure-by-design and secure-by-default software. Releasing "joint guidance" for software manufacturers were two U.S. security agencies — the FBI and the NSA — joined with the U.S. Cybersecurity and Infrastructure Security Agency and the cybersecurity authorities of Australia, Canada, the United Kingdom, Germany, Netherlands, and New Zealand. "To create a future where technology and associated products are safe for customers," they wrote in a joint statement, "the authoring agencies urge manufacturers to revamp their design and development programs to permit only secure-by-design and -default products to be shipped to customers."

The Washington Post reports: Software manufacturers should put an end to default passwords, write in safer programming languages and establish vulnerability disclosure programs for reporting flaws, a collection of U.S. and international government agencies said in new guidelines Thursday. [The guidelines also urge rigorous code reviews.]

The "principles and approaches" document, which isn't mandatory but lays out the agencies' views on securing software, is the first major step by the Biden administration as part of its push to make software products secure as part of the design process, and to make their default settings secure as well. It's part of a potentially contentious multiyear effort that aims to shift the way software makers secure their products. It was a key feature of the administration's national cybersecurity strategy, which was released last month and emphasized shifting the burden of security from consumers — who have to manage frequent software updates — to the companies that make often insecure products... The administration has also raised the prospect of legislation on secure-by-design and secure-by-default, but officials have said it could be years away....

The [international affairs think tank] Atlantic Council's Cyber Statecraft Initiative has praised the Biden administration's desire to address economic incentives for insecurity. Right now, the costs of cyberattacks fall on users more than they do tech providers, according to many policymakers. "They're on a righteous mission," Trey Herr, director of the Atlantic Council initiative, told me. If today's guidelines are the beginning of the discussion on secure-by-design and secure-by-default, Herr said, "this is a really strong start, and an important one."

"It really takes aim at security features as a profit center," which for some companies has led to a lot of financial growth, Herr said. "I do think that's going to rub people the wrong way and quick, but that's good. That's a good fight."

In the statement CISA's director says consumers also have a role to play in this transition. "As software now powers the critical systems and services we collectively rely upon every day, consumers must demand that manufacturers prioritize product safety above all else."

Among other things, the new guidelines say that manufacturers "are encouraged to make hard tradeoffs and investments, including those that will be 'invisible' to the customers, such as migrating to programming languages that eliminate widespread vulnerabilities."
Twitter

Twitter Partners With eToro To Let Users Trade Stocks, Crypto (cnbc.com) 23

Twitter will let its users access stocks, cryptocurrencies and other financial assets through a partnership with eToro, a social trading company. From a report: Starting Thursday, a new feature will be rolled out on the Twitter app. It will allow users to view market charts on an expanded range of financial instruments and buy and sell stocks and other assets from eToro, the company told CNBC exclusively. Currently, it's already possible to view real-time trading data from TradingView on index funds like the S&P 500 and shares of some companies such as Tesla. That can be done using Twitter's "cashtags" feature -- you search for a ticker symbol and insert a dollar sign in front of it, after which the app will show you price information from TradingView using an API (application programming interface).

With the eToro partnership, Twitter cashtags will be expanded to cover far more instruments and asset classes, an eToro spokesperson told CNBC. You'll also be able to click a button that says "view on eToro," which takes you through to eToro's site, and then buy and sell assets on its platform. EToro uses TradingView as its market data partner. "As we've grown over the past three years immensely, we've seen more and more of our users interact on Twitter [and] educate themselves about the markets," Yoni Assia, eToro's CEO, told CNBC in an interview.

AI

Developer Creates 'Self-Healing' Programs That Fix Themselves Thanks To AI 137

An anonymous reader quotes a report from Ars Technica: Debugging a faulty program can be frustrating, so why not let AI do it for you? That's what a developer who goes by "BioBootloader" did by creating Wolverine, a program that can give Python programs "regenerative healing abilities," reports Hackaday. (Yep, just like the Marvel superhero.) "Run your scripts with it and when they crash, GPT-4 edits them and explains what went wrong," wrote BioBootloader in a tweet that accompanied a demonstration video. "Even if you have many bugs it'll repeatedly rerun until everything is fixed."

In the demo video for Wolverine, BioBootloader shows a side-by-side window display, with Python code on the left and Wolverine results on the right in a terminal. He loads a custom calculator script in which he adds a few bugs on purpose, then executes it. "It runs it, it sees the crash, but then it goes and talks to GPT-4 to try to figure out how to fix it," he says. GPT-4 returns an explanation for the program's errors, shows the changes that it tries to make, then re-runs the program. Upon seeing new errors, GPT-4 fixes the code again, and then it runs correctly. In the end, the original Python file contains the changes added by GPT-4.
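The run/crash/fix/rerun loop the demo describes can be sketched in a few lines. This is not Wolverine's actual code: `suggest_fix` is a placeholder for the GPT-4 call, and the whole thing is a minimal sketch of the control flow only (no diffing, no explanation of changes, no file rewriting):

```python
import traceback

def self_heal(source: str, suggest_fix, max_attempts: int = 3) -> str:
    """Run `source`; when it crashes, pass the code and its traceback to
    `suggest_fix` (a stand-in for the GPT-4 call) and retry with whatever
    revised code comes back. Returns the source that finally ran cleanly."""
    for _ in range(max_attempts):
        try:
            exec(compile(source, "<script>", "exec"), {})
            return source  # ran without crashing
        except Exception:
            source = suggest_fix(source, traceback.format_exc())
    raise RuntimeError("script still crashing after max_attempts fixes")
```

A stub "fixer" makes the loop concrete: `self_heal("total = 1 / 0", lambda src, tb: "total = 1 / 1")` returns the repaired source after one round trip.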
AI

Khan Academy Chief Says GPT-4 is Ready To Be a Tutor (axios.com) 58

For all the high-profile examples of ChatGPT getting facts and even basic math wrong, Khan Academy founder Sal Khan says the latest version of the generative AI engine makes a pretty good tutor. From a report: "This technology is very powerful," Khan told Axios in a recent interview. "It's getting better." Khan Academy was among the early users of GPT-4 that OpenAI touted when it released the updated engine. This week, two more school districts (Newark, N.J. and Hobart, Indiana) are joining the pilot of Khanmigo, the AI-assisted tutor. With the two new districts, a total of 425 teachers and students are testing Khanmigo.

The chatbot works much like a real-life or online tutor, looking at students' work and helping them when they get stuck. In a math problem, for example, Khanmigo can detect not just whether a student got an answer right or wrong, but also where they may have gone astray in their reasoning. ChatGPT and its brethren have been highly controversial -- especially in education, where some schools are banning the use of the technology. Concerns range from the engines' propensity to be confidently wrong (or "hallucinate") to worries about students using the systems to write their papers. Khan said he understands these fears, but also notes that many of those criticizing the technology are also using it themselves and even letting their kids make use of it. And, for all its flaws, he says today's AI offers the opportunity for more kids -- in both rich and developing countries -- to get personalized learning. "The time you need tutoring is right when you are doing the work, often when you are in class," Khan said.

Programming

Rust Foundation Solicits Feedback on Updated Policy for Trademarks (google.com) 41

"Rust" and "Cargo" are registered trademarks held by the Rust Foundation — the independent non-profit supporting Rust's maintainers. In August 1,000 people responded to the foundation's Trademark Policy Review Survey, after which the foundation invited any interested individuals to join their Trademark Policy Working Group (which also included Rust Project leaders). They've now created a draft of an updated policy for feedback...

Crate, RS, "Rustacean," and the logo of Ferris the crab are all available for use by anyone consistent with their definition, with no special permission required. Here's how the document's quick reference describes other common use-cases:
  • Selling Goods — Unless explicitly approved, use of the Rust name or Logo is not allowed for the purposes of selling products/promotional goods for gain/profit, or for registering domain names. For example, it is not permitted to sell stickers of the Rust logo in an online shop for your personal profit.
  • Showing Support of Rust — When showing your support of the Rust Project on a personal site or blog, you may use the Rust name or Logo, as long as you abide by all the requirements listed in the Policy. You may use the Rust name or Logo in social media handles, avatars, and emojis to demonstrate Rust Project support in a manner that is decorative, so long as you don't suggest commercial Rust affiliation.
  • Inclusion of the Marks in Educational Materials — You may use the Rust name in book and article titles and the Logo in graphic components, so long as you make it clear that the Rust Project or Foundation has not reviewed/approved/endorsed your content.

There's also a FAQ, answering questions like "Can I use the Rust logo as my Twitter Avatar?" The updated policy draft says "We consider social media avatars on personal accounts to be fair use. On the other hand, using Rust trademarks in corporate social media bios/profile pictures is prohibited.... In general, we prohibit the modification of the Rust logo for any purpose, except to scale it. This includes distortion, transparency, color-changes affiliated with for-profit brands or political ideologies. On the other hand, if you would like to change the colors of the Rust logo to communicate allegiance with a community movement, we simply ask that you run the proposed logo change by us..."

And for swag at events using the Rust logo, "Merch developed for freebies/giveaways is normally fine, however you need approval to use the Rust Word and/or Logo to run a for-profit event. You are free to use Ferris the crab without permission... If your event is for-profit, you will need approval to use the Rust name or Logo. If you are simply covering costs and the event is non-profit, you may use the Rust name or Logo as long as it is clear that the event is not endorsed by the Rust Foundation. You are free to use Ferris the crab without permission."


Programming

Raspberry Pi Launches Online Code Editor to Help Kids Learn (tomshardware.com) 28

An anonymous reader shares this report from Tom's Hardware: When we think about Raspberry Pi, we normally picture single-board computers, but the Raspberry Pi Foundation was started to help kids learn about computers and it wants to help whether or not you own its hardware. The non-profit arm of Raspberry Pi this week released its new, browser-based code editor that's designed for young people (or any people) who are learning.

The Raspberry Pi Code Editor, which is considered to be in beta, is available to everyone for free right now at editor.raspberrypi.org. The editor is currently designed to work with Python only, but the organization says that support for other languages such as HTML, JavaScript and CSS is coming....

The Raspberry Pi Foundation already had a nice set of Python tutorials on its site, but it has adapted some of them to open sample code directly in the online editor.... The Pi Foundation says that it plans to add a number of features to the Code Editor, including sharing and collaboration. The organization also plans to release the editor as an open-source project so anyone can modify it.

There's a pane showing your code's output when you click the "Run" button (plus a smaller pane for adding additional files to a project).

Tom's Hardware notes that "Since the entire programming experience takes place online, there's no way (at least right now) to use Python to control local hardware on your PC or your Raspberry Pi." But on the plus side, "If you create a free account on raspberrypi.org, which I did, the system will save all of your projects in the cloud and you can reload them any time you want. You can also download all the files in a project as a .zip file."

Programming

C Rival 'Zig' Cracks Tiobe Index Top 50, Go Remains in Top 10 (infoworld.com) 167

InfoWorld reports: Zig, a general purpose programming language that interacts with C/C++ programs and promises to be a modern alternative to C, has made an appearance in the Tiobe index of programming language popularity. Zig entered the top 50 in the April edition of the Tiobe Programming Community Index, ranking 46th, albeit with a rating of just 0.19%. By contrast, the Google-promoted Carbon language, positioned as an experimental successor to C++, ranked just 168th.
Tiobe CEO Paul Jansen argues that high-performance languages "are booming due to the vast amounts of data that needs to be processed nowadays. As a result, C and C++ are doing well in the top 10 and Rust seems to be a keeper in the top 20." Zig has all the nice features of C and C++ (such as explicit memory management enhanced with option types) and has abandoned the not-so-nice features (such as the dreadful preprocessing). Entering the top 50 is no guarantee to become a success, but it is at least a first noteworthy step. Good luck Zig!
Tiobe bases its monthly ranking of programming language popularity on search engine results for courses, third party vendors, and engineers. Here's what it calculated for the most popular programming languages in April of 2023:
  • Python
  • C
  • Java
  • C++
  • C#
  • Visual Basic
  • JavaScript
  • SQL
  • PHP
  • Go

April's top 10 was nearly identical to the rankings a year ago, but assembly language fell from 2022's #8 position to #12 in 2023. SQL and PHP each rose one rank (into 2023's #8 and #9 positions) — and as in March, the rankings now show Go as the 10th most popular programming language.


Programming

ECMAScript 2023 Spec for JavaScript Includes New Methods for Arrays (infoworld.com) 34

Four new capabilities are planned for the JavaScript specification's next update, reports InfoWorld. Based on a list of finished proposals, InfoWorld expects the following in ECMAScript 2023: - Array find from last, a proposal for .findLast() and .findLastIndex() methods on arrays and typed arrays...

- Permitting symbols as WeakMap keys, a proposal that extends the WeakMap API to allow the use of unique symbols as keys. Currently, WeakMaps allow only objects as keys.

- Change array by copy, a proposal that provides additional methods on Array.prototype and TypedArray.prototype to enable changes on the array by returning a new copy of it with the change.

- Hashbang grammar, a proposal to match the de facto usage in some CLI JS hosts that allow for Shebangs/Hashbang. These hosts strip the hashbang to generate valid JS source texts before passing to JS engines. This plan would move the stripping to engines and unify and standardize how that is done.

Programming

Collabora Developer Explores Rust Support for the Linux Kernel's V4L2/Media Subsystem (phoronix.com) 5

On Thursday, patches were submitted by Collabora's Daniel Almeida to bring Rust infrastructure to the Video for Linux 2 (V4L2) framework within Linux's media subsystem.

Phoronix reports: This provides just enough for working with a prototype VirtIO camera driver written in Rust along with a Rust sample driver. These initial patches are just intended to start the discussion around V4L2 Rust driver support and the actual upstreaming of the Rust support for these camera drivers may still be some ways down the line.

Programming

Samsung Software Engineers Busted For Pasting Proprietary Code Into ChatGPT (pcmag.com) 65

Multiple employees of Samsung's Korea-based semiconductor business plugged lines of confidential code into ChatGPT, effectively leaking corporate secrets that could be included in the chatbot's future responses to other people around the world. PCMag reports: One employee copied buggy source code from a semiconductor database into the chatbot and asked it to identify a fix, according to The Economist Korea. Another employee did the same for a different piece of equipment, requesting "code optimization" from ChatGPT. After a third employee asked the AI model to summarize meeting notes, Samsung executives stepped in. The company limited each employee's prompt to ChatGPT to 1,024 bytes.

Just three weeks earlier, Samsung had lifted its ban on employees using ChatGPT over concerns around this issue. After the recent incidents, it's considering reinstating the ban, as well as disciplinary action for the employees, The Economist Korea says. "If a similar accident occurs even after emergency information protection measures are taken, access to ChatGPT may be blocked on the company network," reads an internal memo. "As soon as content is entered into ChatGPT, data is transmitted and stored to an external server, making it impossible for the company to retrieve it."

The OpenAI user guide warns users against this behavior: "We are not able to delete specific prompts from your history. Please don't share any sensitive information in your conversations." It says the system uses all questions and text submitted to it as training data.
