Microsoft

Is Microsoft Working on 'Performant Sound Recognition' AI Technologies? (windowsreport.com) 28

Windows Report speculates on what Microsoft may be working on next, based on a recently published patent for "performant sound recognition AI technologies" (dated April 2, 2024): Microsoft's new technology can recognize many different types of sounds, from doorbells to babies crying to dogs barking, but is not limited to those. It can also recognize coughing or breathing difficulties, or unusual noises such as glass breaking. Most intriguingly, it can recognize and monitor environmental sounds, which can be further processed to let users know if a natural disaster is about to happen...

The neural network generates scores and probabilities for each type of sound event in each segment. This is like guessing what type of sound each segment is and how sure it is about the guess. After that, the system does some post-processing to smooth out the scores and probabilities and generate confidence values for each type of sound for different window sizes.
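The patent doesn't spell out the post-processing math, but the pipeline as described (per-segment class scores, smoothing, then per-window confidence values) could look something like this minimal sketch; the moving-average kernel, the windowing scheme, and the class list are all assumptions:

```python
import numpy as np

# Hypothetical class list; the patent's actual label set is not disclosed.
SOUND_CLASSES = ["doorbell", "baby_crying", "dog_barking", "glass_breaking"]

def smooth_scores(scores: np.ndarray, kernel: int = 5) -> np.ndarray:
    """Moving-average smoothing of per-segment class scores.

    scores: (num_segments, num_classes) raw model outputs in [0, 1].
    """
    weights = np.ones(kernel) / kernel
    return np.apply_along_axis(
        lambda col: np.convolve(col, weights, mode="same"), 0, scores
    )

def windowed_confidence(scores: np.ndarray, window: int) -> np.ndarray:
    """Mean smoothed score per class over each sliding window of segments."""
    smoothed = smooth_scores(scores)
    return np.stack([
        smoothed[i : i + window].mean(axis=0)
        for i in range(len(smoothed) - window + 1)
    ])

# Example: 100 half-second segments scored for four sound classes.
rng = np.random.default_rng(0)
raw_scores = rng.random((100, len(SOUND_CLASSES)))
conf = windowed_confidence(raw_scores, window=10)  # shape (91, 4)
```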

Ultimately, this technology can be used in various applications. In a smart home device, it can detect when someone breaks into the house by recognizing the sound of glass shattering, or tell when a newborn is hungry or distressed by recognizing the sounds of a baby crying. It can also be used in healthcare to accurately detect lung or heart diseases by recognizing heartbeat sounds, coughing, or breathing difficulties. But one of its most important applications would be to warn casual users of upcoming natural disasters by recognizing and detecting the sounds associated with them.

Thanks to Slashdot reader John Nautu for sharing the article.
Privacy

Four Baseball Teams Now Let Ticket-Holders Enter Using AI-Powered 'Facial Authentication' (sfgate.com) 42

"The San Francisco Giants are one of four teams in Major League Baseball this season offering fans a free shortcut through the gates into the ballpark," writes SFGate.

"The cost? Signing up for the league's 'facial authentication' software through its ticketing app." The Giants are using MLB's new Go-Ahead Entry program, which intends to cut down on wait times for fans entering games. The pitch is simple: Take a selfie through the MLB Ballpark app (which already has your tickets on it), upload the selfie and, once you're approved, breeze through the ticketing lines and into the ballpark. Fans will barely have to slow down at the entrance gate on their way to their seats...

The Philadelphia Phillies were MLB's test team for the technology in 2023. They're joined by the Giants, Nationals and Astros in 2024...

[Major League Baseball] says it won't be saving or storing pictures of faces in a database — and it clearly would really like you to not call this technology facial recognition. "This is not the type of facial recognition that's scanning a crowd and specifically looking for certain kinds of people," Karri Zaremba, a senior vice president at MLB, told ESPN. "It's facial authentication. ... That's the only way in which it's being utilized."

Privacy advocates "have pointed out that the creep of facial recognition technology may be something to be wary of," the article acknowledges. But it adds that using the technology is still completely optional.

SFGate also spoke to the San Francisco Giants' senior vice president of ticket sales, who gushed about the possibility of app users "walking into the ballpark without taking your phone out, or all four of us taking our phones out."
Education

AI's Impact on CS Education Likened to Calculator's Impact on Math Education (acm.org) 102

In Communications of the ACM, Google's VP of Education notes how calculators impacted math education — and wonders whether generative AI will have the same impact on CS education: "Teachers had to find the right amount of long-hand arithmetic and mathematical problem solving for students to do, in order for them to have the 'number sense' to be successful later in algebra and calculus. Too much focus on calculators diminished number sense. We have a similar situation in determining the 'code sense' required for students to be successful in this new realm of automated software engineering. It will take a few iterations to understand exactly what kind of praxis students need in this new era of LLMs to develop sufficient code sense, but now is the time to experiment."
Long-time Slashdot reader theodp notes it's not the first time the Google executive has had to consider "iterating" curriculum: The CACM article echoes earlier comments Google's Education VP made in a featured talk called The Future of Computational Thinking at last year's Blockly Summit. (Blockly is the Google technology that powers the drag-and-drop coding IDEs used for K-12 CS education, including Scratch and Code.org.) Envisioning a world where AI generates code and humans proofread it, Johnson explained: "One can imagine a future where these generative coding systems become so reliable, so capable, and so secure that the amount of time doing low-level coding really decreases for both students and for professionals. So, we see a shift with students to focus more on reading and understanding and assessing generated code and less about actually writing it. [...] I don't anticipate that the need for understanding code is going to go away entirely right away [...] I think there will still be, at least in the near term, a need to read and understand code so that you can assess the reliability, the correctness of generated code. So, I think in the near term there's still going to be a need for that." In the following Q&A, Johnson is caught by surprise when asked whether there will even be a need for Blockly at all in the AI-driven world he described — and the Google VP concedes there may not be.
Transportation

Elon Musk Says Tesla Will Unveil Its Robotaxi on August 8 (cnbc.com) 154

The San Francisco Chronicle reports that Tesla "is poised to roll out its version of a robotaxi later this year, according to CEO Elon Musk." ("Musk made the announcement on social media saying 'Tesla Robotaxi unveil on 8/8.' His cryptic post contained no other details about the forthcoming line of autonomous vehicles.")

Electrek thinks they know what it'll look like. "Through Walter Isaacson's approved biography of Musk, we learned that Tesla Robotaxi will be 'Cybertruck-like'."

8/8 (of the year 2024) would be a Thursday — though CNBC adds a clarification: At Tesla, "unveil" dates do not predict a near-future date for a commercial release of a new product. For example, Tesla unveiled its fully electric heavy-duty truck, the Semi, in 2017 and did not begin deliveries until December 2022. It still produces and sells very few Semis.
"Tesla shares rose over 3% in extended trading after Musk's tweet."
AMD

AMD To Open Source Micro Engine Scheduler Firmware For Radeon GPUs 23

AMD plans to document and open source its Micro Engine Scheduler (MES) firmware for GPUs, giving users more control over Radeon graphics cards. From a report: It's part of a larger effort AMD confirmed earlier this week to make its GPUs more open source at both the software level, with respect to the ROCm stack for GPU programming, and the hardware level. Details were scarce in that initial announcement, and the only concrete thing it introduced was a GitHub tracker.

However, yesterday AMD divulged more details, specifying that one of the things it will make open source is the MES firmware for Radeon GPUs. AMD says it will publish documentation for MES around the end of May, and will then release the source code some time afterward. For George Hotz and his startup, Tiny Corp, this is great news. Throughout March, Hotz had agitated for AMD to make MES open source in order to fix issues he was experiencing with his RX 7900 XTX-powered AI server box. He had talked several times with AMD representatives, and even with the company's CEO, Lisa Su.
AI

Meta Will Require Labels on More AI-Generated Content (theverge.com) 4

Meta is updating its AI-generated content policy and will add a "Made with AI" label beginning next month, the company announced. The policy will apply to content on Instagram, Facebook, and Threads. From a report: Acknowledging that its current policy is "too narrow," Meta says it will start labeling more video, audio, and image content as being AI-generated. Labels will be applied either when users disclose the use of AI tools or when Meta detects "industry standard AI image indicators," though the company didn't provide more detail about its detection system.
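The Verge's report doesn't say what those indicators are, but one industry standard that fits the description is the IPTC DigitalSourceType metadata field, which marks AI-generated images with the value trainedAlgorithmicMedia. A hedged, minimal sketch of such a check (a crude byte scan, not Meta's actual detector, which is undisclosed):

```python
# Crude check for the IPTC "trainedAlgorithmicMedia" marker that some AI
# image generators embed in metadata. Illustrative only: Meta's real
# detection system is undisclosed, and this naive scan misses images whose
# metadata has been stripped or re-encoded.
def has_iptc_ai_marker(path: str) -> bool:
    with open(path, "rb") as f:
        return b"trainedAlgorithmicMedia" in f.read()

print(has_iptc_ai_marker("photo.jpg"))
```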

The changes are informed by recommendations and feedback from Meta's Oversight Board and update the manipulated media policy created in 2020. The old policy prohibited videos created or edited with AI tools to make a person say something they didn't, but it didn't cover the wide range of AI-generated content that has recently flooded the web. "In the last four years, and particularly in the last year, people have developed other kinds of realistic AI-generated content like audio and photos, and this technology is quickly evolving," Meta wrote in a blog post. "As the Board noted, it's equally important to address manipulation that shows a person doing something they didn't do."

AI

AI Seen Cutting Worker Numbers, Survey By Staffing Company Shows (reuters.com) 89

AI will lead to many companies employing fewer people in the next five years, staffing provider Adecco Group said on Friday, in a new survey highlighting the upheaval AI will bring to the workplace. From a report: Some 41% of senior executives expect to have smaller workforces because of AI technology, Adecco said in a report based on a survey of executives at 2,000 large companies worldwide. Generative AI, which can create text, photos and videos in response to open-ended prompts, has spurred both hope that it could eliminate repetitive tasks and fear that it will make some jobs obsolete. [...] The Adecco survey is one of the largest yet on the topic, and follows a 2023 World Economic Forum study which found that 25% of companies expected AI to trigger job losses, while 50% expected the technology to create new roles.
China

China Will Use AI To Disrupt Elections in the US, South Korea and India, Microsoft Warns (theguardian.com) 157

China will attempt to disrupt elections in the US, South Korea and India this year with artificial intelligence-generated content after making a dry run with the presidential poll in Taiwan, Microsoft has warned. From a report: The US tech firm said it expected Chinese state-backed cyber groups to target high-profile elections in 2024, with North Korea also involved, according to a report by the company's threat intelligence team published on Friday. "As populations in India, South Korea and the United States head to the polls, we are likely to see Chinese cyber and influence actors, and to some extent North Korean cyber actors, work toward targeting these elections," the report reads.

Microsoft said that "at a minimum" China will create and distribute AI-generated content through social media that "benefits their positions in these high-profile elections." The company added that the impact of AI-made content was minor but warned that could change. "While the impact of such content in swaying audiences remains low, China's increasing experimentation in augmenting memes, videos and audio will continue -- and may prove effective down the line," said Microsoft. Microsoft said in the report that China had already attempted an AI-generated disinformation campaign in the Taiwan presidential election in January. The company said this was the first time it had seen a state-backed entity using AI-made content in a bid to influence a foreign election.

UPDATE: Last fall, America's State Department "accused the Chinese government of spending billions of dollars annually on a global campaign of disinformation," reports the Wall Street Journal: In an interview, Tom Burt, Microsoft's head of customer security and trust, said China's disinformation operations have become much more active in the past six months, mirroring rising activity of cyberattacks linked to Beijing. "We're seeing them experiment," Burt said. "I'm worried about where it might go next."
AI

A 'Law Firm' of AI Generated Lawyers Is Sending Fake Threats As an SEO Scam (404media.co) 12

An anonymous reader quotes a report from 404 Media: Last week, Ernie Smith, the publisher of the website Tedium, got a "copyright infringement notice" from a law firm called Commonwealth Legal: "We're reaching out on behalf of the Intellectual Property division of a notable entity, in relation to an image connected to our client," it read. [...] In this case, though, the email didn't demand that the photo be taken down or specifically threaten a lawsuit. Instead, it demanded that Smith place a "visible and clickable link" beneath the photo in question to a website called "tech4gods" or the law firm would "take action." Smith began looking into the law firm. And he found that Commonwealth Legal is not real, and that the images of its "lawyers" are AI generated.

The threat to "activate the case No. 86342" is obviously nonsense. Beyond that, Commonwealth Legal's website looks generic and is full of stock photos, though I've seen a lot of generic template websites for real law firms. All of its lawyers have the vacant, thousand-yard stares commonly generated by websites like This Person Does Not Exist, none of them come up in any attorney or LinkedIn searches, and the only reverse image search results for them are for a now-broken website called Generated.Photos, which offered a service to "use AI to generate people online that don't exist, change clothing and modify face and body traits. Download generated people in different postures." "All of the faces scanned were likely AI generated, most likely by a Generative Adversarial Network (GAN) model," Ali Shahriyari, cofounder and CTO of the AI detection startup Reality Defender, told 404 Media. Commonwealth Legal's listed address is the fourth floor of a one-story building that looks nothing like the image on its website, and both of its phone numbers are disconnected. No one responded to the contact form that I filled out. Smith realized that what's happening here isn't a copyright enforcement or copyright trolling attempt at all. Instead, it's a backlink SEO scam, in which a website owner tries to improve their Google ranking by asking, paying, or threatening someone to link to their website.

Tech4Gods.com is a gadget review website run by a man named Daniel Barczak, whose content is "complemented by AI writing assistants." In this case, the photo that Smith had "infringed" was downloaded from the royalty-free, free-to-use website Unsplash, which 404 Media also sometimes uses. The image was not taken by Barczak, and has nothing to do with him, he told me in an email: "I certainly don't own any images on the web," he said. The original photographer did not respond to a request for comment sent through Unsplash. Barczak told me that he had previously been buying backlinks to his website for SEO, but said he wasn't aware of who was doing this or why. "I have no idea; it certainly has nothing to do with me," he said. "However, recently, someone has been building spammy links against my site that I have been dealing with." "I have mastered on-page SEO, but unfortunately, I buy links due to a lack of time," he added. "In the past, I had a bad link builder. I wonder if it's him going mad at me for letting him go. It's hard to say; the web is massive, and everyone can link whenever they want." Link building is an SEO strategy devised to get outside websites to link to your website. He added that "bad links may damage [the site's] profile in Google's eyes." In this case, however, the "lawyers" were threatening a well-established tech blogger, and a link from Tedium would likely be treated as a positive by the search algorithm.

IT

PCIe 7.0 On Track For a 2025 Release (pcgamer.com) 29

An anonymous reader shares a PC Gamer report: PCI Express 7.0 is coming. But don't feel as though you need to start saving for a new motherboard anytime soon. The PCI-SIG has just released the 0.5 version, with the final version set for release in 2025. That means supporting devices are not likely to land until 2026, with 2027-28 likely to be the years we see a wider rollout. PCIe 7.0 will initially be far more relevant to the enterprise market, where bandwidth-hungry applications like AI and networking will benefit. Anyway, it's not like the PC market is saturated with PCIe 5.0 devices, and PCIe 6.0 is yet to make its way into our gaming PCs.

PCI Express bandwidth doubles every generation, so PCIe 7.0 will deliver a maximum data rate of up to 128 GT/s. That's a whopping 8x faster than PCIe 4.0 and 4x faster than PCIe 5.0. This means PCIe 7.0 is capable of delivering up to 512GB/s of bi-directional throughput via an x16 connection and 128GB/s for an x4 connection. More bandwidth will certainly be beneficial for CPU-to-chipset links, which means multiple integrated devices like 10G networking, WiFi 7, USB 4, and Thunderbolt 4 will all be able to run on a consumer motherboard without compromise. And just imagine what all that bandwidth could mean for PCIe 7.0 SSDs. In the years to come, a PCIe 7.0 x4 SSD could approach sequential transfer rates of up to 60GB/s. We'll need some serious advances in SSD controller and NAND flash technologies to see speeds in that range, but still, it's an attractive proposition.
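Those figures check out with simple back-of-the-envelope math: treating GT/s as roughly Gbit/s per lane and ignoring encoding and protocol overhead, a short sketch reproduces the article's numbers:

```python
# Rough PCIe throughput math (encoding and protocol overhead ignored).
def per_direction_gbytes(gt_per_s: float, lanes: int) -> float:
    """GT/s is ~Gbit/s per lane, so divide by 8 to get GB/s."""
    return gt_per_s * lanes / 8

for gen, rate in {"4.0": 16, "5.0": 32, "6.0": 64, "7.0": 128}.items():
    one_way = per_direction_gbytes(rate, 16)
    print(f"PCIe {gen} x16: ~{one_way:.0f} GB/s per direction, "
          f"~{2 * one_way:.0f} GB/s bidirectional")

# PCIe 7.0 x16: ~256 GB/s per direction, ~512 GB/s bidirectional.
# An x4 link gets a quarter of that (~64 GB/s each way, ~128 GB/s total),
# which is why a ~60 GB/s PCIe 7.0 SSD is plausible on paper.
```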
Further reading: PCIe 7.0 first official draft lands, doubling bandwidth yet again.
Youtube

YouTube Says OpenAI Training Sora With Its Videos Would Break Rules (yahoo.com) 19

The use of YouTube videos to train OpenAI's text-to-video generator would be an infraction of the platform's terms of service, YouTube Chief Executive Officer Neal Mohan said. Bloomberg: In his first public remarks on the topic, Mohan said he had no firsthand knowledge of whether OpenAI had, in fact, used YouTube videos to refine its artificial intelligence-powered video creation tool, called Sora. But if that were the case, it would be a "clear violation" of YouTube's terms of use, he said.

"From a creator's perspective, when a creator uploads their hard work to our platform, they have certain expectations," Mohan said Thursday. "One of those expectations is that the terms of service is going to be abided by. It does not allow for things like transcripts or video bits to be downloaded, and that is a clear violation of our terms of service. Those are the rules of the road in terms of content on our platform."

AI

Google Books Is Indexing AI-Generated Garbage (404media.co) 11

Google Books is indexing low-quality, AI-generated books that will turn up in search results, and could possibly impact the Google Ngram Viewer, an important tool used by researchers to track language use throughout history. From a report: I was able to find the AI-generated books with the same method we've previously used to find AI-generated Amazon product reviews, papers published in academic journals, and online articles. Searching Google Books for the term "As of my last knowledge update," which is associated with ChatGPT-generated answers, returns dozens of books that include that phrase. Some of the books are about ChatGPT, machine learning, AI, and other related subjects and include the phrase because they are discussing ChatGPT and its outputs. These books appear to be written by humans. However, most of the books in the first eight pages of results turned up by the search appear to be AI-generated and are not about AI.
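For the curious, the same phrase search can be reproduced programmatically via Google's public Books API; a minimal sketch (deciding which hits are genuinely AI-generated still takes the kind of manual review the article describes):

```python
import requests

# Search Google Books for the telltale ChatGPT phrase.
PHRASE = '"As of my last knowledge update"'
resp = requests.get(
    "https://www.googleapis.com/books/v1/volumes",
    params={"q": PHRASE, "maxResults": 40},  # 40 is the API's per-page cap
)
resp.raise_for_status()

for item in resp.json().get("items", []):
    info = item["volumeInfo"]
    print(info.get("publishedDate", "?"), "-", info.get("title"))
```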

For example, the 2024 book Bears, Bulls, and Wolves: Stock Trading for the Twenty-Year-Old by Tristin McIver bills itself as "a transformative journey into the world of stock trading" and "a comprehensive guide designed for beginners eager to unlock the mysteries of financial markets." In reality, it reads like ChatGPT-generated text, with surface-level, Wikipedia-style analysis of complex financial events like Facebook's initial public offering or the 2008 financial crisis summed up in a few short paragraphs. [...] Other books appear to be outdated to the point of being useless at the time they are published because they are generated with a version of ChatGPT with an old "knowledge update."

AI

Google Considers Charging For AI-Powered Search 46

An anonymous reader quotes a report from the Financial Times: Google is considering charging for new "premium" features powered by generative artificial intelligence, in what would be the biggest ever shake-up of its search business. The proposed revamp to its cash-cow search engine would mark the first time the company has put any of its core product behind a paywall, and shows it is still grappling with a technology that threatens its advertising business, almost a year and a half after the debut of ChatGPT.

Google is looking at options including adding certain AI-powered search features to its premium subscription services, which already offer access to its new Gemini AI assistant in Gmail and Docs, according to three people with knowledge of its plans. Engineers are developing the technology needed to deploy the service, but executives have not yet made a final decision on whether or when to launch it, one of the people said. Google's traditional search engine would remain free of charge, while ads would continue to appear alongside search results even for subscribers. But charging would represent the first time that Google -- which for many years offered free consumer services funded entirely by advertising -- has made people pay for enhancements to its core search product.

"For years, we've been reinventing Search to help people access information in the way that's most natural to them," said Google. "With our generative AI experiments in Search, we've already served billions of queries, and we're seeing positive Search query growth in all of our major markets. We're continuing to rapidly improve the product to serve new user needs."

It added: "We don't have anything to announce right now."
AI

ChatGPT Customers Can Now Use AI To Edit DALL-E Images 12

Paid ChatGPT users can now edit AI-generated images using text prompts from within ChatGPT. Axios reports: In a demo shared on X (formerly Twitter), OpenAI showed off the new capability, using it to add bows to a poodle's ears in an image created by DALL-E. DALL-E will also begin letting people choose the aspect ratio of the desired image and add styles, such as "motion blur" or "solarpunk."
Businesses

Stability AI Reportedly Ran Out of Cash To Pay Its Bills For Rented Cloud GPUs (theregister.com) 45

An anonymous reader writes: The massive GPU clusters needed to train Stability AI's popular text-to-image generation model Stable Diffusion are apparently also at least partially responsible for former CEO Emad Mostaque's downfall -- because he couldn't find a way to pay for them. An extensive exposé citing company documents and dozens of people familiar with the matter indicates that the British model builder's extreme infrastructure costs drained its coffers, leaving the biz with just $4 million in reserve by last October. Stability rented its infrastructure from Amazon Web Services, Google Cloud Platform, and GPU-centric cloud operator CoreWeave, at a reported cost of around $99 million a year. That's on top of the $54 million in wages and operating expenses required to keep the AI upstart afloat.

What's more, it appears that a sizable portion of the cloudy resources Stability AI paid for were being given away to anyone outside the startup interested in experimenting with Stability's models. One external researcher cited in the report estimated that a now-cancelled project was provided with at least $2.5 million worth of compute over the span of four months. Stability AI's infrastructure spending was not matched by revenue or fresh funding. The startup was projected to make just $11 million in sales for the 2023 calendar year. Its financials were apparently so bad that it allegedly underpaid its July 2023 bills to AWS by $1 million and had no intention of paying its August bill for $7 million. Google Cloud and CoreWeave were also not paid in full, with debts to the pair reaching $1.6 million as of October, it's reported.

It's not clear whether those bills were ultimately paid, but it's reported that the company -- once valued at a billion dollars -- weighed delaying tax payments to the UK government rather than skimping on its American payroll and risking legal penalties. The failure was pinned on Mostaque's inability to devise and execute a viable business plan. The company also failed to land deals with clients including Canva, NightCafe, Tome, and the Singaporean government, which contemplated a custom model, the report asserts. Stability's financial predicament spiraled, eroding trust among investors and making it difficult for the generative AI darling to raise additional capital, it is claimed. According to the report, Mostaque hoped to bring in a $95 million lifeline at the end of last year, but only managed to secure $50 million from Intel. Only $20 million of that sum was disbursed, a significant shortfall given that the processor titan has a vested interest in Stability, with the AI biz slated to be a key customer for a supercomputer powered by 4,000 of its Gaudi2 accelerators.
The report goes on to mention further fundraising challenges, issues retaining employees, and copyright infringement lawsuits challenging the company's future prospects. The full exposé can be read via Forbes (paywalled).
AI

George Carlin Estate Forces 'AI Carlin' Off the Internet For Good (arstechnica.com) 31

An anonymous reader quotes a report from Ars Technica: The George Carlin estate has settled its lawsuit with Dudesy, the podcast that purportedly used a "comedy AI" to produce an hour-long stand-up special in the style and voice of the late comedian. Dudesy's "George Carlin: Dead and Loving It" special, which was first uploaded in early January, gained hundreds of thousands of views and plenty of media attention for its presentation as a creation of an AI that had "listened to all of George Carlin's material... to imitate his voice, cadence and attitude as well as the subject matter I think would have interested him today." But even before the Carlin estate lawsuit was filed, there were numerous signs that the special was not actually written by an AI, as Ars laid out in detail in a feature report.

Shortly after the Carlin estate filed its lawsuit against Dudesy in late January, a representative for Dudesy host Will Sasso told The New York Times that the special had actually been "completely written by [Dudesy co-host] Chad Kultgen." Regardless of the special's actual authorship, though, the lawsuit also took Dudesy to task for "capitaliz[ing] on the name, reputation, and likeness of George Carlin in creating, promoting, and distributing the Dudesy Special and using generated images of Carlin, Carlin's voice, and images designed to evoke Carlin's presence on a stage." The resulting "association" between the real Carlin and this ersatz version put Dudesy in potential legal jeopardy, even if the contentious and unsettled copyright issues regarding AI training and authorship weren't in play.

Court documents note that shortly after the lawsuit was filed, Dudesy had already "taken reasonable steps" to remove the special and any mention of Carlin from all of Dudesy's online accounts. The settlement restrains the Dudesy podcast (and those associated with it) from re-uploading the special anywhere and from "using George Carlin's image, voice, or likeness" in any content posted anywhere on the Internet. Archived copies of the special are still available on the Internet if you know where to look. While the settlement notes that those reposts are also in "violat[ion] of this order," Dudesy will not be held liable for any reuploads made by unrelated third parties.

AI

Anthropic Researchers Wear Down AI Ethics With Repeated Questions (techcrunch.com) 42

How do you get an AI to answer a question it's not supposed to? There are many such "jailbreak" techniques, and Anthropic researchers just found a new one: a large language model (LLM) can be convinced to tell you how to build a bomb if you prime it with a few dozen less-harmful questions first. From a report: They call the approach "many-shot jailbreaking," and they have written a paper about it [PDF] and informed their peers in the AI community so it can be mitigated. The vulnerability is a new one, resulting from the increased "context window" of the latest generation of LLMs. This is the amount of data they can hold in what you might call short-term memory -- once only a few sentences, but now thousands of words and even entire books.

What Anthropic's researchers found was that models with large context windows tend to perform better on many tasks when there are lots of examples of that task within the prompt. So if there are lots of trivia questions in the prompt (or in a priming document, like a big list of trivia that the model has in context), the answers actually get better over time. A fact the model might have gotten wrong when asked as the first question, it may well get right when asked as the hundredth.
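Mechanically, a many-shot prompt is just a long run of question/answer examples followed by the target question. A benign sketch of the structure (this illustrates the idea, not Anthropic's actual evaluation harness):

```python
# Build a "many-shot" prompt: many Q/A pairs packed into one context,
# followed by the question we actually want answered.
def build_many_shot_prompt(examples: list[tuple[str, str]], question: str) -> str:
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\n\nQ: {question}\nA:"

trivia = [
    ("What is the capital of France?", "Paris"),
    ("How many planets orbit the Sun?", "Eight"),
    # ...the paper's effect scales with shot count, up to hundreds of shots...
]
prompt = build_many_shot_prompt(trivia, "What is the chemical symbol for gold?")
# Feed `prompt` to any large-context LLM; accuracy on the final question
# tends to improve as the number of in-context examples grows -- the same
# dynamic the jailbreak exploits with harmful examples instead of trivia.
```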

AI

US, EU To Use AI To Seek Alternate Chemicals for Making Chips (bnnbloomberg.ca) 17

The European Union and the US plan to enlist AI in the search for replacements for so-called forever chemicals that are prevalent in semiconductor manufacturing, Bloomberg News reported Wednesday, citing a draft statement. From the report: The pledge forms part of the conclusions to this week's joint US-EU Trade and Technology Council taking place in Leuven, Belgium. "We plan to continue working to identify research cooperation opportunities on alternatives to the use of per- and polyfluorinated substances (PFAS) in chips," the statement says. "For example, we plan to explore the use of AI capacities and digital twins to accelerate the discovery of suitable materials to replace PFAS in semiconductor manufacturing," it says.

PFAS, sometimes known as forever chemicals, have been at the center of concerns over pollution in both the US and Europe. They have a wide range of industrial applications but also show up in our bodies, in food and water supplies, and -- as their moniker suggests -- they don't break down for a very long time.

AI

Business Schools Are Going All In on AI (wsj.com) 39

Top business schools are integrating AI into their curricula to prepare students for the changing job market. Schools like the Wharton School, American University's Kogod School of Business, Columbia Business School, and Duke University's Fuqua School of Business are emphasizing AI skills across various courses, WSJ reported Wednesday. Professors are encouraging students to use AI as a tool for generating ideas, preparing for negotiations, and pressure-testing business concepts. However, they stress that human judgment remains crucial in directing AI and making sound decisions. An excerpt from the story: Before, engineers had an edge against business graduates because of their technical expertise, but now M.B.A.s can use AI to compete in that zone, said Robert Bray, who teaches operations management at Northwestern's Kellogg School of Management. He encourages his students to offload as much work as possible to AI, treating it like "a really proficient intern." Ben Morton, one of Bray's students, is bullish on AI but knows he needs to be able to work without it. He did some coding with ChatGPT for class and wondered: If ChatGPT were down for a week, could he still get work done?

Learning to code with the help of generative AI sped up his development. "I know so much more about programming than I did six months ago," said Morton, 27. "Everyone's capabilities are exponentially increasing." Several professors said they can teach more material with AI's assistance. One said that because AI could solve his lab assignments, he no longer needed much of the class time for those activities. With the extra hours he has students present to their peers on AI innovations. Campus is where students should think through how to use AI responsibly, said Bill Boulding, dean of Duke's Fuqua School. "How do we embrace it? That is the right way to approach this -- we can't stop this," he said. "It has eaten our world. It will eat everyone else's world."

AI

UK and US Sign Landmark Agreement On AI Safety (bbc.com) 6

The UK and US have signed a landmark deal to work together on testing advanced artificial intelligence (AI) and develop "robust" safety methods for AI tools and their underlying systems. "It is the first bilateral agreement of its kind," reports the BBC. From the report: UK tech minister Michelle Donelan said it is "the defining technology challenge of our generation." "We have always been clear that ensuring the safe development of AI is a shared global issue," she said. "Only by working together can we address the technology's risks head on and harness its enormous potential to help us all live easier and healthier lives."

The secretary of state for science, innovation and technology added that the agreement builds upon commitments made at the AI Safety Summit held in Bletchley Park in November 2023. The event, attended by AI bosses including OpenAI's Sam Altman, Google DeepMind's Demis Hassabis and tech billionaire Elon Musk, saw both the UK and US create AI Safety Institutes which aim to evaluate open and closed-source AI systems. [...]

Gina Raimondo, the US commerce secretary, said the agreement will give the governments a better understanding of AI systems, which will allow them to give better guidance. "It will accelerate both of our Institutes' work across the full spectrum of risks, whether to our national security or to our broader society," she said. "Our partnership makes clear that we aren't running away from these concerns - we're running at them."
