News Source: Slashdot: Hardware
World's First 2D, Atom-Thin Non-Silicon Computer Developed
In a world first, a research team used 2D materials — only an atom thick — to develop a computer. The team (led by researchers at Pennsylvania State University) says it's a major step toward thinner, faster and more energy-efficient electronics. From the University's announcement: They created a complementary metal-oxide semiconductor (CMOS) computer — technology at the heart of nearly every modern electronic device — without relying on silicon. Instead, they used two different 2D materials to develop both types of transistors needed to control the electric current flow in CMOS computers: molybdenum disulfide for n-type transistors and tungsten diselenide for p-type transistors... "[A]s silicon devices shrink, their performance begins to degrade," [said lead researcher/engineering professor Saptarshi Das]. "Two-dimensional materials, by contrast, maintain their exceptional electronic properties at atomic thickness, offering a promising path forward...." The team used metal-organic chemical vapor deposition (MOCVD) — a fabrication process that involves vaporizing ingredients, forcing a chemical reaction and depositing the products onto a substrate — to grow large sheets of molybdenum disulfide and tungsten diselenide and fabricate over 1,000 of each type of transistor. By carefully tuning the device fabrication and post-processing steps, they were able to adjust the threshold voltages of both n- and p-type transistors, enabling the construction of fully functional CMOS logic circuits. "Our 2D CMOS computer operates at low supply voltages with minimal power consumption and can perform simple logic operations at frequencies up to 25 kilohertz," said first author Subir Ghosh, a doctoral student pursuing a degree in engineering science and mechanics under Das's mentorship. Ghosh noted that the operating frequency is low compared to conventional silicon CMOS circuits, but their computer — known as a one instruction set computer — can still perform simple logic operations.
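The "one instruction set computer" Ghosh mentions is an architecture class (OISC) in which a single instruction, most commonly subtract-and-branch-if-less-than-or-equal (SUBLEQ), suffices for general computation. The announcement doesn't describe the Penn State machine's actual instruction encoding, so the interpreter below is only a generic illustration of the concept, not their design:

```python
def subleq(mem, pc=0, max_steps=10_000):
    """Run a SUBLEQ machine: mem[b] -= mem[a]; jump to c if the result <= 0."""
    for _ in range(max_steps):
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        if mem[b] <= 0:
            if c < 0:
                return mem          # a negative jump target halts the machine
            pc = c
        else:
            pc += 3                 # fall through to the next triple
    raise RuntimeError("step limit exceeded")

# One instruction computes Y -= X. With X = 5 and Y = 2 the result (-3) is
# <= 0 and the jump target is negative, so the machine halts after one step.
prog = [9, 10, -1,            # instruction: a=9, b=10, c=-1
        0, 0, 0, 0, 0, 0,     # padding
        5, 2]                 # data: X at address 9, Y at address 10
print(subleq(prog)[10])       # -> -3
```

Because chains of SUBLEQ triples can synthesize copy, add, and conditional jumps, a machine with only this instruction can perform the kind of simple logic operations the article describes.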
Read more...
Chinese AI Companies Dodge US Chip Curbs by Flying Suitcases of Hard Drives Abroad
An anonymous reader quotes a report from the Wall Street Journal: Since 2022, the U.S. has tightened the noose around the sale of high-end AI chips and other technology to China over national-security concerns. Yet Chinese companies have made advances using workarounds. In some cases, Chinese AI developers have been able to substitute domestic chips for the American ones. Another workaround is to smuggle AI hardware into China through third countries. But people in the industry say that has become more difficult in recent months, in part because of U.S. pressure. That is pushing Chinese companies to try a further option: bringing their data outside China so they can use American AI chips in places such as Southeast Asia and the Middle East (source paywalled; alternative source). The maneuvers are testing the limits of U.S. restrictions. "This was something we were consistently concerned about," said Thea Kendler, who was in charge of export controls at the Commerce Department in the Biden administration, referring to Chinese companies remotely accessing advanced American AI chips. Layers of intermediaries typically separate the Chinese users of American AI chips from the U.S. companies -- led by Nvidia -- that make them. That makes it difficult to tell whether anyone is violating U.S. rules or guidance. [...] At the Chinese AI developer, the Malaysia game plans take months of preparation, say people involved in them. Engineers decided it would be fastest to fly physical hard drives with data into the country, since transferring huge volumes of data over the internet could take months. Before traveling, the company's engineers in China spent more than eight weeks optimizing the data sets and adjusting the AI training program, knowing it would be hard to make major tweaks once the data was out of the country. The Chinese engineers had turned to the same Malaysian data center last July, working through a Singaporean subsidiary.
As Nvidia and its vendors began to conduct stricter audits on the end users of AI chips, the Chinese company was asked by the Malaysian data center late last year to work through a Malaysian entity, which the companies thought might trigger less scrutiny. The Chinese company registered an entity in Kuala Lumpur, Malaysia's capital, listing three Malaysian citizens as directors and an offshore holding company as its parent, according to a corporate registry document. To avoid raising suspicions at Malaysian customs, the Chinese engineers packed their hard drives into four different suitcases. Last year, they traveled with the hard drives bundled into one piece of luggage. They returned to China recently with the results -- several hundred gigabytes of data, including model parameters that guide the AI system's output. The procedure, while cumbersome, avoided having to bring hardware such as chips or servers into China. That is getting more difficult because authorities in Southeast Asia are cracking down on transshipments through the region into China.
Read more...
There Aren't Enough Cables To Meet Growing Electricity Demand
High-voltage electricity cables have become a major constraint throttling the clean energy transition, with manufacturing facilities booked out for years as demand far exceeds supply capacity. The energy transition, trade barriers, and overdue grid upgrades have turbocharged demand for these highly sophisticated cables that connect wind farms, solar installations, and cross-border power networks. The International Energy Agency estimates that 80 million kilometers of grid infrastructure must be built between now and 2040 to meet clean energy targets -- equivalent to rebuilding the entire existing global grid that took a century to construct, but compressed into just 15 years. Each high-voltage cable requires custom engineering and months-long production in specialized 200-meter towers, with manufacturers reporting that 80-90% of major projects now use high-voltage direct current technology versus traditional alternating current systems.
Read more...
Anker Recalls More Than 1.1 Million Power Banks
Anker is recalling 1.15 million "PowerCore 10000" portable chargers due to fire and explosion risks linked to overheating lithium-ion batteries, with 19 incidents reported. "That includes two minor burn injuries and 11 reports of property damage amounting to over $60,700," reports CBS News. Consumers are urged to stop using the affected devices, check their serial numbers, and request a free replacement through Anker's website. From the report: According to a notice from the U.S. Consumer Product Safety Commission (CPSC), the lithium-ion battery inside certain "PowerCore 10000" made by Anker, a China-based electronics maker, can overheat. That can lead to the "melting of plastic components, smoke and fire hazards," Anker said in an announcement. The company added that it was conducting the recall "out of an abundance of caution to ensure the safety of our customers." The recalled "PowerCore 10000" power banks have a model number of A1263. They were sold online at Anker's website -- as well as Amazon, eBay and Newegg -- between June 2016 and December 2022 for about $27 across the U.S., according to the recall notice. Consumers can check their serial number at Anker's site to determine whether their power bank is included in the recall.
Read more...
macOS Tahoe Brings a New Disk Image Format
Apple's macOS 26 "Tahoe" introduces a new disk image format called ASIF, designed to dramatically improve performance over previous formats like UDRW and sparse bundles -- achieving near-native read/write speeds for virtual machines and general disk image use. The Eclectic Light Company reports: Apple provides few technical details, other than stating that the intrinsic structure of ASIF disk images doesn't depend on the host file system's capabilities, and their size on the host depends on the size of the data stored in the disk. In other words, they're a sparse file in APFS, and are flagged as such. [...] Conclusions: - Where possible, in macOS 26 Tahoe in particular, VMs should use ASIF disk images rather than RAW/UDRW. - Unless a sparse bundle is required (for example when it's hosted on a different file system such as that in a NAS), ASIF should be first choice for general purpose disk images in Tahoe. - It would be preferable for virtualizers to be able to call a proper API rather than a command tool. - Keep an eye on C-Command's DropDMG. I'm sure it will support ASIF disk images soon.
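The sparse-file behavior attributed to ASIF above -- the image's on-host size tracks the data actually stored, not its nominal capacity -- can be demonstrated on any sparse-capable file system. This sketch does not create an ASIF image; it only shows the general mechanism:

```python
# Create a file with a large apparent size but (on sparse-capable file
# systems such as APFS or ext4) almost no allocated blocks, then compare
# the two sizes. Purely illustrative of the sparse-file concept.
import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
    f.truncate(1 << 30)  # apparent size: 1 GiB, but nothing is written

st = os.stat(path)
apparent = st.st_size            # 1 GiB as reported to applications
allocated = st.st_blocks * 512   # actual on-disk usage, typically near zero
print(apparent, allocated)
os.remove(path)
```

The gap between `st_size` and `st_blocks * 512` is exactly the property the article describes: the host only pays for blocks that hold real data.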
Read more...
The Audacious Reboot of America's Nuclear Energy Program
The United States is mounting an ambitious effort to reclaim nuclear energy leadership after falling dangerously behind China, which now has 31 reactors under construction and plans 40 more within a decade. America produces less nuclear power than it did a decade ago and abandoned uranium mining and enrichment capabilities, leaving Russia controlling roughly half the world's enriched uranium market. This strategic vulnerability has triggered an unprecedented response: venture capitalists invested $2.5 billion in US next-generation nuclear technology since 2021, compared to near-zero in previous years, while the Trump administration issued executive orders to accelerate reactor deployment. The urgency stems from AI's city-sized power requirements and recognition that America cannot afford to lose what Interior Secretary Doug Burgum calls "the power race" with China. Companies like Standard Nuclear in Oak Ridge, Tennessee are good examples of this push, developing advanced reactor fuel despite employees working months without pay.
Read more...
Meta Inks a New Geothermal Energy Deal To Support AI
Meta has struck a new deal with geothermal startup XGS Energy to supply 150 megawatts of carbon-free electricity for its New Mexico data center. "Advances in AI require continued energy to support infrastructure development," Urvi Parekh, global head of energy at Meta, said in a press release. "With next-generation geothermal technologies like XGS ready for scale, geothermal can be a major player in supporting the advancement of technologies like AI as well as domestic data center development." The Verge reports: Geothermal plants generate electricity using Earth's heat, typically drawing up hot fluids or steam from natural reservoirs to turn turbines. That tactic is limited by natural geography, however, and the US gets around half a percent of its electricity from geothermal sources. Startups including XGS are trying to change that by making geothermal energy more accessible. Last year, Meta made a separate 150MW deal with Sage Geosystems to develop new geothermal power plants. Sage is developing technologies to harness energy from hot, dry rock formations by drilling and pumping water underground, essentially creating artificial reservoirs. Google has its own partnership with another startup called Fervo developing similar technology. XGS Energy is also seeking to exploit geothermal energy from dry rock resources. It tries to set itself apart by reusing water in a closed-loop process designed to prevent water from escaping into cracks in the rock. The water it uses to take advantage of underground heat circulates inside a steel casing. Conserving water is especially crucial in a drought-prone state like New Mexico, where Meta is expanding its Los Lunas data center. Meta declined to say how much it's spending on this deal with XGS Energy. The initiative will roll out in two phases with a goal of being operational by 2030.
Read more...
PCI Express 7.0 Specs Released
The PCI-SIG, which oversees the development of the PCIe specification, has officially released the final spec for PCI Express 7.0. "The PCIe 7.0 specification increases the per-lane data transfer rate to 128 GT/s in each direction, which is twice as fast as PCIe 6.0 supports and four times faster than PCIe 5.0," reports Tom's Hardware. "Such a significant performance increase enables devices with 16 PCIe 7.0 lanes to transfer up to 256 GB/s in each direction, not accounting for protocol overhead. The new version of the interface continues to use PAM4 signaling while maintaining the 1b/1b FLIT encoding method first introduced in PCIe 6.0." From the report: To achieve PCIe 7.0's 128 GT/s record data transfer rate, developers of PCIe 7.0 had to increase the physical signaling rate to 32 GHz or beyond. Keep in mind that both PCIe 5.0 and 6.0 use a physical signaling rate of 16 GHz to enable 32 GT/s using NRZ signaling and 64 GT/s using PAM4 signaling (which allows transfers of two bits per symbol). With PCIe 7.0, developers had to boost the physical frequency for the first time since 2017, which required tremendous work at various levels, as maintaining signal integrity at 32 GHz over long distances using copper wires is extremely challenging. Beyond raw throughput, the update also offers improved power efficiency and stronger support for longer or more complex electrical channels, particularly when using a cabling solution, to cater to the needs of next-generation data center-grade bandwidth-hungry applications, such as 800G Ethernet, Ultra Ethernet, and quantum computing, among others. [...] With the PCIe 7.0 standard officially released, members of the PCI-SIG, including AMD, Intel, and Nvidia, can begin finalizing the development of their platforms that support the PCIe specifications. PCI-SIG plans to start preliminary compliance tests in 2027, with official interoperability tests scheduled for 2028. 
Therefore, expect actual PCIe 7.0 devices and platforms on the market sometime in 2028-2029, if everything goes as planned. PCI-SIG also announced that pathfinding for PCIe 8.0 is underway, and members of the organization are actively exploring possibilities and defining capabilities of a standard that they are going to use in 2030 and beyond. "Interestingly, when asked whether PCIe 8.0 would double data transfer rate to 256 GT/s in each direction (and therefore enable bandwidth of 1 TB/s in both directions using 16 lanes), Al Yanes, president of PCI-SIG, said that while this is an intention, he would not like to make any definitive claims," reports Tom's Hardware. "Additionally, he stated that PCI-SIG is looking forward to enabling PCIe 8.0, which will offer higher performance over copper wires in addition to optical interconnects."
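The bandwidth figures quoted above follow directly from the per-lane rate: with the 1b/1b FLIT encoding retained from PCIe 6.0, each transfer carries one bit per lane, so raw one-direction bandwidth (before protocol overhead) is simply rate x lanes / 8. A quick check against the numbers in the report:

```python
def pcie_raw_bandwidth_gbs(gt_per_s, lanes):
    """Raw one-direction bandwidth in GB/s, ignoring FLIT/protocol overhead."""
    return gt_per_s * lanes / 8  # 1 bit per transfer per lane, 8 bits per byte

print(pcie_raw_bandwidth_gbs(128, 16))  # PCIe 7.0 x16 -> 256.0 GB/s
print(pcie_raw_bandwidth_gbs(64, 16))   # PCIe 6.0 x16 -> 128.0 GB/s
print(pcie_raw_bandwidth_gbs(32, 16))   # PCIe 5.0 x16 -> 64.0 GB/s
```

This reproduces the article's doubling at each generation: 256 GB/s per direction for a x16 PCIe 7.0 link, twice PCIe 6.0 and four times PCIe 5.0.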
Read more...
Engineer Creates First Custom Motherboard For 1990s PlayStation Console
An anonymous reader quotes a report from Ars Technica: Last week, electronics engineer Lorentio Brodesco announced the completion of a mock-up for nsOne, reportedly the first custom PlayStation 1 motherboard created outside of Sony in the console's 30-year history. The fully functional board accepts original PlayStation 1 chips and fits directly into the original console case, marking a milestone in reverse-engineering for the classic console released in 1994. Brodesco's motherboard isn't an emulator or FPGA-based re-creation -- it's a genuine circuit board designed to work with authentic PlayStation 1 components, including the CPU, GPU, SPU, RAM, oscillators, and voltage regulators. The board represents over a year of reverse-engineering work that began in March 2024 when Brodesco discovered incomplete documentation while repairing a PlayStation 1. "This isn't an emulator. It's not an FPGA. It's not a modern replica," Brodesco wrote in a Reddit post about the project. "It's a real motherboard, compatible with the original PS1 chips." It's a desirable project for some PS1 enthusiasts because a custom motherboard could allow owners of broken consoles to revive their systems by transplanting original chips from damaged boards onto new, functional ones. With original PS1 motherboards becoming increasingly prone to failure after three decades, replacement boards could extend the lifespan of these classic consoles without resorting to emulation. The nsOne project -- short for "Not Sony's One" -- uses a hybrid design based on the PU-23 series motherboards found in SCPH-900X PlayStation models but reintroduces the parallel port that Sony had removed from later revisions. Brodesco upgraded the original two-layer PCB design to a four-layer board while maintaining the same form factor. [...] 
As Brodesco noted on Kickstarter, his project's goal is to "create comprehensive documentation, design files, and production-ready blueprints for manufacturing fully functional motherboards." Beyond repairs, the documentation and design files Brodesco is creating would preserve the PlayStation 1's hardware architecture for future generations: "It's a tribute to the PS1, to retro hardware, and to the belief that one person really can build the impossible."
Read more...
Anker Recalls Over 1.1 Million Power Banks Due To Fire and Burn Risks
Anker has issued a recall for its PowerCore 10000 power bank (model A1263) due to a "potential issue with the lithium-ion battery" that could pose a fire safety risk. An anonymous reader adds: The company has received 19 reports of fires and explosions that have caused minor burn injuries and resulted in property damage totaling over $60,700, according to the US Consumer Product Safety Commission (USCPSC). The recall covers about 1,158,000 units that were sold online through Amazon, Newegg, and eBay between June 2016 and December 2022. The affected batteries can be identified by the Anker logo engraved on the side with the model number A1263 printed on the bottom edge. However, Anker is only recalling units sold in the US with qualifying serial numbers. To check if yours is included, you'll need to visit Anker's website.
Read more...
World Bank Lifts Ban on Funding Nuclear Energy in Boost To Industry
The World Bank is lifting its decades-long ban on financing nuclear energy, in a policy shift aimed at accelerating development of the low-emissions technology to meet surging electricity demand in the developing world. From a report: In an email to staff on Wednesday, Ajay Banga, the World Bank president, said it would "begin to re-enter the nuclear energy space" [non-paywalled source] in partnership with the International Atomic Energy Agency, the UN nuclear watchdog which works to prevent proliferation of nuclear weapons. "We will support efforts to extend the life of existing reactors in countries that already have them, and help support grid upgrades and related infrastructure," the email said. The shift follows advocacy from the pro-nuclear Trump administration and a change of government in Germany, which previously opposed financing atomic energy due to domestic political opposition to the technology. It is part of a wider strategy aimed at tackling an expected doubling of electricity demand in the developing world by 2035. Meeting this demand would require annual investment in generation, grids and storage to rise from $280 billion today to $630 billion, Banga said in the memo seen by the Financial Times.
Read more...
Talen Energy and Amazon Sign Nuclear Power Deal To Fuel Data Centers
Amazon Web Services has signed a long-term deal with Talen Energy to receive up to 1,920 megawatts of carbon-free electricity from the Susquehanna nuclear plant through 2042 to support AWS's AI and cloud operations. The partnership also includes plans to explore new Small Modular Reactors and expand nuclear capacity amid rising U.S. energy demand. Utility Dive reports: Under the PPA, Talen's existing 300-MW co-location arrangement with AWS will shift to a "front of the meter" framework that doesn't require Federal Energy Regulatory Commission approval, according to Houston-based Talen. The company expects the transition will occur next spring after transmission upgrades are finished. FERC in November rejected an amended interconnection service agreement that would have facilitated expanded power sales to a co-located AWS data center at the Susquehanna plant. The agency is considering potential rules for co-located loads in PJM. Talen expects to earn about $18 billion in revenue over the life of the contract at its full quantity, according to an investor presentation. The contract, which runs through 2042, calls for delivering 840 MW to 1,200 MW in 2029 and 1,680 MW to 1,920 MW in 2032. Talen will act as the retail power supplier to AWS, and PPL Electric Utilities will be responsible for transmission and delivery, the company said. Amazon on Monday said it plans to spend about $20 billion building data centers in Pennsylvania. "We are making the largest private sector investment in state history -- $20 billion -- to bring 1,250 high-skilled jobs and economic benefits to the state, while also collaborating with Talen Energy to help power our infrastructure with carbon-free energy," Kevin Miller, AWS vice president of global data centers, said.
Read more...
Scientists Built a Badminton-Playing Robot With AI-Powered Skills
An anonymous reader quotes a report from Ars Technica: The robot built by [Yuntao Ma and his team at ETH Zurich] was called ANYmal and resembled a miniature giraffe that plays badminton by holding a racket in its teeth. It was a quadruped platform developed by ANYbotics, an ETH Zurich spinoff company that mainly builds robots for the oil and gas industries. "It was an industry-grade robot," Ma said. The robot had elastic actuators in its legs, weighed roughly 50 kilograms, and was half a meter wide and under a meter long. On top of the robot, Ma's team fitted an arm with several degrees of freedom produced by another ETH Zurich spinoff called Duatic. This is what would hold and swing a badminton racket. Shuttlecock tracking and sensing the environment were done with a stereoscopic camera. "We've been working to integrate the hardware for five years," Ma said. Along with the hardware, his team was also working on the robot's brain. State-of-the-art robots usually use model-based control optimization, a time-consuming, sophisticated approach that relies on a mathematical model of the robot's dynamics and environment. "In recent years, though, the approach based on reinforcement learning algorithms became more popular," Ma told Ars. "Instead of building advanced models, we simulated the robot in a simulated world and let it learn to move on its own." In ANYmal's case, this simulated world was a badminton court where its digital alter ego was chasing after shuttlecocks with a racket. The training was divided into repeatable units, each of which required that the robot predict the shuttlecock's trajectory and hit it with a racket six times in a row. During this training, like a true sportsman, the robot also got to know its physical limits and to work around them. The idea behind training the control algorithms was to develop visuo-motor skills similar to human badminton players. 
The robot was supposed to move around the court, anticipating where the shuttlecock might go next, and to position its whole body, using all available degrees of freedom, for a swing that would mean a good return. This is why balancing perception and movement played such an important role. The training procedure included a perception model based on real camera data, which taught the robot to keep the shuttlecock in its field of view while accounting for the noise and resulting object-tracking errors. Once the training was done, the robot learned to position itself on the court. It figured out that the best strategy after a successful return is to move back to the center and toward the backline, which is something human players do. It even came up with a trick: standing on its hind legs to see the incoming shuttlecock better. It also learned fall avoidance and determined how much risk was reasonable to take given its limited speed. The robot did not attempt impossible plays that would create the potential for serious damage -- it was committed, but not suicidal. But when it finally played humans, it turned out ANYmal, as a badminton player, was amateur at best. The findings have been published in the journal Science Robotics. You can watch a video of the four-legged robot playing badminton on YouTube.
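The trajectory-prediction step described above matters because a shuttlecock's flight is dominated by aerodynamic drag: it falls far short of the drag-free parabola a naive predictor would assume. The toy integrator below illustrates the idea with generic textbook physics; the drag constant is an assumption for illustration, not a value from the ETH Zurich paper:

```python
# Estimate where a shuttlecock lands by Euler-integrating its flight under
# gravity plus quadratic drag -- a toy stand-in for the robot's predictor.
import math

G = 9.81   # gravitational acceleration, m/s^2
K = 0.5    # drag coefficient divided by mass, 1/m (assumed, illustrative)

def predict_landing(x, y, vx, vy, dt=1e-3):
    """Integrate until the shuttlecock returns to y = 0; return landing x."""
    while y > 0 or vy > 0:
        v = math.hypot(vx, vy)
        ax = -K * v * vx        # quadratic drag opposes the velocity
        ay = -G - K * v * vy
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return x

# Hit at 20 m/s and 45 degrees: the drag-free range would be ~40 m, but with
# drag the shuttlecock lands only a few meters out.
x_land = predict_landing(0.0, 0.0, 20 / math.sqrt(2), 20 / math.sqrt(2))
print(round(x_land, 2))
```

The large gap between the drag-free and dragged landing points is why the robot needs a learned or physical flight model rather than a simple ballistic extrapolation.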
Read more...
Tech Giants' Indirect Operational Emissions Rose 50% Since 2020
An anonymous reader quotes a report from Reuters: Indirect carbon emissions from the operations of four of the leading AI-focused tech companies rose on average to 150% of their 2020 levels by 2023 -- a 50% increase -- due to the demands of power-hungry data centers, a United Nations report (PDF) said on Thursday. The use of artificial intelligence by Amazon, Microsoft, Alphabet and Meta drove up their global indirect emissions because of the vast amounts of energy required to power data centers, the report by the International Telecommunication Union (ITU), the U.N. agency for digital technologies, said. Indirect emissions include those generated by purchased electricity, steam, heating and cooling consumed by a company. Amazon's operational carbon emissions grew the most, reaching 182% of its 2020 level in 2023, followed by Microsoft at 155%, Meta at 145% and Alphabet at 138%, according to the report. The ITU tracked the greenhouse gas emissions of 200 leading digital companies between 2020 and 2023. [...] As investment in AI increases, carbon emissions from the top-emitting AI systems are predicted to reach up to 102.6 million tons of carbon dioxide equivalent per year, the report stated. The data centers that are needed for AI development could also put pressure on existing energy infrastructure. "The rapid growth of artificial intelligence is driving a sharp rise in global electricity demand, with electricity use by data centers increasing four times faster than the overall rise in electricity consumption," the report found. It also highlighted that although a growing number of digital companies had set emissions targets, those ambitions had not yet fully translated into actual reductions of emissions. UPDATE: The headline has been revised to clarify that four leading AI-focused tech companies saw their operational emissions rise to 150% of their 2020 levels by 2023 -- a 50% increase, not a 150% one.
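The distinction drawn in the UPDATE, "rose to 150% of the 2020 level" versus "rose by 150%", also applies to the per-company figures, which are levels relative to a 2020 baseline of 100. A quick check of the increases they imply:

```python
# Per-company 2023 emissions as a percentage of each company's 2020 level,
# as quoted in the report. Subtracting the 100% baseline converts a level
# into the actual percentage increase.
levels_2023 = {"Amazon": 182, "Microsoft": 155, "Meta": 145, "Alphabet": 138}

for company, level in levels_2023.items():
    increase = level - 100
    print(f"{company}: at {level}% of its 2020 level, a {increase}% increase")
```

So Amazon's 182% figure means an 82% rise, not a near-tripling; the same reading makes the four-company average consistent with the corrected 50% headline figure.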
Read more...
FAA To Eliminate Floppy Disks Used In Air Traffic Control Systems
An anonymous reader quotes a report from Tom's Hardware: The head of the Federal Aviation Administration just outlined an ambitious goal to upgrade the U.S.'s air traffic control (ATC) system and bring it into the 21st century. According to NPR, most ATC towers and other facilities today feel like they're stuck in the 20th century, with controllers using paper strips and floppy disks to transfer data, while their computers run Windows 95. While this likely saved them from the disastrous CrowdStrike outage that had a massive global impact, their age is a major risk to the nation's critical infrastructure, with the FAA itself saying that the current state of its hardware is unsustainable. "The whole idea is to replace the system. No more floppy disks or paper strips," acting FAA administrator Chris Rocheleau told the House Appropriations Committee last Wednesday. Transportation Secretary Sean Duffy also said earlier this week, "This is the most important infrastructure project that we've had in this country for decades. Everyone agrees -- this is non-partisan. Everyone knows we have to do it." The aviation industry has formed a coalition pushing for ATC modernization called Modern Skies, which even ran an ad highlighting that ATC is still using floppy disks and several older technologies to keep our skies safe. [...] Currently, the White House hasn't said what this update will cost. The FAA has already put out a Request For Information to gather data from companies willing to take on the challenge of upgrading the entire system. It also announced several 'Industry Days' so companies can pitch their tech and ideas to the Transportation Department. Duffy said that the Transportation Department aims to complete the project within four years. However, industry experts say this timeline is unrealistic. No matter how long it takes, it's high time the FAA upgraded the U.S.'s ATC system after decades of neglect.
Read more...