News Source: Slashdot (Hardware)
Fanless AirJet Cooler Experiment Boosts MacBook Air To Match MacBook Pro's Performance
Anton Shilov reports via Tom's Hardware: Engineers from Frore Systems have integrated the company's innovative solid-state AirJet cooling system, which provides impressive cooling capabilities despite having no moving parts, into an M2-based Apple MacBook Air. With proper cooling, the relatively inexpensive laptop matched the performance of a more expensive MacBook Pro based on the same processor. The lack of a fan is probably one of the main selling points of Apple's MacBook Air over its more performant siblings, but it also puts the laptop at a disadvantage: passive cooling has no moving parts (which is a plus), but it cannot keep Apple's M1 or M2 processor cool under sustained high loads, which is why a 13-inch MacBook Air powered by an M1 or M2 system-on-chip is slower than a 13-inch MacBook Pro based on the same SoC. However, making a MacBook Air run as fast as a 13-inch MacBook Pro is now possible. A video posted to YouTube by PC World shows how the AirJet system works, and a recently released demo further shows off the strength of the AirJet technology.
Amazon Updates Homegrown Chips, Even as It Grows Nvidia Ties
Amazon's cloud-computing unit announced updated versions of its in-house computer chips while also forging closer ties with Nvidia -- dual efforts designed to ensure it can get enough supplies of crucial data-center processors. From a report: New homegrown Graviton4 chips will offer as much as 30% better performance than their predecessors, Amazon Web Services said at its annual re:Invent conference in Las Vegas. Computers using the processors will begin coming online in the months ahead. The company also unveiled Trainium2, an updated version of a processor designed for artificial intelligence systems; it will begin powering new services next year, Amazon said. That chip provides an alternative to the so-called AI accelerators sold by Nvidia -- processors that have been vital to the build-out of artificial intelligence services. But Amazon also touted "an expansion of its partnership" with Nvidia, whose chief executive officer, Jensen Huang, joined AWS counterpart Adam Selipsky on stage. AWS will be the first big user of an updated version of Nvidia's Grace Hopper Superchip, and it will be one of the data-center companies hosting Nvidia's DGX Cloud service.
AI Chip Contenders Face Daunting 'Moats'
Barriers to entry in an industry dominated by TSMC and Nvidia are very high. From a report: In the drama that has just played out in Silicon Valley over the future of OpenAI, one side plot concerned an ambitious chip venture by its chief executive Sam Altman. Before he was ousted and reinstated at the helm of the company, Altman had sought to raise as much as $100bn from investors in the Middle East and SoftBank founder Masayoshi Son to build a rival to compete with sector giants Nvidia and Taiwan Semiconductor Manufacturing Co. This would be a vast undertaking, and one where $100bn may not go very far. Given that the US chip designer and the Taiwanese chipmaker are critical to all things generative AI, Altman is unlikely to be the only one with hopes of taking them on. But the barriers to entry -- moats in Silicon Valley parlance -- are formidable. Nvidia has about 95 per cent of the market for GPUs, or graphics processing units. These processors were originally designed for graphics but have become increasingly important in areas such as machine learning. TSMC has about 90 per cent of the world's advanced chip market. These businesses are lucrative: TSMC runs on gross margins of nearly 60 per cent, Nvidia's are 74 per cent, and TSMC makes $76bn in sales a year. The impressive figures make it seem as though there is plenty of room for more contenders. A global shortage of Nvidia's AI chips makes the prospect of vertical integration yet more attractive. As the number of GPUs required to develop and train advanced AI models grows rapidly, the key to profitability for AI companies lies in having stable access to GPUs. [...] It is one thing for companies to design customised chips. But Nvidia's profitability comes not from making chips cost-efficient but from providing a one-stop solution for a wide range of tasks and industries. For example, Nvidia's HGX H100 systems, which can go for about $300,000 each, are used to accelerate workloads for everything from financial applications to analytics. Coming up with a viable rival for the HGX H100 system, which is made up of 35,000 parts, would take much more than just designing a new chip. Nvidia has been developing GPUs for more than two decades. That head start, which includes hardware and related software libraries, is protected by thousands of patents. And even setting aside the challenges of designing a new AI chip, manufacturing is where the real challenge lies.
AWS Repurposes Fire TV Cubes Into $195 Thin Clients For Cloud Desktops
Simon Sharwood reports via The Register: Amazon Web Services has announced the WorkSpaces Thin Client -- a device dedicated to connecting to its WorkSpaces desktop-as-a-service offering and based on Amazon's own Fire TV Cube smart TV box. The $195 machine has the same hardware as the Fire TV Cube: the eight-core Arm-powered Amlogic POP1-G SoC, plus 2GB of LPDDR4 RAM, 10/100 Ethernet, and a single USB-A 2.0 port. Bluetooth is included to connect other peripherals. A second HDMI output can be added by acquiring an $85 hub that also offers four more USB ports. Like the Fire TV Cube, the Thin Client runs a modified cut of Android. But there the similarities end: AWS created custom firmware and ripped out anything remotely related to running a consumer device, replacing it with software designed solely to create a secure connection between the device and desktops running in the Amazonian cloud. Amazon Business -- the B2B version of Jeff Bezos's digital souk -- will ship the device to your door and charge it to your AWS bill, at least if you are in the USA. Europe will get the Thin Client in early 2024, and it'll eventually migrate elsewhere. AWS decided to base the box on the Fire TV Cube because, according to a corporate blog post, customers expressed a desire for cheaper and easier-to-maintain client devices. As AWS execs searched for a well-priced box, they considered the Fire TV Cube, found it fit the bill, and noted it was already being made at scale. Keeping things in-house made sense, too. And so we find ourselves with AWS taking on established thin client providers. The cloudy concern is also keen to have a crack at the thick wedge of the enterprise PC market: call centers, payment processing centers, and other environments with lots of users and high staff turnover due to factors like seasonal demand for workers.
Google Drive Misplaces Months' Worth of Customer Files
Google Drive users are reporting files mysteriously disappearing from the service, with some posters on the company's support forums claiming six or more months of work have unceremoniously vanished. From a report: The issue has been rumbling for a few days, with one user logging into Google Drive and finding things as they were in May 2023. According to the poster, almost everything saved since then has gone, and attempts at recovery failed. Others chimed in with similar experiences, and one claimed that six months of business data had gone AWOL. There is little information regarding what has happened; some users reported that synchronization had simply stopped working, so the cloud storage was out of date. Others could get some of their information back by fiddling with cached files, although the limited advice on offer for the affected was to leave things well alone until engineers come up with a solution. A message purporting to be from Google support also advised not to make changes to the root/data folder while engineers investigate the issue. Some users speculated that it might be related to accounts being spontaneously dropped. We've asked Google for its thoughts and will update should the search giant respond.
Could Airports Make Hydrogen Work As Fuel?
"On a typical day 1,300 planes take off and land at Heathrow Airport, and keeping that going requires around 20 million litres of jet fuel every day," reports the BBC. "That's the equivalent of filling up your car around 400,000 times. "But, when it comes to fuel, airports around the world are having to have a major rethink..."To be of any use to the aviation industry, hydrogen needs to be in its liquid form, which involves chilling it to minus 253C. Handling a liquid at that kind of temperature is immensely challenging. Given the chance, liquid hydrogen will "boil-off" and escape as a gas — potentially becoming a hazard. So tanks, pipes and hoses all have to be extra-insulated to keep the liquid cold. France's Air Liquide has a lot of experience in this area. For around 50 years it has been supplying cryogenic hydrogen to the Ariane rockets of the European Space Agency (ESA)... Over the past three years, in partnership with Airbus and France's biggest airport operator, Group ADP, Air Liquide has been investigating the potential of hydrogen in the aviation business. It is also part of the H2Fly consortium which this summer successfully flew an aircraft using liquid hydrogen. For Air Liquide, it was an opportunity to test systems for fuelling a hydrogen aircraft... However, installing the equipment needed to store and distribute hydrogen at airports will not be cheap. The consultancy Bain & Company estimates it could cost as much as a billion dollars per airport. One start-up, Universal Hydrogen, says it has a solution... The company has developed special tanks to hold liquid hydrogen (UH calls them modules), which can then be trucked to the airport. The modules are designed to slot straight into the aircraft, where they can be plugged into the propulsion system. No need for pipes, hoses and pumps. The modules are extremely well insulated and can keep the hydrogen in its liquid form for four days. Two modules would hold 360kg of hydrogen and would be able to fly an aircraft 500 miles, plus an extra 45 minutes of flight time in reserve.
US Energy Department Funds Next-Gen Semiconductor Projects to Improve Power Grids
America's long-standing Advanced Research Projects Agency (ARPA) developed the foundational technologies for the internet. This week its energy division announced $42 million for projects enabling a "more secure and reliable" energy grid, "allowing it to utilize more solar, wind, and other clean energy." Specifically, it funded 15 projects across 11 states to improve the reliability, resiliency, and flexibility of the grid "through the next-generation semiconductor technologies."

Streamlining the coordinated operation of electricity supply and demand will improve operational efficiency, prevent unforeseen outages, allow faster recovery, minimize the impacts of natural disasters and climate-change-fueled extreme weather events, and reduce grid operating costs and carbon intensity. Some highlights:

- The Georgia Institute of Technology will develop a novel semiconductor switching device to improve grid control, resilience, and reliability.
- Michigan's Great Lakes Crystal Technologies will develop a diamond semiconductor transistor to support the control infrastructure needed for an energy grid with more distributed generation sources and more variable loads.
- Lawrence Livermore National Laboratory will develop an optically controlled semiconductor transistor to enable future grid control systems to accommodate higher voltage and current than state-of-the-art devices.
- California's Opcondys will develop a light-controlled grid protection device to suppress destructive, sudden transient surges on the grid caused by lightning or electromagnetic pulses.
- Albuquerque's Sandia National Laboratories will develop a novel solid-state surge arrester protecting the grid from very fast electromagnetic pulses that threaten grid reliability and performance.

America's Secretary of Energy said the new investment "will support project teams across the country as they develop the innovative technologies we need to strengthen our grid security and bring reliable clean electricity to more families and businesses — all while combatting the climate crisis."
Continuing Commitment to Open Access, CERN Launches New Open Source Program Office
"The cornerstone of the open-source philosophy is that the recipients of technology should have access to all its building blocks..." writes the European Organization for Nuclear Research, "in order to study it, modify it and redistribute it to others." This includes mechanical designs, schematics for electronics, and software code.Ever since releasing the World Wide Web software under an open-source model in 1994, CERN has continuously been a pioneer in this field, supporting open-source hardware (with the CERN Open Hardware Licence), open access (with the Sponsoring Consortium for Open Access Publishing in Particle Physics — SCOAP3) and open data (with the Open Data Portal for the LHC experiments). The CERN Open Data portal is a testimony to CERN's policy of Open Access and Open Data. The portal allows the LHC experiments to share their data with a double focus: for the scientific community, including researchers outside the CERN experimental teams, as well as citizen scientists, and for the purposes of training and education through specially curated resources. The first papers based on data from the CERN Open Data portal have been published. Several CERN technologies are being developed with open access in mind. Invenio is an open-source library management package, now benefiting from international contributions from collaborating institutes, typically used for digital libraries. Indico is another open-source tool developed at CERN for conference and event management and used by more than 200 sites worldwide, including the United Nations. INSPIRE, the High Energy Physics information system, is another example of open source software developed by CERN together with DESY, Fermilab and SLAC. And on Wednesday the European Organization for Nuclear Research launches its new Open Source Program Office "to help you with all issues relating to the release of your software and hardware designs."Sharing your work with collaborators in research and industry has many advantages, but it may also present some questions and challenges... The OSPO will support you, whether you are a member of the personnel or a user, to find the best solution by giving you access to a set of best practices, tools and recommendations. With representatives from all sectors at CERN, it brings together a broad range of expertise on open source practices... As well as supporting the CERN internal community, the OSPO will engage with external partners to strengthen CERN's role as a promoter of open source. Open source is a key pillar of open science. By promoting open source practices, the OSPO thus seeks to address one of CERN's core ambitions: sharing our knowledge with the world. Ultimately, the aim is to increase the reach of open source projects from CERN to maximise their benefits for the scientific community, industry and society at large. For Wednesday's launch event "We will host distinguished open source experts and advocates from Nvidia, the World Health Organization and the Open Source Hardware Association to discuss the impact and future of open source." There will be a live webcast of the event.
America's Bowling Pins Face a Revolutionary New Technology: Strings
There's yet another technological revolution happening, reports the Los Angeles Times. Bowling alleys across America "are ditching traditional pinsetters — the machines that sweep away and reset pins — in favor of contraptions that employ string.

"Think of the pins as marionettes with nylon cords attached to their heads. Those that fall are lifted out of the way, as if by levitation, then lowered back into place after each frame... European bowling alleys have used string pinsetters for decades because they require less energy and maintenance. "All you need is someone at the front counter to run back when the strings tangle."

String pinsetters mean big savings, maybe salvation, for an industry losing customers to video games and other newfangled entertainment. That is why the U.S. Bowling Congress recently certified them for tournaments and league play. But there is delicate science at play here. Radius of gyration, coefficient of restitution and other obscure forces cause tethered pins to fly around differently than their free-fall counterparts. They don't even make the same noise. Faced with growing pushback, the bowling congress published new research this month claiming the disparity isn't nearly as great as people think. Using a giant mechanical arm, powered by hydraulics and air pressure, researchers rolled "thousands of test balls from every angle, with various speeds and spins, on string-equipped lanes," according to the article:

They found a configuration that resulted in 7.1% fewer strikes and about 10 fewer pins per game compared to bowling with traditional pinsetters... Officials subsequently enlisted 500 human bowlers for more testing and, this time, reported finding "no statistically significant difference." But hundreds of test participants commented that bowling on strings felt "off." The pins seemed less active, they said. There were occasional spares whereby one pin toppled another without making contact, simply by crossing strings. Nothing could be done about the muted sound. It's like hearing a drum roll — the ball charging down the lane — with no crashing cymbal at the end.

Still, one Northern California bowling alley spent $1 million to install the technology and believes it will save money — partly by cutting its electric bill in half. "We had a full-time mechanic and were spending up to $3,000 a month on parts." The article also recalls that once upon a time, bowling alleys reset their pins using pinboys, "actual humans — mostly teenagers... scrambling around behind the lanes, gathering and resetting by hand," before they were replaced by machines after World War II.
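For readers curious what "no statistically significant difference" means in practice, below is a minimal, hypothetical sketch of a two-proportion z-test on strike rates. The frame counts are invented for illustration; the article reports only the 7.1% relative difference and the 500-bowler panel:

```python
# Hypothetical sketch of the kind of significance test behind claims like
# "no statistically significant difference" in strike rates. All counts are
# invented for illustration, not taken from the Bowling Congress study.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(hits_a, n_a, hits_b, n_b):
    """Two-sided z-test for a difference between two strike rates."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)              # pooled strike rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a, p_b, z, p_value

# Hypothetical: 600 strikes in 2,000 frames on free-fall pins vs
# 557 strikes in 2,000 frames on strings (~7.1% fewer strikes).
p_a, p_b, z, p = two_proportion_z_test(600, 2000, 557, 2000)
print(f"free-fall {p_a:.1%} vs strings {p_b:.1%}: z={z:.2f}, p={p:.3f}")
```

With samples of this size, the test yields p > 0.05, showing how a 7.1% relative gap measured by a machine can still fall short of statistical significance in a human trial; that is one way the two findings can coexist.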
In Just 15 Months, America Made $37B In Clean Energy Investments In Fossil Fuel-Reliant Regions
America passed a climate bill in August of 2022 with incentives to build wind and solar energy in regions that historically relied on fossil fuels. And sure enough, since then "a disproportionate amount of wind, solar, battery and manufacturing investment is going to areas that used to host fossil fuel plants," reports the Washington Post. They cite a new analysis of investment trends from independent research firm Rhodium Group and MIT's Center for Energy and Environmental Policy Research:

In Carbon County, Wyo. — a county named for its coal deposits — a power company is building hundreds of wind turbines. In Mingo County, W.Va., where many small towns were once coal towns, the Adams Fork Energy plant will sit on a former coal mining site and produce low-carbon ammonia... While communities that once hosted coal, oil or gas infrastructure make up only 18.6 percent of the population, they received 36.8 percent of the clean energy investment in the year after the Inflation Reduction Act's passage. "We're talking about in total $100 billion in investment in these categories," said Trevor Houser, a partner at Rhodium Group. "So $37 billion investment in a year for energy communities — that's a lot of money...." Most significantly, 56.6 percent of investment in U.S. wind power in the past year has gone to energy communities, as well as 45.5 percent of the storage and battery investment... The analysis also found that significant amounts of clean energy investment were going to disadvantaged communities, defined as communities with environmental or climate burdens, and low-income communities. Many of the states benefiting are solidly Republican...

Josh Freed, senior vice president for climate and energy at the center-left think tank Third Way, is not sure whether the clean energy investments will make a difference for next year's election. But in the long term, he argues, rural Republican areas will become more dependent on clean energy — potentially shifting party alliances and shifting the position of the Republican Party itself. "It's going to change these fossil fuel communities," he said.
China's Secretive Sunway Pro CPU Quadruples Performance Over Its Predecessor
An anonymous reader shares a report: Earlier this year, the National Supercomputing Center in Wuxi (an entity blacklisted in the U.S.) launched its new supercomputer based on the enhanced China-designed Sunway SW26010 Pro processors with 384 cores. Sunway's SW26010 Pro CPU not only packs more cores than its non-Pro SW26010 predecessor, but also more than quadruples FP64 compute throughput thanks to microarchitectural and system architecture improvements, according to Chips and Cheese. However, while the manycore CPU looks good on paper, it has several performance bottlenecks. The first details of the manycore Sunway SW26010 Pro CPU and the supercomputers that use it emerged back in 2021. Now, at SC23, actual processors have been showcased and more details disclosed about their architecture and design, which represent a significant leap in performance. The new CPU is expected to enable China to build high-performance supercomputers based entirely on domestically developed processors. Each Sunway SW26010 Pro has a maximum FP64 throughput of 13.8 TFLOPS, which is massive. For comparison, AMD's 96-core EPYC 9654 has a peak FP64 performance of around 5.4 TFLOPS. The SW26010 Pro is an evolution of the original SW26010, so it maintains the foundational architecture of its predecessor but introduces several key enhancements. The new SW26010 Pro processor is based on an all-new proprietary 64-bit RISC architecture and packs six core groups (CGs) and a protocol processing unit (PPU). Each CG integrates 64 2-wide compute processing elements (CPEs) featuring a 512-bit vector engine as well as 256 KB of fast local store (scratchpad cache) for data and 16 KB for instructions; one management processing element (MPE), which is a superscalar out-of-order core with a vector engine, 32 KB/32 KB L1 instruction/data caches, and 256 KB of L2 cache; and a 128-bit DDR4-3200 memory interface.
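Those figures allow a back-of-the-envelope check of the quoted FP64 peak. The sketch below assumes a ~2.25 GHz CPE clock and one fused multiply-add per vector lane per cycle; neither assumption comes from the summary above:

```python
# Back-of-the-envelope check of the quoted 13.8 TFLOPS FP64 peak, using the
# core counts from the summary. The clock speed and FMA-per-lane figures
# are assumptions for illustration, not from the report.
core_groups = 6
cpes_per_group = 64
cpes = core_groups * cpes_per_group      # 384 compute processing elements

fp64_lanes = 512 // 64                   # 512-bit vector engine -> 8 FP64 lanes
flops_per_lane = 2                       # assumption: fused multiply-add = 2 FLOPs
clock_ghz = 2.25                         # assumption: CPE clock frequency

peak_tflops = cpes * fp64_lanes * flops_per_lane * clock_ghz / 1000
print(f"Estimated FP64 peak: {peak_tflops:.1f} TFLOPS")  # ~13.8
```

Under those assumptions the arithmetic lands on the quoted 13.8 TFLOPS, which suggests the figure is a straightforward vector-FMA peak across the 384 CPEs rather than a measured benchmark result.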
NYC Will Soon Be Home To 15 Robot-Run Vegetarian Restaurants From Chipotle's Founder
The founder of Chipotle is opening a new endeavor called Kernel, a vegetarian fast-casual restaurant that will be operated mostly by robots. Steve Ells is opening at least 15 locations of Kernel, the first by early 2024; the remainder are on track for NYC in the next two years, a spokesperson confirms. From a report: Kernel will serve vegetarian sandwiches, salads, and sides, made in a space that's around 1,000 square feet or smaller. Each location would employ three workers, the Wall Street Journal reported, "rather than the dozen that many fast-casual eateries have working." The menu pricing will be on par with Chipotle's, and, Ells says, the company will pay its human workers more and offer better benefits than other chains. As you'd expect from the former CEO of Chipotle -- which had at least five foodborne illness outbreaks between 2015 and 2018, costing the company $25 million per the Justice Department -- "the new system's design helps better ensure food safety," Ells told the Journal. It has taken $10 million of his personal funds to start Kernel, along with $36 million from investors. The company suggests customers may not want much interaction with other people -- and neither do CEOs. "We've taken a lot of human interaction out of the process and left just enough," he told the Journal. Yet in a 2022 study on the future of dining out conducted by the commerce site PYMNTS, 63 percent of the 2,500 people surveyed said they believe restaurants are becoming increasingly understaffed, and 39 percent said they are becoming less personal.
Giant Batteries Drain Economics of Gas Power Plants
Batteries used to store power produced by renewables are becoming cheap enough to make developers abandon scores of projects for gas-fired generation worldwide. Reuters reports: The long-term economics of gas-fired plants, used in Europe and some parts of the United States primarily to compensate for the intermittent nature of wind and solar power, are changing quickly, according to Reuters' interviews with more than a dozen power plant developers, project finance bankers, analysts and consultants. They said some battery operators are already supplying back-up power to grids at a price competitive with gas power plants, meaning gas will be used less. The shift challenges assumptions about long-term gas demand and could mean natural gas has a smaller role in the energy transition than posited by the biggest listed energy majors. In the first half of the year, 68 gas power plant projects were put on hold or cancelled globally, according to data provided exclusively to Reuters by U.S.-based non-profit Global Energy Monitor. [...] "In the early 1990s, we were running gas plants baseload, now they are shifting to probably 40% of the time and that's going to drop off to 11%-15% in the next eight to 10 years," Keith Clarke, chief executive at Carlton Power, told Reuters. Developers can no longer use financial modelling that assumes gas power plants are used constantly throughout their 20-year-plus lifetime, analysts said. Instead, modellers need to predict how much gas generation is needed during times of peak demand and to compensate for the intermittency of renewable sources that are hard to anticipate. The cost of lithium-ion batteries has more than halved between 2016 and 2022, to $151 per kilowatt-hour of battery storage, according to BloombergNEF. At the same time, renewable generation has reached record levels. Wind and solar powered 22% of the EU's electricity last year, almost doubling their share from 2016 and surpassing the share of gas generation for the first time, according to think tank Ember's European Electricity Review. "In the early years, capacity markets were dominated by fossil fuel power stations providing the flexible electricity supply," said Simon Virley, head of energy at KPMG. Now batteries, interconnectors and consumers shifting their electricity use are also providing that flexibility, Virley added.
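To see why utilization matters so much to that financial modelling, here is a minimal sketch of how a gas plant's fixed costs spread over fewer running hours. The capacity factors come from the Carlton Power quote above; the fixed-cost figure is an illustrative assumption:

```python
# Minimal sketch: spreading the same fixed costs over fewer running hours
# raises the cost of each MWh a gas plant sells. Capacity factors are from
# the article; the fixed-cost figure is an illustrative assumption.
annual_fixed_cost_per_kw = 100        # assumption: $/kW-year (capex recovery + fixed O&M)
hours_per_year = 8760

for capacity_factor in (0.40, 0.15, 0.11):   # ~40% today, 11%-15% in 8-10 years
    mwh_per_kw = capacity_factor * hours_per_year / 1000  # energy sold per kW of capacity
    fixed_cost_per_mwh = annual_fixed_cost_per_kw / mwh_per_kw
    print(f"capacity factor {capacity_factor:.0%}: "
          f"fixed costs ~ ${fixed_cost_per_mwh:,.0f}/MWh")
```

Under these assumptions, dropping from a 40% to an 11% capacity factor roughly quadruples the fixed cost burden per MWh, which is why plants built on always-running assumptions struggle to compete once batteries absorb the flexible-supply role.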
Apple Plans To Equip MacBooks With In-House Cellular Modems
According to Bloomberg's Mark Gurman, Apple plans to ditch Qualcomm and build its own custom modem that could launch around 2026. MacRumors reports: Writing in his latest Power On newsletter, Gurman says that Apple's custom technology aspirations include integrating an in-house modem into its system-on-a-chip (SoC), which would eventually see the launch of MacBooks with built-in cellular connectivity. Gurman says Apple will "probably need two or three additional years to get that chip inside cellular versions of the Apple Watch and iPad -- and the Mac, once the part is integrated into the company's system-on-a-chip." Apple has explored the possibility of developing MacBooks with cellular connectivity in the past. Indeed, the company reportedly considered launching a MacBook Air with 3G connectivity, but former CEO Steve Jobs said in 2008 that Apple decided against it, since it would take up too much room in the case. An integrated SoC would solve that problem. Gurman's latest newsletter also said some of Apple's other ongoing in-house chip projects include camera sensors, batteries, a combined Wi-Fi and Bluetooth chip that will eventually replace parts from Broadcom, Micro-LED displays for Apple devices, and a non-invasive glucose monitoring system.
US Autoworkers End Strike with Pay Raises and a Chance to Unionize EV Battery Plants
There have been predictions that a transition to electric vehicles would hurt autoworkers. But this week U.S. autoworkers ended their strike after winning "significant gains in pay and benefits," reports the Associated Press:

The United Auto Workers union overwhelmingly ratified new contracts with Ford and Stellantis that, along with a similar deal with General Motors, will raise pay across the industry, force automakers to absorb higher costs and help reshape the auto business as it shifts away from gasoline-fueled vehicles... The companies agreed to dramatically raise pay for top-scale assembly plant workers, with increases and cost-of-living adjustments that would translate into 33% wage gains. Top assembly plant workers are to receive immediate 11% raises and will earn roughly $42 an hour when the contracts expire in April of 2028. Under the agreements, the automakers also ended many of the multiple tiers of wages they had used to pay different workers. They also agreed in principle to bring new electric-vehicle battery plants into the national union contract. This provision will give the UAW an opportunity to unionize the EV battery plants, which will represent a rising share of industry jobs in the years ahead.

In October the union's president criticized what had been the original trajectory of the auto industry. "The plan was to draw down engine and transmission plants, and permanently replace them with low-wage battery jobs. We had a different plan. And our plan is winning." And this week the union's president said they had not only "raised wages dramatically for over a hundred thousand workers" but also improved their retirement security. "We took a major step towards ensuring a just transition to electric vehicles."

In Belvidere, Illinois, the union "won a commitment from Stellantis to reopen a shuttered factory and even add an EV battery plant," the Associated Press notes. "The new contract agreements were widely seen as a victory for the UAW," their article adds — and perhaps even for other autoworkers. After the UAW's president announced plans to try unionizing other plants, three foreign automakers in the U.S. — Honda, Toyota and Hyundai — "quickly responded to the UAW contract by raising wages for their factory workers."