
News Source Slashdot:Hardware

G7 Nations Promise Decarbonization, 870 Million Covid-19 Vaccines
Slashdot reader Charlotte Web writes: The "Group of Seven" (or G7) nations are some of the world's largest economies — the U.S., Canada, the U.K., France, Germany, Italy, and Japan. On Sunday they pledged $2 billion to help developing countries pivot away from fossil fuels and committed to an "overwhelmingly decarbonized" electricity sector by 2030. The New York Times calls these "major steps in what leaders hope will be a global transition to wind, solar and other energy that does not produce planet-warming carbon dioxide emissions." Politico's Ryan Heath argues "The language on a 'green revolution' is quite strong — there's plenty of detail missing, but it gives climate campaigners a lot to hit leaders with if they fail to deliver. And it's a big deal for the G-7 to agree 'to conserve or protect at least 30 percent of our land and oceans by 2030.'" Other reports from Politico's writers: "Boris Johnson admitted that the world's richest economies had not managed to secure a widely advertised 1 billion vaccine doses to send to developing countries. The final communique says the group will deliver 870 million doses over the next year." "The G-7 nations called for a 'timely, transparent, expert-led, and science-based WHO-convened' investigation into the origins of Covid-19, including in China. WHO's first crack at an investigation — released in March — called a lab leak 'extremely unlikely,' but China didn't grant access to key documents, and Secretary of State Antony Blinken called that investigation 'highly deficient' this morning. The U.S. government remains split between two origin theories."

Read more...

Are Transcontinental, Submarine Supergrids the Future of Energy?
Bloomberg Businessweek reports on "renewed interest in cables that can power consumers in one country with electricity generated hundreds, even thousands, of miles away in another" and possibly even transcontinental, submarine electricity superhighways: Coal, gas and even nuclear plants can be built close to the markets they serve, but the utility-scale solar and wind farms many believe essential to meet climate targets often can't. They need to be put wherever the wind and sun are strongest, which can be hundreds or thousands of miles from urban centers. Long cables can also connect peak afternoon solar power in one time zone to peak evening demand in another, reducing the price volatility caused by mismatches in supply and demand as well as the need for fossil-fueled backup capacity when the sun or wind fade. As countries phase out carbon to meet climate goals, they'll have to spend at least $14 trillion to strengthen grids by 2050, according to Bloomberg New Energy Finance. That's only a little shy of projected spending on new renewable generation capacity, and it's increasingly clear that high- and ultra-high-voltage direct current lines will play a part in the transition. The question is how international they will be... The article points out that in theory, Mongolia's Gobi desert "has potential to deliver 2.6 terawatts of wind and solar power — more than double the U.S.'s entire installed power generation capacity — to a group of Asian powerhouse economies that together produce well over a third of global carbon emissions..." The same goes for the U.S., where with the right infrastructure, New York could tap into sun- and wind-rich resources from the South and Midwest. An even more ambitious vision would access power from as far afield as Canada or Chile's Atacama Desert, which has the world's highest known levels of solar power potential per square meter. Jeremy Rifkin, a U.S. 
economist who has become the go-to figure for countries looking to remake their infrastructure for the digital and renewable future, sees potential for a single, 1.1 billion-person electricity market in the Americas that would be almost as big as China's. Rifkin has advised Germany and the EU, as well as China... Persuading countries to rely on each other to keep the lights on is tough, but the universal, yet intermittent nature of solar and wind energy also makes it inevitable, according to Rifkin. "This isn't the geopolitics of fossil fuels," owned by some and bought by others, he says. "It is biosphere politics, based on geography. Wind and sun force sharing...." If these supergrids don't get built, it will be because their time has both come and gone. Not only are they expensive, politically difficult, and unpopular — they have to cross a lot of backyards — their focus on mega-power installations seems outdated to some. Distributed microgeneration as close to home as your rooftop, battery storage, and transportable hydrogen all offer competing solutions to the delivery problems supergrids aim to solve.
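The time-zone argument above can be sketched with a toy model. All hourly profiles below are invented for illustration, not drawn from the article: solar peaking at local noon in a western region covers part of the evening demand peak in an eastern region, shrinking the residual that fossil-fueled backup must supply.

```python
# Toy illustration (invented numbers) of cross-time-zone interconnection:
# a western region's solar peak, arriving a few hours "late" relative to
# the eastern region's clock, overlaps the eastern evening demand peak.

# Hourly profiles for the eastern region, hours 12..23 (arbitrary units).
demand_east = [3, 3, 3, 4, 6, 9, 10, 8, 5, 4, 3, 3]
solar_east  = [9, 8, 6, 4, 2, 0, 0, 0, 0, 0, 0, 0]   # fades by evening
solar_west  = [4, 6, 8, 9, 8, 6, 4, 2, 0, 0, 0, 0]   # peaks 3 hours later

def backup_needed(demand, supplies):
    """Total energy fossil backup must provide, given renewable supplies."""
    return sum(max(0, d - sum(s)) for d, *s in zip(demand, *supplies))

isolated = backup_needed(demand_east, [solar_east])
linked = backup_needed(demand_east, [solar_east, solar_west])
# Importing the offset western peak cuts the backup the eastern grid needs.
```

Interconnection helps precisely because the two solar profiles are offset in time; the same logic underlies the Gobi-to-Asia and Midwest-to-New-York proposals above.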

Read more...

Potential Sites For UK's First Prototype Fusion Power Plant Identified
A total of 15 potential sites are in the running to host the UK's first prototype fusion power plant. The BBC reports: Fusion is seen as a potential source of almost limitless clean energy but is currently only used in experiments. An open call for sites was made last year and nominations closed at the end of March this year. Following checks for compliance with key entry criteria, the UK Atomic Energy Authority (UKAEA) has published a long list of possible locations. The sites, from north to south, are: Dounreay, East Airdrie, Poneil, Ardeer, Chapelcross, Moorside, Bay Fusion, Goole, West Burton, Ratcliffe on Soar, Pembroke, Severn Edge, Aberthaw, Bridgwater Bay, and Bradwell (Essex). The UKAEA said that acceptance of the sites did not indicate that they were "preferred or desired" or that it believed they were "in all cases, possible." It stressed it was simply that the procedural entry criteria had been met and assessment had now begun. It said a shortlisting process would take place in the autumn with a final site decision likely by the end of next year. UKAEA is hoping to have such a plant operating in the early 2040s, with an initial concept design ready by 2024.

Read more...

Google Used Reinforcement Learning To Design Next-Gen AI Accelerator Chips
Chip floorplanning is the engineering task of designing the physical layout of a computer chip. In a paper published in the journal Nature, Google researchers applied a deep reinforcement learning approach to chip floorplanning, creating a new technique that "automatically generates chip floorplans that are superior or comparable to those produced by humans in all key metrics, including power consumption, performance and chip area." VentureBeat reports: The Google team's solution is a reinforcement learning method capable of generalizing across chips, meaning that it can learn from experience to become both better and faster at placing new chips. Training AI-driven design systems that generalize across chips is challenging because it requires learning to optimize the placement of all possible chip netlists (graphs of circuit components like memory components and standard cells including logic gates) onto all possible canvases. [...] The researchers' system aims to place a "netlist" graph of logic gates, memory, and more onto a chip canvas, such that the design optimizes power, performance, and area (PPA) while adhering to constraints on placement density and routing congestion. The graphs range in size from millions to billions of nodes grouped in thousands of clusters, and typically, evaluating the target metrics takes from hours to over a day. Starting with an empty chip, the Google team's system places components sequentially until it completes the netlist. To guide the system in selecting which components to place first, components are sorted by descending size; placing larger components first reduces the chance that there will be no feasible placement for them later. Training the system required creating a dataset of 10,000 chip placements, where the input is the state associated with the given placement and the label is the reward for the placement (i.e., wirelength and congestion). 
The researchers built it by first picking five different chip netlists, to which an AI algorithm was applied to create 2,000 diverse placements for each netlist. The system took 48 hours to "pre-train" on an Nvidia Volta graphics card and 10 CPUs, each with 2GB of RAM. Fine-tuning initially took up to 6 hours, but applying the pre-trained system to a new netlist without fine-tuning generated placement in less than a second on a single GPU in later benchmarks. In one test, the Google researchers compared their system's recommendations with a manual baseline: the production design of a previous-generation TPU chip created by Google's TPU physical design team. Both the system and the human experts consistently generated viable placements that met timing and congestion requirements, but the AI system also outperformed or matched manual placements in area, power, and wirelength while taking far less time to meet design criteria.
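The "largest components first" ordering described above can be illustrated with a simple greedy sketch. This is a hypothetical toy of ours, not Google's system: the real method scores placements with a learned reinforcement-learning policy rather than taking the first free slot.

```python
# Toy sketch of sequential placement with a largest-first ordering.
# Components are sorted by descending area and dropped one at a time
# onto a grid canvas at the first free position.

def place_components(components, width, height):
    """components: list of (name, w, h). Returns {name: (x, y)} or None."""
    occupied = [[False] * width for _ in range(height)]

    def fits(x, y, w, h):
        if x + w > width or y + h > height:
            return False
        return all(not occupied[y + dy][x + dx]
                   for dy in range(h) for dx in range(w))

    def mark(x, y, w, h):
        for dy in range(h):
            for dx in range(w):
                occupied[y + dy][x + dx] = True

    placement = {}
    # Larger components first: reduces the chance that a big block
    # finds no feasible slot late in the sequence.
    for name, w, h in sorted(components, key=lambda c: -(c[1] * c[2])):
        spot = next(((x, y) for y in range(height) for x in range(width)
                     if fits(x, y, w, h)), None)
        if spot is None:
            return None  # no feasible placement remains
        placement[name] = spot
        mark(*spot, w, h)
    return placement

# Hypothetical three-block netlist on a 6x6 canvas.
demo = place_components([("sram", 3, 3), ("alu", 2, 2), ("io", 1, 2)], 6, 6)
```

Reversing the sort order makes failures far more likely on tight canvases, which is exactly the motivation for the descending-size heuristic the article describes.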

Read more...

Ultra-High-Density HDDs Made With Graphene Store Ten Times More Data
Graphene can be used for ultra-high density hard disk drives (HDD), with up to a tenfold jump compared to current technologies, researchers at the Cambridge Graphene Center have shown. Phys.Org reports: The study, published in Nature Communications, was carried out in collaboration with the University of Exeter and teams in India, Switzerland, Singapore, and the US. [...] HDDs contain two major components: platters and a head. Data are written on the platters using a magnetic head, which moves rapidly above them as they spin. The space between head and platter is continually decreasing to enable higher densities. Currently, carbon-based overcoats (COCs) -- layers used to protect platters from mechanical damage and corrosion -- occupy a significant part of this spacing. The data density of HDDs has quadrupled since 1990, and the COC thickness has reduced from 12.5nm to around 3nm, which corresponds to one terabyte per square inch. Now, graphene has enabled researchers to multiply this by ten. The Cambridge researchers have replaced commercial COCs with one to four layers of graphene, and tested friction, wear, corrosion, thermal stability, and lubricant compatibility. Beyond its unbeatable thinness, graphene fulfills all the ideal properties of an HDD overcoat in terms of corrosion protection, low friction, wear resistance, hardness, lubricant compatibility, and surface smoothness. Graphene enables a two-fold reduction in friction and provides better corrosion and wear protection than state-of-the-art solutions. In fact, a single graphene layer reduces corrosion by 2.5 times. Cambridge scientists transferred graphene onto hard disks made of iron-platinum as the magnetic recording layer, and tested Heat-Assisted Magnetic Recording (HAMR) -- a new technology that enables an increase in storage density by heating the recording layer to high temperatures. Current COCs do not perform at these high temperatures, but graphene does. 
Thus, graphene, coupled with HAMR, can outperform current HDDs, providing an unprecedented data density, higher than 10 terabytes per square inch.

Read more...

McDonald's Starts Testing Automated Drive-Thru Ordering
New submitter DaveV1.0 shares a report from CNBC: At 10 McDonald's locations in Chicago, workers aren't taking down customers' drive-thru orders for McNuggets and french fries -- a computer is, CEO Chris Kempczinski said Wednesday. Kempczinski said the restaurants using the voice-ordering technology are seeing about 85% order accuracy. Only about a fifth of orders need to be taken by a human at those locations, he said, speaking at AllianceBernstein's Strategic Decisions conference. In 2019, under former CEO Steve Easterbrook, McDonald's went on a spending spree, snapping up restaurant tech. One of those acquisitions was Apprente, which uses artificial intelligence software to take drive-thru orders. Kempczinski said the technology will likely take more than one or two years to implement. "Now there's a big leap from going to 10 restaurants in Chicago to 14,000 restaurants across the U.S., with an infinite number of promo permutations, menu permutations, dialect permutations, weather — and on and on and on," he said. Another challenge has been training restaurant workers to stop themselves from jumping in to help.

Read more...

US PC Shipments Soar 73% In the First Quarter As Apple Falls From Top Spot
An anonymous reader quotes a report from TechCrunch: With increased demand from the pandemic, Canalys reports that U.S. PC shipments were up 73% over the same period last year. That added up to a total of 34 million units sold. While Apple had a good quarter with sales up 36%, it was surpassed by HP, which sold 11 million units in total with annual growth up an astonishing 122.6%. As Canalys pointed out, the first quarter tends to be a weaker one for Apple hardware following the holiday season, but it's a big move for HP nonetheless. Other companies boasting big growth numbers include Samsung at 116% and Lenovo at 92.8%. Dell was up 29.2%, fairly modest compared with the rest of the group. Overall, though, it was a stunning quarter as units flew off the shelves. Canalys Research Analyst Brian Lynch says some of this can be attributed to the increased demand from 2020 as people moved to work and school from home and needed new machines to get their work done, but regardless the growth was unrivaled historically. "Q1 2021 still rates as one of the best first quarters the industry has ever seen. Vendors have prioritized fulfilling U.S. backlogs before supply issues are addressed in other parts of the world," Lynch said in a statement. Perhaps not surprisingly, low-cost Chromebooks were the most popular item as people looking to refresh their devices, especially for education purposes, turned to the lower end of the PC market, which likely had a negative impact on higher-priced Apple products, as well as contributing to its drop from the top spot. According to Canalys, Chromebook sales were up a whopping 548% with Samsung leading that growth with an astonishing 1,963% growth rate. "Asus, HP and Lenovo all reported Chromebook sales rates up over 900%," adds TechCrunch.

Read more...

Reducing Poverty Can Actually Lower Energy Demand, Finds Research
An anonymous reader shares a report from The Conversation: As people around the world escape poverty, you might expect their energy use to increase. But my research in Nepal, Vietnam, and Zambia found the opposite: lower levels of deprivation were linked to lower levels of energy demand. What is behind this counterintuitive finding? [...] We found that households that do have access to clean fuels, safe water, basic education and adequate food -- that is, those not in extreme poverty -- can use as little as half the energy of the national average in their country. This is important, as it goes directly against the argument that more resources and energy will be needed for people in the global south to escape extreme poverty. The biggest factor is the switch from traditional cooking fuels, like firewood or charcoal, to more efficient (and less polluting) electricity and gas. In Zambia, Nepal, and Vietnam, modern energy resources are extremely unfairly distributed -- more so than income, general spending, or even spending on leisure. As a consequence, poorer households use more dirty energy than richer households, with ensuing health and gender impacts. Cooking with inefficient fuels consumes a lot of energy, and even more when water needs to be boiled before drinking. But do households with higher incomes and more devices have a better chance of escaping poverty? Some do, but higher incomes and mobile phones are neither prerequisites nor guarantees of having basic needs satisfied. Richer households without access to electricity or sanitation are not spared from having malnourished children or health problems from using charcoal. Ironically, for most households, it is easier to obtain a mobile phone than a clean, nonpolluting fuel for cooking. Therefore, measuring progress via household income leads to an incomplete understanding of poverty and its deprivations. So what? Are we arguing against the global south using more energy for development? 
No: instead of focusing on how much energy is used, we are pointing to the importance of collective services (like electricity, indoor sanitation and public transport) for alleviating the multiple deprivations of poverty. In addressing these issues we cannot shy away from asking why so many countries in the global south have such a low capacity to invest in those services. It has to do with the fact that poverty does not just happen: it is created via interlinked systems of wealth extraction such as structural adjustment, or high costs of servicing national debts. Given that climate change is caused by the energy use of a rich minority in the global north but the consequences are borne by the majority in the poorer global south, human development is not only a matter of economic justice but also climate justice. Investing in vital collective services underpins both.

Read more...

Sidewalk Robots are Now Delivering Food in Miami
18-inch tall robots on four wheels zipping across city sidewalks "stopped people in their tracks as they whipped out their camera phones," reports the Florida Sun-Sentinel. "The bots' mission: To deliver restaurant meals cheaply and efficiently, another leap in the way food comes to our doors and our tables." The semiautonomous vehicles were engineered by Kiwibot, a company started in 2017 to game-change the food delivery landscape... In May, Kiwibot sent a 10-robot fleet to Miami as part of a nationwide pilot program funded by the Knight Foundation. The program aims to understand how residents and consumers will interact with this type of technology, especially as the trend of robot servers grows around the country. And though Broward County is of interest to Kiwibot, Miami-Dade County officials jumped on board, agreeing to launch robots around neighborhoods such as Brickell, downtown Miami and several others, in the next couple of weeks... "Our program is completely focused on the residents of Miami-Dade County and the way they interact with this new technology. Whether it's interacting directly or just sharing the space with the delivery bots," said Carlos Cruz-Casas, with the county's Department of Transportation... Remote supervisors use real-time GPS tracking to monitor the robots. Four cameras are placed on the front, back and sides of the vehicle, which the supervisors can view on a computer screen. [A spokesperson says later in the article "there is always a remote and in-field team looking for the robot."] If crossing the street is necessary, the robot will need a person nearby to ensure there is no harm to cars or pedestrians. The plan is to allow deliveries up to a mile and a half away so robots can make it to their destinations in 30 minutes or less. Earlier Kiwi tested its sidewalk-travelling robots around the University of California at Berkeley, where at least one of its robots burst into flames. 
But the Sun-Sentinel reports that "In about six months, at least 16 restaurants came on board making nearly 70,000 deliveries... "Kiwibot now offers their robotic delivery services in other markets such as Los Angeles and Santa Monica by working with the Shopify app to connect businesses that want to employ their robots." But while delivery fees are normally $3, this new Knight Foundation grant "is making it possible for Miami-Dade County restaurants to sign on for free." A video shows the reactions the sidewalk robots are getting from pedestrians on a sidewalk, a dog on a leash, and at least one potential restaurant customer looking forward to no longer having to tip human food-delivery workers.

Read more...

RISC Vs. CISC Is the Wrong Lens For Comparing Modern x86, ARM CPUs
Long-time Slashdot reader Dputiger writes: Go looking for the difference between x86 and ARM CPUs, and you'll run into the idea of CISC versus RISC immediately. But 40 years after the publication of David Patterson and David Ditzel's 1981 paper, "The Case for a Reduced Instruction Set Computer," CISC and RISC are poor top-level categories for comparing these two CPU families. ExtremeTech writes: The problem with using RISC versus CISC as a lens for comparing modern x86 versus ARM CPUs is that it takes three specific attributes that matter to the x86 versus ARM comparison — process node, microarchitecture, and ISA — crushes them down to one, and then declares ARM superior on the basis of ISA alone. The ISA-centric argument acknowledges that manufacturing geometry and microarchitecture are important and were historically responsible for x86's dominance of the PC, server, and HPC market. This view holds that when the advantages of manufacturing prowess and install base are controlled for or nullified, RISC — and by extension, ARM CPUs — will typically prove superior to x86 CPUs. The implementation-centric argument acknowledges that ISA can and does matter, but that historically, microarchitecture and process geometry have mattered more. Intel is still recovering from some of the worst delays in the company's history. AMD is still working to improve Ryzen, especially in mobile. Historically, both x86 manufacturers have demonstrated an ability to compete effectively against RISC CPU manufacturers. Given the reality of CPU design cycles, it's going to be a few years before we really have an answer as to which argument is superior. One difference between the semiconductor market of today and the market of 20 years ago is that TSMC is a much stronger foundry competitor than most of the RISC manufacturers Intel faced in the late 1990s and early 2000s. Intel's 7nm team has got to be under tremendous pressure to deliver on that node. 
Nothing in this story should be read to imply that an ARM CPU can't be faster and more efficient than an x86 CPU.

Read more...

How Reliable Are Modern CPUs?
Slashdot reader ochinko (user #19,311) shares The Register's report about a recent presentation by Google engineer Peter Hochschild. His team discovered machines with higher-than-expected hardware errors that "showed themselves sporadically, long after installation, and on specific, individual CPU cores rather than entire chips or a family of parts." The Google researchers examining these silent corrupt execution errors (CEEs) concluded "mercurial cores" were to blame: CPUs that miscalculated occasionally, under different circumstances, in a way that defied prediction... The errors were not the result of chip architecture design missteps, and they're not detected during manufacturing tests. Rather, Google engineers theorize, the errors have arisen because we've pushed semiconductor manufacturing to a point where failures have become more frequent and we lack the tools to identify them in advance. In a paper titled "Cores that don't count" [PDF], Hochschild and colleagues Paul Turner, Jeffrey Mogul, Rama Govindaraju, Parthasarathy Ranganathan, David Culler, and Amin Vahdat cite several plausible reasons why the unreliability of computer cores is only now receiving attention, including larger server fleets that make rare problems more visible, increased attention to overall reliability, and software development improvements that reduce the rate of software bugs. "But we believe there is a more fundamental cause: ever-smaller feature sizes that push closer to the limits of CMOS scaling, coupled with ever-increasing complexity in architectural design," the researchers state, noting that existing verification methods are ill-suited for spotting flaws that occur sporadically or as a result of physical deterioration after deployment. Facebook has noticed the errors, too. 
In February, the social ad biz published a related paper, "Silent Data Corruption at Scale," that states, "Silent data corruptions are becoming a more common phenomenon in data centers than previously observed...." The risks posed by misbehaving cores include not only crashes, which the existing fail-stop model for error handling can accommodate, but also incorrect calculations and data loss, which may go unnoticed and pose a particular risk at scale. Hochschild recounted an instance where Google's errant hardware conducted what might be described as an auto-erratic ransomware attack. "One of our mercurial cores corrupted encryption," he explained. "It did it in such a way that only it could decrypt what it had wrongly encrypted." How common is the problem? The Register notes that Google's researchers shared a ballpark figure "on the order of a few mercurial cores per several thousand machines," similar to the rate reported by Facebook.
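One standard defense against silent corrupt execution, implied by the fail-stop discussion above, is redundant execution: run the same work twice and compare. The sketch below is an illustrative assumption of ours, not a technique from either paper; in production the two runs would ideally land on different physical cores.

```python
# Minimal sketch of a redundancy check for silent corruption: run a
# deterministic computation twice and compare checksums of the results.
# The function name and checksum scheme are illustrative assumptions.

import zlib

def checksummed(fn, *args):
    """Run fn twice; raise if the result checksums disagree."""
    first = fn(*args)
    second = fn(*args)
    a = zlib.crc32(repr(first).encode())
    b = zlib.crc32(repr(second).encode())
    if a != b:
        raise RuntimeError("silent corruption suspected: results differ")
    return first

# On a healthy core, deterministic work passes the redundancy check.
total = checksummed(sum, range(1_000_000))
```

The paper's point is that this kind of check is expensive at fleet scale, which is why sporadic "mercurial core" errors so often go unnoticed.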

Read more...

Apple Working On iPad Pro With Wireless Charging, New iPad Mini
An anonymous reader quotes a report from Bloomberg: Apple is working on a new iPad Pro with wireless charging and the first iPad mini redesign in six years, seeking to continue momentum for a category that saw rejuvenated sales during the pandemic. The Cupertino, California-based company is planning to release the new iPad Pro in 2022 and the iPad mini later this year [...]. The main design change in testing for the iPad Pro is a switch to a glass back from the current aluminum enclosure. The updated iPad mini is planned to have narrower screen borders while the removal of its home button has also been tested. For the new Pro model, the switch to a glass back is being tested, in part, to enable wireless charging for the first time. Making the change in material would bring iPads closer to iPhones, which Apple has transitioned from aluminum to glass backs in recent years. Apple's development work on the new iPad Pro is still early, and the company's plans could change or be canceled before next year's launch [...]. Wireless charging replaces the usual power cable with an inductive mat, which makes it easier for users to top up their device's battery. It has grown into a common feature in smartphones but is a rarity among tablets. Apple added wireless charging to iPhones in 2017 and last year updated it with a magnet-based MagSafe system that ensured more consistent charging speeds. The company is testing a similar MagSafe system for the iPad Pro. Wireless charging will likely be slower than directly plugging in a charger to the iPad's Thunderbolt port, which will remain as part of the next models. As part of its development of the next iPad Pro, Apple is also trying out technology called reverse wireless charging. That would allow users to charge their iPhone or other gadgets by laying them on the back of the tablet. Apple had previously been working on making this possible for the iPhone to charge AirPods and Apple Watches. 
In addition to the next-generation iPad Pro and iPad mini, Apple is also working on a thinner version of its entry-level iPad geared toward students. That product is planned to be released as early as the end of this year, about the same time as the new iPad mini. Apple is still reportedly working on a technology similar to its failed AirPower, a charging mat designed to simultaneously charge an iPhone, Apple Watch and AirPods. People familiar with the matter said it's also internally investigating alternative wireless charging methods that can work over greater distances than an inductive connection.

Read more...

7-11 Is Opening 500 EV Charging Stations By the End of 2022
7-11 announced Tuesday that it will be placing 500 EV chargers at 250 stores in the U.S. and Canada by the end of 2022. CNET reports: OK, but if they can't keep the Slurpee machine up and running, what kind of charging can users expect? Well, we don't know, and 7-11 isn't saying, but we do know that they will be DC fast-chargers, and it looks like they'll be supplied by ChargePoint, so we'd bet on anything from 60-ish kilowatts to 125 kilowatts. These new chargers will join 7-11's small network of 22 charging stations at 14 stores in four states, and the whole thing is a part of 7-11's ongoing work to reduce its carbon footprint.
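Back-of-the-envelope arithmetic on the quoted 60-to-125 kilowatt range helps ground it; the battery size and 90% charging efficiency below are our assumptions, not figures from 7-11 or ChargePoint.

```python
# Rough charge-time estimate (assumed numbers) for the DC fast-charger
# power levels quoted above: minutes to deliver a fixed amount of energy.

def minutes_to_charge(energy_kwh, power_kw, efficiency=0.9):
    """Minutes to deliver energy_kwh at power_kw, allowing charging losses."""
    return energy_kwh / (power_kw * efficiency) * 60

slow = minutes_to_charge(45, 60)    # ~45 kWh added on a 60 kW charger
fast = minutes_to_charge(45, 125)   # same energy at 125 kW
```

At roughly 50 versus 24 minutes for the same energy, the difference matters for a convenience-store stop, which is presumably why the upper end of that range is attractive.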

Read more...

Samsung Will Shut Down the v1 SmartThings Hub This Month
Samsung is killing the first-generation SmartThings Hub at the end of the month, kicking off phase two of its plan to shut down the SmartThings ecosystem and force users over to in-house Samsung infrastructure. "Phase one was in October, when Samsung killed the Classic SmartThings app and replaced it with a byzantine disaster of an app that it developed in house," writes Ars Technica's Ron Amadeo. "Phase three will see the shutdown of the SmartThings Groovy IDE, an excellent feature that lets members of the community develop SmartThings device handlers and complicated automation apps." From the report: The SmartThings Hub is basically a Wi-Fi access point -- but for your smart home stuff instead of your phones and laptops. Instead of Wi-Fi, SmartThings is the access point for a Zigbee and Z-Wave network, two ultra low-power mesh networks used by smart home devices. [...] The Hub connects your smart home network to the Internet, giving you access to a control app and connecting to other services like your favorite voice assistant. You might think that killing the old Hub could be a ploy to sell more hardware, but Samsung -- a hardware company -- is actually no longer interested in making SmartThings hardware. The company passed manufacturing for the latest "SmartThings Hub (v3)" to German Internet-of-things company Aeotec. The new Hub is normally $125, but Samsung is offering existing users a dirt-cheap $35 upgrade price. For users who have to buy a new hub, migrating between hubs in the SmartThings ecosystem is a nightmare. Samsung doesn't provide any kind of migration program, so you have to unpair every single individual smart device from your old hub to pair it to the new one. This means you'll need to perform some kind of task on every light switch, bulb, outlet, and sensor, and you'll have to do the same for any other smart thing you've bought over the years. 
Doing this on each device is a hassle that usually involves finding the manual to look up the secret "exclusion" input, which is often some arcane Konami code. Picture holding the top button on a paddle light for seven seconds until a status light starts blinking and then opening up the SmartThings app to unpair it. Samsung is also killing the "SmartThings Link for Nvidia Shield" dongle, which let users turn Android TV devices into SmartThings Hubs.

Read more...

Bill Gates' Next Generation Nuclear Reactor To Be Built In Wyoming
Billionaire Bill Gates' advanced nuclear reactor company TerraPower LLC and PacifiCorp have selected Wyoming to launch the first Natrium reactor project on the site of a retiring coal plant, the state's governor said on Wednesday. Reuters reports: TerraPower, founded by Gates about 15 years ago, and power company PacifiCorp, owned by Warren Buffett's Berkshire Hathaway, said the exact site of the Natrium reactor demonstration plant is expected to be announced by the end of the year. Small advanced reactors, which run on different fuels than traditional reactors, are regarded by some as a critical carbon-free technology that can supplement intermittent power sources like wind and solar as states strive to cut emissions that cause climate change. The project features a 345 megawatt sodium-cooled fast reactor with molten salt-based energy storage that could boost the system's power output to 500 MW during peak power demand. TerraPower said last year that the plants would cost about $1 billion. Late last year the U.S. Department of Energy awarded TerraPower $80 million in initial funding to demonstrate Natrium technology, and the department has committed additional funding in coming years subject to congressional appropriations.
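The 345 MW reactor and 500 MW peak figures above imply how much the molten-salt store must discharge during a demand peak. A rough sketch, where the four-hour peak duration is our assumption rather than a TerraPower figure:

```python
# Rough arithmetic on the Natrium design described above: the reactor
# runs flat out at 345 MW, and the salt store covers the gap up to the
# 500 MW peak output. The peak duration here is an assumed number.

REACTOR_MW = 345
PEAK_MW = 500

def storage_discharge_needed(peak_hours):
    """MWh the thermal store must discharge to sustain the peak."""
    return (PEAK_MW - REACTOR_MW) * peak_hours

boost = storage_discharge_needed(4)  # a hypothetical 4-hour evening peak
```

The storage layer is what lets a constant-output reactor follow a variable load, the same role batteries play for solar and wind.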

Read more...

This site ©Copyright 2001-2010 Overclockers Melbourne. All content contained within this site is property of the author(s) and may not be copied in part or in full without the express written consent of the webmaster and the author(s). Overclockers Melbourne can not and will not be held responsible for any downtime or harm done to your system through the following of any guides written, or linked to, by this site.