Monday, December 23, 2024

Elon Musk’s xAI supercomputer gets 150MW power boost despite concerns over grid impact and local power stability


  • Elon Musk's xAI supercomputer gets power boost amid concerns
  • 150MW approval raises questions about grid reliability in Tennessee
  • Local stakeholders voice concerns over growing data center demands

Elon Musk’s xAI supercomputer has taken a major step forward with approval for 150 megawatts of power from the Tennessee Valley Authority (TVA).

This approval significantly boosts the facility’s energy supply, enabling it to run all 100,000 of its GPUs concurrently, a feat previously limited by available power.

However, this massive energy demand has raised concerns among local stakeholders regarding the impact on the region's power grid.

xAI expands power use

When xAI first launched its supercomputer in July 2024, it required significantly more energy than was available. Initially, only 8MW of power was available at the site, which was insufficient to meet the demands of the AI data center.

Musk’s team improvised by using portable power stations to fill the gap. Over the summer, Memphis Light, Gas & Water (MLGW), a local utility company, upgraded the existing substation to provide 50MW of power, still far short of the requirements to fully operate the facility.

The xAI supercomputer, nicknamed the “Gigafactory of Compute,” is designed to support Musk’s artificial intelligence company. To run all of its 100,000 GPUs simultaneously, the data center needs an estimated 155MW of power, meaning the new approval for 150MW is just enough to get close to full capacity.

With approval for an additional 150MW, MLGW and TVA have worked to assure local residents that the increased demand from xAI will not negatively impact power reliability in the Memphis area. According to MLGW’s CEO Doug McGowen, the additional power needed for xAI’s operations is still within the utility’s peak load forecast, and measures are in place to buy more energy from TVA if necessary.

To meet these growing energy needs, many tech companies, including Amazon, Google, Microsoft, and Oracle, are investing in alternative energy sources, particularly nuclear power. However, it will take at least five years before nuclear energy solutions are ready for widespread deployment.

Until then, companies like xAI must rely on existing infrastructure to power their data centers, raising concerns about grid stability and the ability to keep up with increasing demands.

“We are alarmed that the TVA Board rubberstamped xAI’s request for power without studying the impact it will have on local communities,” says Southern Environmental Law Center senior attorney Amanda Garcia.

“Board members expressed concern about the impact large industrial energy users have on power bills across the Tennessee Valley. TVA should be prioritizing families over data centers like xAI,” Garcia notes.


from Latest from TechRadar US in News,opinion https://ift.tt/XjC3iso

How to send a personal video message from Santa using AI

Want to send a special message straight from the North Pole this year? AI video developer Synthesia has you covered, offering festive greetings with a dash of AI magic. You can get a digital Santa Claus speaking right to you or to whoever you wish using Synthesia’s AI-powered video platform.

Each personalized video message stars a lifelike AI-generated Santa, and even less tech-savvy well-wishers can use the tool easily. You can pick from an array of templates showing cozy living rooms adorned with Christmas trees and a comfy chair where Santa sits and shares your message. Synthesia’s virtual elves then work their magic: your heartfelt greeting is processed by the platform’s AI-powered text-to-speech and video generation technology, and your message is sent. Santa is the latest of Synthesia's more than 230 pre-designed AI avatars, which also include custom creations.

Synthesia has the most comprehensive AI Santa message, but it's not alone. OpenAI debuted Santa Mode for ChatGPT last week, giving the AI chatbot a simulated version of Santa's voice for Advanced Voice Mode, which is described as "merry and bright."

Santa delivers a dose of Christmas spirit with striking realism and can speak 140 different languages. To maintain its family-friendly charm, Synthesia screens all user-submitted scripts to prevent any untoward or non-jolly messages. You can see my example below.

How to send a message from Santa

If you want to send a video from Santa, follow these steps:

1. Choose a Template: Visit Synthesia's Santa video generator page and select from festive templates.

2. Craft Your Message: Write a personalized message for your recipient. If you're unsure what to say, consider using an AI writing assistant for inspiration.

3. Submit and Generate: After finalizing your message, submit it through the platform. In just a few minutes, Synthesia's AI processes the text, generating a lifelike video featuring Santa delivering your message.

4. Share the Joy: Once the video is ready, it will be emailed directly to you. You can then share it with your loved ones, bringing a personalized touch to your holiday greetings.
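For developers, Synthesia also offers a programmatic API for generating avatar videos. As a purely illustrative sketch (the payload field names and the `santa` avatar ID below are assumptions for illustration, not Synthesia's documented schema), a request body for a personalized Santa video might be assembled like this:

```python
import json

def build_santa_request(script: str, recipient_name: str) -> dict:
    # Hypothetical payload shape -- field names and the avatar ID are
    # assumptions, not Synthesia's documented schema; check the official
    # API reference before sending anything.
    return {
        "title": f"Santa message for {recipient_name}",
        "input": [{
            "avatar": "santa",       # assumed avatar identifier
            "scriptText": script,    # the message Santa will read aloud
        }],
    }

payload = build_santa_request("Ho ho ho, Merry Christmas!", "Alice")
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to Synthesia's video-creation endpoint with an API key; consult the official API documentation for the real schema.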


A $100bn tech company you've probably never heard of is teaming up with the world's biggest memory manufacturers to produce supercharged HBM


  • HBM is fundamental to the AI revolution as it allows ultra fast data transfer close to the GPU
  • Scaling HBM performance is difficult if it sticks to JEDEC protocols
  • Marvell and others want to develop a custom HBM architecture to accelerate HBM development

Marvell Technology has unveiled a custom HBM compute architecture designed to increase the efficiency and performance of XPUs, a key component in the rapidly evolving cloud infrastructure landscape.

The new architecture, developed in collaboration with memory giants Micron, Samsung, and SK Hynix, aims to address limitations in traditional memory integration by offering tailored solutions for next-generation data center needs.

The architecture focuses on improving how XPUs - used in advanced AI and cloud computing systems - handle memory. By optimizing the interfaces between AI compute silicon dies and High Bandwidth Memory stacks, Marvell claims the technology reduces power consumption by up to 70% compared to standard HBM implementations.

Moving away from JEDEC

Additionally, its redesign reportedly decreases silicon real estate requirements by as much as 25%, allowing cloud operators to expand compute capacity or include more memory. This could potentially allow XPUs to support up to 33% more HBM stacks, massively boosting memory density.
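Taken together, the claimed gains compound. A back-of-the-envelope sketch in Python (the baseline figures are hypothetical placeholders; only the percentages come from Marvell's claims) illustrates the headroom:

```python
# Back-of-the-envelope check of Marvell's claimed cHBM gains.
# Baseline values are hypothetical, chosen only for illustration.
baseline_interface_power_w = 100.0   # hypothetical HBM interface power budget
baseline_die_area_mm2 = 800.0        # hypothetical silicon budget for memory I/O
baseline_hbm_stacks = 6              # hypothetical stack count on one XPU

custom_power = baseline_interface_power_w * (1 - 0.70)  # "up to 70% less power"
custom_area = baseline_die_area_mm2 * (1 - 0.25)        # "25% less silicon"
custom_stacks = baseline_hbm_stacks * 4 // 3            # "up to 33% more stacks"

print(custom_power, custom_area, custom_stacks)  # ~30 W, 600 mm^2, 8 stacks
```

The point of the sketch is that the area saved by the tighter interfaces is what buys room for the extra stacks.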

“The leading cloud data center operators have scaled with custom infrastructure. Enhancing XPUs by tailoring HBM for specific performance, power, and total cost of ownership is the latest step in a new paradigm in the way AI accelerators are designed and delivered,” Will Chu, Senior Vice President and General Manager of the Custom, Compute and Storage Group at Marvell said.

“We’re very grateful to work with leading memory designers to accelerate this revolution and help cloud data center operators continue to scale their XPUs and infrastructure for the AI era.”

HBM plays a central role in XPUs, which use advanced packaging technology to integrate memory and processing power. Traditional architectures, however, limit scalability and energy efficiency.

Marvell’s new approach modifies the HBM stack itself and its integration, aiming to deliver better performance for less power and lower costs - key considerations for hyperscalers who are continually seeking to manage rising energy demands in data centers.

ServeTheHome’s Patrick Kennedy, who reported the news live from Marvell Analyst Day 2024, noted the cHBM (custom HBM) is not a JEDEC solution and so will not be standard off the shelf HBM.

“Moving memory away from JEDEC standards and into customization for hyperscalers is a monumental move in the industry,” he writes. “This shows Marvell has some big hyperscale XPU wins since this type of customization in the memory space does not happen for small orders.”

The collaboration with leading memory makers reflects a broader trend in the industry toward highly customized hardware.

“Increased memory capacity and bandwidth will help cloud operators efficiently scale their infrastructure for the AI era,” said Raj Narasimhan, senior vice president and general manager of Micron’s Compute and Networking Business Unit.

“Strategic collaborations focused on power efficiency, such as the one we have with Marvell, will build on Micron’s industry-leading HBM power specs, and provide hyperscalers with a robust platform to deliver the capabilities and optimal performance required to scale AI.”


Sunday, December 22, 2024

From lab to life - atomic-scale memristors pave the way for brain-like AI and next-gen computing power


  • Memristors to bring brain-like computing to AI systems
  • Atomically tunable devices offer energy-efficient AI processing
  • Neuromorphic circuits open new possibilities for artificial intelligence

A new frontier in semiconductor technology could be closer than ever after the development of atomically tunable “memristors”, cutting-edge memory resistors that emulate the human brain's neural network.

With funding from the National Science Foundation’s Future of Semiconductors program (FuSe2), this initiative aims to create devices that enable neuromorphic computing - a next-generation approach designed for high-speed, energy-efficient processing that mimics the brain’s ability to learn and adapt.

At the core of this innovation is the creation of ultrathin memory devices with atomic-scale control, potentially revolutionizing AI by allowing memristors to act as artificial synapses and neurons. These devices have the potential to significantly enhance computing power and efficiency, opening new possibilities for artificial intelligence applications, all while training a new generation of experts in semiconductor technology.

Neuromorphic computing challenges

The project focuses on solving one of the most fundamental challenges in modern computing: achieving the precision and scalability needed to bring brain-inspired AI systems to life.

To develop energy-efficient, high-speed networks that function like the human brain, memristors are the key components. They can store and process information simultaneously, making them particularly suited to neuromorphic circuits where they can facilitate the type of parallel data processing seen in biological brains, potentially overcoming limitations in traditional computing architectures.
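The "store and process in one device" idea comes from the memristor's defining property: its resistance depends on the history of charge that has flowed through it. A minimal Python sketch of the classic HP linear ion drift model (a textbook teaching model, not the atomically tunable devices described in this article) shows that state-dependent resistance:

```python
import math

def simulate_memristor(v_amp=1.0, f=1.0, steps=2000):
    """Minimal HP-style linear ion drift memristor model (teaching sketch)."""
    R_on, R_off = 100.0, 16_000.0   # resistance bounds, ohms
    D = 10e-9                        # film thickness, m
    mu = 1e-14                       # ion mobility, m^2/(V*s)
    w = 0.5 * D                      # state variable: doped-region width
    dt = 1.0 / (f * steps)           # one full sine period over all steps
    resistances = []
    for n in range(steps):
        v = v_amp * math.sin(2 * math.pi * f * n * dt)
        R = R_on * (w / D) + R_off * (1 - w / D)   # resistance from state
        i = v / R
        w += mu * R_on / D * i * dt                # state drifts with charge
        w = min(max(w, 0.0), D)                    # clamp to physical bounds
        resistances.append(R)
    return resistances

rs = simulate_memristor()
print(min(rs), max(rs))  # resistance varies with stimulus history
```

Because resistance tracks past current, the device simultaneously "remembers" and transforms signals, which is what makes memristors attractive as artificial synapses.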

The joint research effort between the University of Kansas (KU) and the University of Houston, led by Judy Wu, a distinguished professor of physics and astronomy at KU, is supported by a $1.8 million grant from FuSe2.

Wu and her team have pioneered a method for achieving sub-2-nanometer thickness in memory devices, with film layers approaching an astonishing 0.1 nanometers, roughly a tenth of the typical nanometer scale.

These advancements are crucial for future semiconductor electronics, as they allow for the creation of devices that are both extremely thin and capable of precise functionality, with large-area uniformity. The research team will also use a co-design approach that integrates material design, fabrication, and testing.

In addition to its scientific aims, the project also has a strong focus on workforce development. Recognizing the growing need for skilled professionals in the semiconductor industry, the team has designed an educational outreach component led by experts from both universities.

“The overarching goal of our work is to develop atomically ‘tunable’ memristors that can act as neurons and synapses on a neuromorphic circuit. By developing this circuit, we aim to enable neuromorphic computing. This is the primary focus of our research," said Wu.

"We want to mimic how our brain thinks, computes, makes decisions and recognizes patterns — essentially, everything the brain does with high speed and high energy efficiency."


New Androxgh0st botnet targets vulnerabilities in IoT devices and web applications via Mozi integration

  • Androxgh0st’s integration with Mozi amplifies global risks
  • IoT vulnerabilities are the new battleground for cyberattacks
  • Proactive monitoring is essential to combat emerging botnet threats

Researchers have recently identified a major evolution in the Androxgh0st botnet, which has grown more dangerous with the integration of the Mozi botnet’s capabilities.

What began as a web server-targeted attack in early 2024 has now expanded, allowing Androxgh0st to exploit vulnerabilities in IoT devices, CloudSEK’s Threat Research team has said.

Its latest report claims the botnet is now equipped with Mozi’s advanced techniques for infecting and spreading across a wide range of networked devices.

The resurgence of Mozi: A unified botnet infrastructure

Mozi, previously known for infecting IoT devices like Netgear and D-Link routers, was believed to be inactive following a killswitch activation in 2023.

However, CloudSEK has revealed Androxgh0st has integrated Mozi’s propagation capabilities, significantly amplifying its potential to target IoT devices.

By deploying Mozi’s payloads, Androxgh0st now has a unified botnet infrastructure that leverages specialized tactics to infiltrate IoT networks. This fusion enables the botnet to spread more efficiently through vulnerable devices, including routers and other connected technology, making it a more formidable force.

Beyond its integration with Mozi, Androxgh0st has expanded its range of targeted vulnerabilities, exploiting weaknesses in critical systems. CloudSEK’s analysis shows Androxgh0st is now actively attacking major technologies, including Cisco ASA, Atlassian JIRA, and several PHP frameworks.

In Cisco ASA systems, the botnet exploits cross-site scripting (XSS) vulnerabilities, injecting malicious scripts through unspecified parameters. It also targets Atlassian JIRA with a path traversal vulnerability (CVE-2021-26086), allowing attackers to gain unauthorized access to sensitive files. In PHP frameworks, Androxgh0st exploits older vulnerabilities such as those in Laravel (CVE-2018-15133) and PHPUnit (CVE-2017-9841), facilitating backdoor access to compromised systems.

Androxgh0st’s threat landscape is not limited to older vulnerabilities. It is also capable of exploiting newly discovered vulnerabilities, such as CVE-2023-1389 in TP-Link Archer AX21 firmware, which allows for unauthenticated command execution, and CVE-2024-36401 in GeoServer, a vulnerability that can lead to remote code execution.

The botnet now also uses brute-force credential stuffing, command injection, and file inclusion techniques to compromise systems. By leveraging Mozi’s IoT-focused tactics, it has significantly widened its geographical impact, spreading its infections across regions in Asia, Europe, and beyond.

CloudSEK recommends that organizations strengthen their security posture to mitigate potential attacks. While immediate patching is essential, proactive monitoring of network traffic is also important. By tracking suspicious outbound connections and detecting anomalous login attempts, particularly from IoT devices, organizations can spot early signs of an Androxgh0st-Mozi collaboration.
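As a minimal illustration of that kind of monitoring, the sketch below scans web-server access log lines for a few probe patterns associated with the vulnerabilities mentioned above (the indicator list and log format are illustrative assumptions; production detection should rely on maintained threat-intelligence feeds):

```python
import re

# Illustrative indicators only -- real detections should use current
# threat-intel feeds, not this hard-coded sample list.
SUSPICIOUS_PATTERNS = [
    r"/\.env\b",                            # credential-file probing
    r"/vendor/phpunit/.*eval-stdin\.php",   # CVE-2017-9841-style probe
    r"/geoserver/.*ows\?service=wfs",       # CVE-2024-36401-style probe
]

def flag_suspicious(log_lines):
    """Return the log lines matching known botnet probe patterns."""
    compiled = [re.compile(p, re.IGNORECASE) for p in SUSPICIOUS_PATTERNS]
    return [line for line in log_lines
            if any(p.search(line) for p in compiled)]

sample = [
    '10.0.0.5 - - "GET /index.html HTTP/1.1" 200',
    '10.0.0.9 - - "GET /.env HTTP/1.1" 404',
    '10.0.0.9 - - "GET /vendor/phpunit/phpunit/src/Util/PHP/eval-stdin.php HTTP/1.1" 404',
]
print(flag_suspicious(sample))  # flags the two probe lines
```

Even a simple filter like this surfaces the repetitive scanning behaviour that typically precedes exploitation.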


TrueNAS device vulnerabilities exposed during hacking competition

  • TrueNAS recommends hardening systems to mitigate risks
  • Pwn2Own showcases diverse attack vectors on NAS systems
  • Cybersecurity teams earn over $1 million by finding exploits

At the recent Pwn2Own Ireland 2024 event, security researchers identified vulnerabilities in various high-use devices, including network-attached storage (NAS) devices, cameras, and other connected products.

TrueNAS was one of the companies whose products were successfully targeted during the event, with vulnerabilities found in its products with default, non-hardened configurations.

Following the competition, TrueNAS has started implementing updates to secure its products against these newly discovered vulnerabilities.

Security gaps across multiple devices

During the competition, multiple teams successfully exploited TrueNAS Mini X devices, demonstrating the potential for attackers to leverage interconnected vulnerabilities between different network devices. Notably, the Viettel Cyber Security team earned $50,000 and 10 Master of Pwn points by chaining SQL injection and authentication bypass vulnerabilities from a QNAP router to the TrueNAS device.

The Computest Sector 7 team also executed a successful attack by exploiting both a QNAP router and a TrueNAS Mini X using a chain of four vulnerabilities. The flaws involved command injection, SQL injection, authentication bypass, improper certificate validation, and hardcoded cryptographic keys.

TrueNAS responded to the results by releasing an advisory for its users, acknowledging the vulnerabilities and emphasizing the importance of following security recommendations to protect data storage systems against potential exploits.

By adhering to these guidelines, users can increase their defences, making it harder for attackers to leverage known vulnerabilities.

TrueNAS informed customers that the vulnerabilities affected default, non-hardened installations, meaning that users who follow recommended security practices are already at a reduced risk.

TrueNAS has advised all users to review its security guidance and implement best practices, which can significantly minimize exposure to potential threats until the patches are fully rolled out.

Via SecurityWeek


Could this be Dell's fastest laptop ever built? Dell Pro Max 18 Plus set to have 'RTX 5000 class' GPU capabilities and Tandem OLED display


  • First look at Dell Pro Max 18 Plus emerges in new images
  • Pictures show a completely redesigned mobile workstation laptop
  • Pro Max could either replace popular Precision range or be a whole new range, offering up to 256GB RAM and up to 16TB SSD

Leaked details suggest Dell is developing a new addition to its workstation offerings, designed to deliver high-performance capabilities for professional workloads.

Available in two sizes, the Dell Pro Max 18 Plus is expected to debut officially at CES 2025 and could either replace the popular Precision range or form an entirely new lineup.

The larger Pro Max 18 Plus allegedly features an 18-inch display, while the Pro Max 16 Plus provides a smaller 16-inch alternative with similar specifications. According to information shared by Song1118 on Weibo, which includes Dell marketing slides, the laptops will be powered by Intel’s upcoming Core Ultra 200HX “Arrow Lake-HX” CPUs. For graphics, the series will reportedly feature Nvidia’s Ada-based RTX 5000-class workstation GPUs, though the exact model isn’t named in the leaked documents.

Triple-fan cooling system

The Pro Max series is set to offer up to 200 watts for the CPU/GPU combination in the 18-inch version and 170 watts in the 16-inch model. VideoCardz notes that while we have already seen much higher targets in ultra-high-end gaming machines, “this would be the first laptop confirmed to offer 200W for a next-gen Intel/Nvidia combo.”

The laptops will reportedly support up to 256GB of CAMM2 memory. The 18-inch model can accommodate up to 16TB of storage via four M.2 2280 SSD slots, while the 16-inch version supports 12TB with three slots. The heat generated by these high-power components will be managed by an “industry first” triple-fan cooling system.

Additional features are expected to include a magnesium alloy body to reduce weight, an 8MP camera, and a tandem OLED display option. Connectivity options include Thunderbolt 5 (80/120Gbps), WiFi 7, Bluetooth 5.4, and optional 5G WWAN. The two laptops also feature a quick-access bottom cover for easy serviceability and repairability of key components like batteries, memory, and storage.

The Dell Pro Max 16/18 Plus laptops are expected to be officially unveiled along with pricing at CES on January 7, 2025, with a mid-2025 release window.


Saturday, December 21, 2024

This new compact mini PC can support Intel 12th to 14th Gen processors and up to 96 GB DDR5 RAM

  • Shuttle XH610G2 offers compact design supporting Intel Core processors up to 24 cores
  • Exclusive heat pipe technology ensures reliable operation in demanding environments
  • Flexible storage options include M.2 slots and SATA interfaces

Shuttle has released its latest mini PC, aimed at meeting the diverse demands of modern commercial tasks.

With a small 5-liter chassis and a compact design measuring just 250mm x 200mm x 95mm, the Shuttle XH610G2 employs the Intel H610 chipset, making it compatible with a broad spectrum of Intel Core processors, from the latest 14th Gen models back to the 12th Gen series.

The company says the device is designed to handle applications that require significant computational power, such as image recognition, 3D video creation, and AI data processing.

Shuttle XH610G2

The Shuttle XH610G2 features exclusive heat pipe cooling technology that allows the workstation to operate reliably even in demanding environments. Capable of withstanding temperatures from 0 to 50 degrees Celsius, it is suitable for continuous operation in various commercial settings.

The Shuttle XH610G2 can accommodate Intel Core models with up to 24 cores and a peak clock speed of 5.8GHz. This processing power allows the workstation to handle intensive tasks while staying within a 65W thermal design power (TDP) limit. The graphics are enhanced by the integrated Intel UHD graphics with Xe architecture, offering capabilities to manage demanding visual applications, from high-quality media playback to 4K triple-display setups. The inclusion of dual HDMI 2.0b ports and a DisplayPort output facilitates independent 4K display support.

The XH610G2 offers extensive customization and scalability with support for dual PCIe slots, one x16 and one x1, allowing users to install discrete graphics cards or other high-performance components like video capture cards.

For memory, the XH610G2 supports up to 64GB of DDR5-5600 SO-DIMM memory split across two slots, making it ideal for resource-intensive applications and giving the system the power to handle complex computational tasks efficiently. Running at a low 1.1V, this memory configuration also minimizes energy consumption, a significant advantage in power-conscious environments.

In terms of storage, this device features a SATA 6.0Gb/s interface for a 2.5-inch SSD or HDD, along with two M.2 slots for NVMe and SATA storage options. Users are recommended to choose a SATA SSD over a traditional HDD to ensure faster performance.

The I/O options on the XH610G2 further enhance its flexibility, with four USB 3.2 Gen 1 ports, two Ethernet ports, one supporting 1GbE and another 2.5GbE, and an optional RS232 COM port offering enhanced compatibility for specialized peripheral connections, which can be particularly useful in industrial or legacy environments.

Furthermore, the compact chassis includes M.2 expansion slots for both WLAN and LTE adapters, providing options for wireless connectivity that can be critical in setups where wired connections are not feasible.


CAMM2 memory modules promise significant advancements in memory technology with impressive read and write speeds

  • TeamGroup claims CAMM2 memory promises high-speed DDR5 performance
  • Revolutionary design offers dual-channel operation in a single module
  • Limited motherboard compatibility poses challenges for CAMM2 adoption

TeamGroup has introduced its Compression Attached Memory Module 2 (CAMM2), promising high-speed DDR5 performance with its new T-Create lineup.

The company says CAMM2 features a revolutionary design that offers significant advantages over traditional memory types like SO-DIMM, U-DIMM, and R-DIMM. It supports dual-channel operation with just one module, streamlining system architecture and lowering power consumption.

The built-in Client Clock Driver (CKD) boosts signal integrity, making CAMM2 well-suited for slim notebooks while its optimized thermal design enhances heat dissipation, allowing higher performance despite the smaller form factor.

CAMM2-compatible motherboards are very scarce

The T-Create CAMM2 modules are designed with DDR5-7200 specifications and a CAS latency of CL34-42-42-84, delivering remarkable read, write, and copy speeds of up to 117GB/s, 108GB/s, and 106GB/s, respectively.

This performance is achieved through manual overclocking, which has driven latency down to 55ns, a significant reduction compared to typical DDR5 JEDEC specifications. TeamGroup is now focused on pushing boundaries and the company says it is working to achieve even faster speeds, aiming to reach DDR5-8000 and even DDR5-9000 in future iterations.
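The quoted read speed is consistent with theory: a dual-channel DDR5-7200 configuration has a peak bandwidth of 7200 MT/s times 8 bytes per 64-bit channel times 2 channels, as this quick sanity check shows:

```python
# Sanity-check the quoted CAMM2 read speed against theoretical
# dual-channel DDR5-7200 bandwidth.
transfers_per_sec = 7200e6      # DDR5-7200: 7200 megatransfers/s
bytes_per_transfer = 8          # one 64-bit channel moves 8 bytes per transfer
channels = 2                    # CAMM2 runs dual-channel on a single module
peak_gb_s = transfers_per_sec * bytes_per_transfer * channels / 1e9
print(peak_gb_s)  # 115.2 -- in the same ballpark as TeamGroup's 117GB/s
```

The small gap between the 115.2GB/s theoretical figure and the reported 117GB/s is plausibly down to the manual overclocking TeamGroup describes.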

One major setback for TeamGroup lies in the availability of CAMM2-compatible motherboards, which are currently limited. The T-Create CAMM2 memory was tested on MSI’s Z790 Project Zero, one of the few boards currently compatible with this new form factor.

Other brands, such as Gigabyte, hint at possible CAMM2-enabled designs, like an upcoming TACHYON board. However, the CAMM2 ecosystem is still emerging, and widespread adoption may depend on the release of more compatible boards and competitive pricing.

Nevertheless, TeamGroup expects to launch the first-generation T-Create CAMM2 modules by Q1 2025, with broader motherboard support potentially arriving as manufacturers introduce new CPU platforms. With AMD and Intel rumoured to announce budget-friendly CPUs at CES 2025, the rollout of mid-range boards compatible with CAMM2 could align with TeamGroup’s release plans, potentially helping CAMM2 secure a foothold in the market.

CAMM2 offers a couple of advantages over the widely used SO-DIMM, UDIMM, and RDIMM standards. Notably, CAMM2 modules operate in dual-channel mode while only occupying a single physical slot. Furthermore, they incorporate a Client Clock Driver (CKD), similar to CUDIMM memory, which bolsters signal integrity at high speeds, allowing for more reliable and faster memory performance.

These features make CAMM2 particularly appealing for laptops, which often face limitations with current SO-DIMM speeds or non-upgradeable LPDDR5/5X options.

Via Tom's Hardware


We may have to wait longer for the OnePlus Open 2 than we thought


  • We might not see the OnePlus Open 2 until later in 2025
  • Previous leaks predicted a Q1 2025 launch
  • Major upgrades have been rumored for the foldable

A quick browse through our OnePlus Open review will tell you why we're very much looking forward to the foldable phone's successor – though if a new leak is to be believed, the wait for the OnePlus Open 2 might be longer than originally thought.

According to tipster Sanju Choudhary (via GSMArena), the handset is going to break cover during the second half of next year – anytime from July onwards. That contradicts an earlier rumor that it would be unveiled in the first three months of 2025.

There's no indication whether or not OnePlus has changed its plans, or if the launch date was originally set for the first quarter of next year and has since been pushed back (engineering foldable phones is a tricky challenge, after all).

It's also fair to say that none of these rumors can be confirmed until OnePlus actually makes its announcement. The original OnePlus Open was launched in October 2023, which doesn't really tell us much about a schedule for its successor.

Upgrades on the way

Whenever the next OnePlus folding phone shows up, it sounds like it's going to be worth the wait – which has lasted 14 months and counting. Rumors have pointed to major upgrades in terms of the rear camera and the internal components.

We've also heard that the OnePlus Open 2 will have the biggest battery ever seen in a foldable, as well as being thinner and more waterproof than the handset it's replacing. That's a significant number of improvements to look forward to.

In our OnePlus Open review, we described the phone as "the only foldable phone that doesn't compromise", and there was particular praise for the design and the camera setup – so the upcoming upgrade has a lot to live up to.

Before we see another foldable from OnePlus, we'll see the OnePlus 13 and the OnePlus 13R made available worldwide: OnePlus has confirmed this is happening on January 7, so we could also get a teaser for the OnePlus Open 2 at the same time.


Friday, December 20, 2024

Only 15% of Steam users have played games released in 2024, but why?

  • 15% of Steam users' playtime dedicated to 2024 games
  • 47% of playtime on games up to eight years old
  • Many reasons for this, including more older games to play

Steam’s end-of-the-year review has always revealed some fascinating PC gaming trends, and this year’s is no exception. According to 2024’s stats, only 15% of total playtime on Steam went to games that launched in 2024.

Looking further at the data that PC Gamer reports on, 47% of the total playing time on Steam was spent on games released in the last seven years, while 37% of that time was spent on games that launched eight years or more ago. Now the question is, why and what does this mean?

One possible explanation is that gamers could be focusing more on their backlogs rather than new releases. We do know that playtime for current releases is higher this year than in 2023, as there was an increase from 9% to 15%, which means players are buying new titles at least. There are other possibilities for this trend as well.

Other possibilities for this statistic

One reason could be that older games are easier to access due to their cheaper prices, especially due to the many Steam sales. There’s also the influence of the Steam Deck and what’s considered ‘Steam Deck playable,’ since many recent AAA games may be too demanding for a portable PC.

There’s also the fact that older live service games like Counter-Strike, Dota 2, and PUBG have made up Steam's Most Played charts, while newer titles have an incredibly difficult time breaking through and building a player base.

Another reason is that Steam has over 200,000 titles released over the course of decades, compared to the relatively paltry 18,000 games released in 2024 according to SteamDB. So naturally, more users will spend more time playing older games versus recent ones.
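Putting the article's figures side by side makes the point concrete: 2024 releases are a small slice of the catalog but a larger slice of playtime:

```python
# Compare catalog share vs playtime share for 2024 releases, using the
# article's figures (SteamDB catalog counts are approximate).
catalog_total = 200_000
catalog_2024 = 18_000
playtime_share_2024 = 0.15

catalog_share = catalog_2024 / catalog_total
print(f"catalog share: {catalog_share:.0%}")          # 9%
print(f"playtime share: {playtime_share_2024:.0%}")   # 15%
# New releases punch above their catalog weight, but the long tail of
# older titles still absorbs most playing time.
```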

Regardless, 15% of playtime dedicated to new games is rather impressive, approaching 2022’s 17% figure. It means the numbers are recovering after the massive dip to 9% in 2023. Hopefully next year we’ll see another increase, as gamers delve into more new titles.

