Monday, December 23, 2024

Elon Musk’s xAI supercomputer gets 150MW power boost despite concerns over grid impact and local power stability


  • Elon Musk's xAI supercomputer gets power boost amid concerns
  • 150MW approval raises questions about grid reliability in Tennessee
  • Local stakeholders voice concerns over growing data center demands

Elon Musk’s xAI supercomputer has taken a major step forward with approval for 150 megawatts of power from the Tennessee Valley Authority (TVA).

This approval significantly boosts the facility’s energy supply, enabling it to run all 100,000 of its GPUs concurrently - something that wasn't previously possible with the power available on site.

However, this massive energy demand has raised concerns among local stakeholders regarding the impact on the region's power grid.

xAI expands power use

When xAI first launched its supercomputer in July 2024, it required significantly more energy than was available. Initially, only 8MW of power was available at the site, which was insufficient to meet the demands of the AI data center.

Musk’s team improvised by using portable power stations to fill the gap. Over the summer, Memphis Light, Gas & Water (MLGW), a local utility company, upgraded the existing substation to provide 50MW of power, still far short of the requirements to fully operate the facility.

The xAI supercomputer, nicknamed the “Gigafactory of Compute,” is designed to support Musk’s artificial intelligence company. To run all of its 100,000 GPUs simultaneously, the data center needs an estimated 155MW of power, meaning the new approval for 150MW is just enough to get close to full capacity.
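For a rough sense of scale, the figures above imply an average power budget of roughly 1.5kW per GPU once facility overhead such as cooling and networking is presumably folded in. A quick back-of-envelope sketch, using only the numbers quoted in this article:

```python
# Back-of-envelope check using the figures quoted above.
facility_mw = 155      # estimated demand to run all GPUs concurrently (MW)
approved_mw = 150      # newly approved TVA supply (MW)
gpu_count = 100_000

watts_per_gpu = facility_mw * 1_000_000 / gpu_count   # facility-wide average
shortfall_mw = facility_mw - approved_mw

print(f"Average facility power per GPU: {watts_per_gpu:.0f} W")  # ~1550 W
print(f"Gap to full estimated demand: {shortfall_mw} MW")        # ~5 MW
```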

With approval for an additional 150MW, MLGW and TVA have worked to assure local residents that the increased demand from xAI will not negatively impact power reliability in the Memphis area. According to MLGW’s CEO Doug McGowen, the additional power needed for xAI’s operations is still within the utility’s peak load forecast, and measures are in place to buy more energy from TVA if necessary.

To meet these growing energy needs, many tech companies, including Amazon, Google, Microsoft, and Oracle, are investing in alternative energy sources, particularly nuclear power. However, it will take at least five years before nuclear energy solutions are ready for widespread deployment.

Until then, companies like xAI must rely on existing infrastructure to power their data centers, raising concerns about grid stability and the ability to keep up with increasing demands.

“We are alarmed that the TVA Board rubberstamped xAI’s request for power without studying the impact it will have on local communities,” says Southern Environmental Law Center senior attorney Amanda Garcia.

“Board members expressed concern about the impact large industrial energy users have on power bills across the Tennessee Valley. TVA should be prioritizing families over data centers like xAI," Garcia notes.

You might also like



from Latest from TechRadar US in News,opinion https://ift.tt/XjC3iso

How to send a personal video message from Santa using AI

Want to send a special message straight from the North Pole this year? AI video developer Synthesia has you covered with festive greetings and a dash of AI magic. Using Synthesia’s AI-powered video platform, you can have a digital Santa Claus speak directly to you or to whoever you wish.

The personalized video messages star a lifelike AI-generated Santa, and even less tech-savvy well-wishers can use the service easily. You pick from an array of templates showing cozy living rooms adorned with Christmas trees, with Santa seated in a comfy chair to deliver your message. Synthesia’s virtual elves then work their magic: your heartfelt greeting is processed by the platform's AI-powered text-to-speech and video generation technology, and the finished video is sent on its way. Santa is the latest of Synthesia's more than 230 pre-designed AI avatars, which also include custom creations.

Synthesia has the most comprehensive AI Santa message, but it's not alone. OpenAI debuted Santa Mode for ChatGPT last week, giving the AI chatbot a simulated version of Santa's voice for Advanced Voice Mode, which is described as "merry and bright."

Santa delivers a dose of Christmas spirit with striking realism and can speak 140 different languages. To maintain its family-friendly charm, Synthesia screens all user-submitted scripts to prevent any untoward or non-jolly messages. You can see my example below.

How to send a message from Santa

If you want to send a video from Santa, here's how to do it:

1. Choose a Template: Visit Synthesia's Santa video generator page and select from festive templates.

2. Craft Your Message: Write a personalized message for your recipient. If you're unsure what to say, consider using an AI writing assistant for inspiration.

3. Submit and Generate: After finalizing your message, submit it through the platform. In just a few minutes, Synthesia's AI processes the text, generating a lifelike video featuring Santa delivering your message.

4. Share the Joy: Once the video is ready, it will be emailed directly to you. You can then share it with your loved ones, bringing a personalized touch to your holiday greetings.

You might also like



from Latest from TechRadar US in News,opinion https://ift.tt/sK42cDS

A $100bn tech company you've probably never heard of is teaming up with the world's biggest memory manufacturers to produce supercharged HBM


  • HBM is fundamental to the AI revolution as it allows ultra fast data transfer close to the GPU
  • Scaling HBM performance is difficult if it sticks to JEDEC protocols
  • Marvell and others want to develop a custom HBM architecture to accelerate its development

Marvell Technology has unveiled a custom HBM compute architecture designed to increase the efficiency and performance of XPUs, a key component in the rapidly evolving cloud infrastructure landscape.

The new architecture, developed in collaboration with memory giants Micron, Samsung, and SK Hynix, aims to address limitations in traditional memory integration by offering tailored solutions for next-generation data center needs.

The architecture focuses on improving how XPUs - used in advanced AI and cloud computing systems - handle memory. By optimizing the interfaces between AI compute silicon dies and High Bandwidth Memory stacks, Marvell claims the technology reduces power consumption by up to 70% compared to standard HBM implementations.

Moving away from JEDEC

Additionally, its redesign reportedly decreases silicon real estate requirements by as much as 25%, allowing cloud operators to expand compute capacity or include more memory. This could potentially allow XPUs to support up to 33% more HBM stacks, massively boosting memory density.
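Those two figures are consistent with one another: shrinking the memory interface to roughly three quarters of its former footprint frees enough area to host about a third more HBM stacks, assuming the reclaimed space is reused for memory. A minimal sketch of that arithmetic:

```python
# How a ~25% reduction in interface area maps to ~33% more HBM stacks,
# assuming the reclaimed silicon/interposer area is reused for memory.
area_saving = 0.25
remaining_fraction = 1 - area_saving        # interface now uses 75% of its old area

extra_stack_capacity = 1 / remaining_fraction - 1
print(f"Potential increase in HBM stacks: {extra_stack_capacity:.0%}")  # ~33%
```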

“The leading cloud data center operators have scaled with custom infrastructure. Enhancing XPUs by tailoring HBM for specific performance, power, and total cost of ownership is the latest step in a new paradigm in the way AI accelerators are designed and delivered,” Will Chu, Senior Vice President and General Manager of the Custom, Compute and Storage Group at Marvell said.

“We’re very grateful to work with leading memory designers to accelerate this revolution and help cloud data center operators continue to scale their XPUs and infrastructure for the AI era.”

HBM plays a central role in XPUs, which use advanced packaging technology to integrate memory and processing power. Traditional architectures, however, limit scalability and energy efficiency.

Marvell’s new approach modifies the HBM stack itself and its integration, aiming to deliver better performance for less power and lower costs - key considerations for hyperscalers who are continually seeking to manage rising energy demands in data centers.

ServeTheHome’s Patrick Kennedy, who reported the news live from Marvell Analyst Day 2024, noted that cHBM (custom HBM) is not a JEDEC solution and so will not be standard, off-the-shelf HBM.

“Moving memory away from JEDEC standards and into customization for hyperscalers is a monumental move in the industry,” he writes. “This shows Marvell has some big hyperscale XPU wins since this type of customization in the memory space does not happen for small orders.”

The collaboration with leading memory makers reflects a broader trend in the industry toward highly customized hardware.

“Increased memory capacity and bandwidth will help cloud operators efficiently scale their infrastructure for the AI era,” said Raj Narasimhan, senior vice president and general manager of Micron’s Compute and Networking Business Unit.

“Strategic collaborations focused on power efficiency, such as the one we have with Marvell, will build on Micron’s industry-leading HBM power specs, and provide hyperscalers with a robust platform to deliver the capabilities and optimal performance required to scale AI.”

More from TechRadar Pro



from Latest from TechRadar US in News,opinion https://ift.tt/bZRyjXo

Sunday, December 22, 2024

From lab to life - atomic-scale memristors pave the way for brain-like AI and next-gen computing power


  • Memristors to bring brain-like computing to AI systems
  • Atomically tunable devices offer energy-efficient AI processing
  • Neuromorphic circuits open new possibilities for artificial intelligence

A new frontier in semiconductor technology could be closer than ever after the development of atomically tunable “memristors”, which are cutting-edge memory resistors that emulate the human brain's neural network.

With funding from the National Science Foundation’s Future of Semiconductors program (FuSe2), this initiative aims to create devices that enable neuromorphic computing - a next-generation approach designed for high-speed, energy-efficient processing that mimics the brain’s ability to learn and adapt.

At the core of this innovation is the creation of ultrathin memory devices with atomic-scale control, potentially revolutionizing AI by allowing memristors to act as artificial synapses and neurons. These devices have the potential to significantly enhance computing power and efficiency, opening new possibilities for artificial intelligence applications, all while training a new generation of experts in semiconductor technology.

Neuromorphic computing challenges

The project focuses on solving one of the most fundamental challenges in modern computing: achieving the precision and scalability needed to bring brain-inspired AI systems to life.

To develop energy-efficient, high-speed networks that function like the human brain, memristors are the key components. They can store and process information simultaneously, making them particularly suited to neuromorphic circuits where they can facilitate the type of parallel data processing seen in biological brains, potentially overcoming limitations in traditional computing architectures.

The joint research effort between the University of Kansas (KU) and the University of Houston, led by Judy Wu, a distinguished professor of physics and astronomy at KU, is supported by a $1.8 million grant from FuSe2.

Wu and her team have pioneered a method for achieving sub-2-nanometer thickness in memory devices, with film layers approaching an astonishing 0.1 nanometers - around ten times thinner than the nanometer scale itself.

These advancements are crucial for future semiconductor electronics, as they allow for the creation of devices that are both extremely thin and capable of precise functionality, with large-area uniformity. The research team will also use a co-design approach that integrates material design, fabrication, and testing.

In addition to its scientific aims, the project also has a strong focus on workforce development. Recognizing the growing need for skilled professionals in the semiconductor industry, the team has designed an educational outreach component led by experts from both universities.

“The overarching goal of our work is to develop atomically ‘tunable’ memristors that can act as neurons and synapses on a neuromorphic circuit. By developing this circuit, we aim to enable neuromorphic computing. This is the primary focus of our research," said Wu.

"We want to mimic how our brain thinks, computes, makes decisions and recognizes patterns — essentially, everything the brain does with high speed and high energy efficiency."

You might also like



from Latest from TechRadar US in News,opinion https://ift.tt/rl3W9Ac

New Androxgh0st botnet targets vulnerabilities in IoT devices and web applications via Mozi integration

  • Androxgh0st’s integration with Mozi amplifies global risks
  • IoT vulnerabilities are the new battleground for cyberattacks
  • Proactive monitoring is essential to combat emerging botnet threats

Researchers have recently identified a major evolution in the Androxgh0st botnet, which has grown more dangerous with the integration of the Mozi botnet’s capabilities.

What began as a web server-targeted attack in early 2024 has now expanded, allowing Androxgh0st to exploit vulnerabilities in IoT devices, CloudSEK’s Threat Research team has said.

Its latest report claims the botnet is now equipped with Mozi’s advanced techniques for infecting and spreading across a wide range of networked devices.

The resurgence of Mozi: A unified botnet infrastructure

Mozi, previously known for infecting IoT devices like Netgear and D-Link routers, was believed to be inactive following a killswitch activation in 2023.

However, CloudSEK has revealed Androxgh0st has integrated Mozi’s propagation capabilities, significantly amplifying its potential to target IoT devices.

By deploying Mozi’s payloads, Androxgh0st now has a unified botnet infrastructure that leverages specialized tactics to infiltrate IoT networks. This fusion enables the botnet to spread more efficiently through vulnerable devices, including routers and other connected technology, making it a more formidable force.

Beyond its integration with Mozi, Androxgh0st has expanded its range of targeted vulnerabilities, exploiting weaknesses in critical systems. CloudSEK’s analysis shows Androxgh0st is now actively attacking major technologies, including Cisco ASA, Atlassian JIRA, and several PHP frameworks.

In Cisco ASA systems, the botnet exploits cross-site scripting (XSS) vulnerabilities, injecting malicious scripts through unspecified parameters. It also targets Atlassian JIRA with a path traversal vulnerability (CVE-2021-26086), allowing attackers to gain unauthorized access to sensitive files. In PHP frameworks, Androxgh0st exploits older vulnerabilities such as those in Laravel (CVE-2018-15133) and PHPUnit (CVE-2017-9841), facilitating backdoor access to compromised systems.

Androxgh0st’s threat landscape is not limited to older vulnerabilities. It is also capable of exploiting newly discovered vulnerabilities, such as CVE-2023-1389 in TP-Link Archer AX21 firmware, which allows for unauthenticated command execution, and CVE-2024-36401 in GeoServer, a vulnerability that can lead to remote code execution.

The botnet now also uses brute-force credential stuffing, command injection, and file inclusion techniques to compromise systems. By leveraging Mozi’s IoT-focused tactics, it has significantly widened its geographical impact, spreading its infections across regions in Asia, Europe, and beyond.

CloudSEK recommends that organizations strengthen their security posture to mitigate potential attacks. While immediate patching is essential, proactive monitoring of network traffic is also important. By tracking suspicious outbound connections and detecting anomalous login attempts, particularly from IoT devices, organizations can spot early signs of an Androxgh0st-Mozi collaboration.
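As a simplified illustration of that kind of monitoring (a generic sketch, not a CloudSEK tool; the event format and thresholds below are assumptions), a short script can flag hosts that suddenly generate bursts of outbound connections or repeated failed logins:

```python
# Hypothetical example: flag hosts with bursts of outbound connections or
# failed logins in a pre-parsed event log. Field names and thresholds are
# illustrative assumptions, not any specific product's format.
from collections import Counter

events = [
    # (source_ip, event_type) tuples, e.g. parsed from firewall/auth logs
    ("192.168.1.50", "outbound_conn"),
    ("192.168.1.50", "outbound_conn"),
    ("192.168.1.77", "failed_login"),
    ("192.168.1.50", "outbound_conn"),
]

OUTBOUND_THRESHOLD = 3    # assumed per-interval limit for a small IoT device
LOGIN_FAIL_THRESHOLD = 5  # assumed limit for failed logins per interval

outbound = Counter(ip for ip, kind in events if kind == "outbound_conn")
failures = Counter(ip for ip, kind in events if kind == "failed_login")

for ip, count in outbound.items():
    if count >= OUTBOUND_THRESHOLD:
        print(f"[ALERT] {ip}: {count} outbound connections this interval")

for ip, count in failures.items():
    if count >= LOGIN_FAIL_THRESHOLD:
        print(f"[ALERT] {ip}: {count} failed logins this interval")
```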

You might also like



from Latest from TechRadar US in News,opinion https://ift.tt/EvcNFdD

TrueNAS device vulnerabilities exposed during hacking competition

  • TrueNAS recommends hardening systems to mitigate risks
  • Pwn2Own showcases diverse attack vectors on NAS systems
  • Cybersecurity teams earn over $1 million by finding exploits

At the recent Pwn2Own Ireland 2024 event, security researchers identified vulnerabilities in various high-use devices, including network-attached storage (NAS) devices, cameras, and other connected products.

TrueNAS was one of the companies whose products were successfully targeted during the event, with vulnerabilities found in systems running default, non-hardened configurations.

Following the competition, TrueNAS has started implementing updates to secure its products against these newly discovered vulnerabilities.

Security gaps across multiple devices

During the competition, multiple teams successfully exploited TrueNAS Mini X devices, demonstrating the potential for attackers to leverage interconnected vulnerabilities between different network devices. Notably, the Viettel Cyber Security team earned $50,000 and 10 Master of Pwn points by chaining SQL injection and authentication bypass vulnerabilities from a QNAP router to the TrueNAS device.

Furthermore, the Computest Sector 7 team also executed a successful attack by exploiting both a QNAP router and a TrueNAS Mini X using four vulnerabilities. The types of vulnerabilities included command injection, SQL injection, authentication bypass, improper certificate validation, and hardcoded cryptographic keys.

TrueNAS responded to the results by releasing an advisory for its users, acknowledging the vulnerabilities and emphasizing the importance of following security recommendations to protect data storage systems against potential exploits.

By adhering to these guidelines, users can increase their defences, making it harder for attackers to leverage known vulnerabilities.

TrueNAS informed customers that the vulnerabilities affected default, non-hardened installations, meaning that users who follow recommended security practices are already at a reduced risk.

TrueNAS has advised all users to review its security guidance and implement best practices, which can significantly minimize exposure to potential threats until the patches are fully rolled out.

Via SecurityWeek

You might also like



from Latest from TechRadar US in News,opinion https://ift.tt/kb2YD1p

Could this be Dell's fastest laptop ever built? Dell Pro Max 18 Plus set to have 'RTX 5000 class' GPU capabilities and Tandem OLED display


  • First look at Dell Pro Max 18 Plus emerges in new images
  • Pictures show a completely redesigned mobile workstation laptop
  • Pro Max could either replace the popular Precision range or be a whole new range, offering up to 256GB RAM and up to 16TB SSD

Leaked details suggest Dell is developing a new addition to its workstation offerings, designed to deliver high-performance capabilities for professional workloads.

Available in two sizes, the Dell Pro Max 18 Plus is expected to debut officially at CES 2025 and could either replace the popular Precision range or form an entirely new lineup.

The device allegedly features an 18-inch display, while the Pro Max 16 Plus provides a smaller 16-inch alternative with similar specifications. According to information shared by Song1118 on Weibo, which includes Dell marketing slides, the laptops will be powered by Intel’s upcoming Core Ultra 200HX “Arrow Lake-HX” CPUs. For graphics, the series will reportedly feature Nvidia’s Ada-based RTX 5000-class workstation GPUs, though the exact model isn’t named in the leaked documents.

Triple-fan cooling system

The Pro Max series is set to offer up to 200 watts for the CPU/GPU combination in the 18-inch version and 170 watts in the 16-inch model. VideoCardz notes that while we have already seen much higher targets in ultra-high-end gaming machines, “this would be the first laptop confirmed to offer 200W for a next-gen Intel/Nvidia combo.”

The laptops will reportedly support up to 256GB of CAMM2 memory. The 18-inch model can accommodate up to 16TB of storage via four M.2 2280 SSD slots, while the 16-inch version supports 12TB with three slots. The heat generated by these high-power components will be managed by an “industry first” triple-fan cooling system.

Additional features look to include a magnesium alloy body to reduce weight, an 8MP camera, and a tandem OLED display option. Connectivity options include Thunderbolt 5 (80/120Gbps), WiFi 7, Bluetooth 5.4, and optional 5G WWAN. The two laptops also feature a quick-access bottom cover for easy serviceability and repairability of key components like batteries, memory, and storage.

The Dell Pro Max 16/18 Plus laptops are expected to be officially unveiled along with pricing at CES on January 7, 2025, with a mid-2025 release window.

You might also like



from Latest from TechRadar US in News,opinion https://ift.tt/Gka3U1m

Saturday, December 21, 2024

This new compact mini PC can support Intel 12th to 14th Gen processors and up to 96 GB DDR5 RAM

  • Shuttle XH610G2 offers compact design supporting Intel Core processors up to 24 cores
  • Exclusive heat pipe technology ensures reliable operation in demanding environments
  • Flexible storage options include M.2 slots and SATA interfaces

Shuttle has released its latest mini PC, aimed at meeting the diverse demands of modern commercial tasks.

With a small 5-liter chassis and a compact design measuring just 250mm x 200mm x 95mm, the Shuttle XH610G2 employs the Intel H610 chipset, making it compatible with a broad spectrum of Intel Core processors, from the latest 14th Gen models back to the 12th Gen series.

The company says the device is designed to handle applications that require significant computational power like image recognition, 3D video creation, and AI data processing.

Shuttle XH610G2

The Shuttle XH610G2 comes with exclusive heat pipe cooling technology that allows the workstation to operate reliably even in demanding environments: it can withstand temperatures from 0 to 50 degrees Celsius, making it suitable for continuous operation in various commercial settings.

The Shuttle XH610G2 can accommodate Intel Core models with up to 24 cores and a peak clock speed of 5.8GHz. This processing power allows the workstation to handle intensive tasks while staying within a 65W thermal design power (TDP) limit. The graphics are enhanced by the integrated Intel UHD graphics with Xe architecture, offering capabilities to manage demanding visual applications, from high-quality media playback to 4K triple-display setups. The inclusion of dual HDMI 2.0b ports and a DisplayPort output facilitates independent 4K display support.

The XH610G2 offers extensive customization and scalability with support for dual PCIe slots, one x16 and one x1, allowing users to install discrete graphics cards or other high-performance components like video capture cards.

For memory, the XH610G2 supports up to 64GB of DDR5-5600 SO-DIMM memory, split across two slots, making it well suited to resource-intensive applications and complex computational tasks. Running at a low 1.1V, this memory configuration also minimizes energy consumption, which can be a significant advantage in environments conscious of power usage.

In terms of storage, this device features a SATA 6.0Gb/s interface for a 2.5-inch SSD or HDD, along with two M.2 slots for NVMe and SATA storage options. Users are recommended to choose a SATA SSD over a traditional HDD to ensure faster performance.

The I/O options on the XH610G2 further enhance its flexibility, with four USB 3.2 Gen 1 ports, two Ethernet ports (one 1GbE and one 2.5GbE), and an optional RS232 COM port for specialized peripheral connections, which can be particularly useful in industrial or legacy environments.

Furthermore, the compact chassis includes M.2 expansion slots for both WLAN and LTE adapters, providing options for wireless connectivity that can be critical in setups where wired connections are not feasible.

You might also like



from Latest from TechRadar US in News,opinion https://ift.tt/oQXtzAa

CAMM2 memory modules promise significant advancements in memory technology with impressive read and write speeds

  • TeamGroup claims CAMM2 memory promises high-speed DDR5 performance
  • Revolutionary design offers dual-channel operation in a single module
  • Limited motherboard compatibility poses challenges for CAMM2 adoption

TeamGroup has introduced its Compression Attached Memory Module 2 (CAMM2), promising high-speed DDR5 performance with its new T-Create lineup.

The company says CAMM2 features a revolutionary design that offers significant advantages over traditional memory types like SO-DIMM, U-DIMM, and R-DIMM. It supports dual-channel operation with just one module, streamlining system architecture and lowering power consumption.

The built-in Client Clock Driver (CKD) boosts signal integrity, making CAMM2 well-suited for slim notebooks while its optimized thermal design enhances heat dissipation, allowing higher performance despite the smaller form factor.

CAMM2-compatible motherboards are very scarce

The T-Create CAMM2 modules are designed with DDR5-7200 specifications and a CAS latency of CL34-42-42-84, delivering remarkable read, write, and copy speeds of up to 117GB/s, 108GB/s, and 106GB/s, respectively.

This performance is achieved through manual overclocking, which has driven latency down to 55ns, a significant reduction compared to typical DDR5 JEDEC specifications. TeamGroup is now focused on pushing boundaries and the company says it is working to achieve even faster speeds, aiming to reach DDR5-8000 and even DDR5-9000 in future iterations.
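Those read figures sit almost exactly at the theoretical ceiling of a dual-channel DDR5-7200 interface. A quick calculation, assuming the standard 64-bit data path per DDR5 channel, shows why, and why the reported 117GB/s read result implies the modules were nudged slightly beyond the stock transfer rate:

```python
# Theoretical peak bandwidth of a dual-channel DDR5-7200 configuration,
# assuming the standard 64-bit (8-byte) data path per DDR5 channel.
transfer_rate_mts = 7200      # mega-transfers per second (DDR5-7200)
bytes_per_transfer = 8        # 64-bit channel
channels = 2                  # CAMM2 runs dual-channel from a single module

peak_gb_s = transfer_rate_mts * bytes_per_transfer * channels / 1000
print(f"Theoretical peak: {peak_gb_s:.1f} GB/s")  # 115.2 GB/s

# The reported 117 GB/s read result sits just above this stock-spec ceiling,
# consistent with the modules being manually overclocked past DDR5-7200.
```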

One major setback for TeamGroup lies in the availability of CAMM2-compatible motherboards, which are currently limited. The T-Create CAMM2 memory was tested on MSI’s Z790 Project Zero, one of the few boards currently compatible with this new form factor.

Other brands, such as Gigabyte, hint at possible CAMM2-enabled designs, like an upcoming TACHYON board. However, the CAMM2 ecosystem is still emerging, and widespread adoption may depend on the release of more compatible boards and competitive pricing.

Nevertheless, TeamGroup expects to launch the first-generation T-Create CAMM2 modules by Q1 2025, with broader motherboard support potentially arriving as manufacturers introduce new CPU platforms. With AMD and Intel rumoured to announce budget-friendly CPUs at CES 2025, the rollout of mid-range boards compatible with CAMM2 could align with TeamGroup’s release plans, potentially helping CAMM2 secure a foothold in the market.

CAMM2 offers a couple of advantages over the widely used SO-DIMM, UDIMM, and RDIMM standards. Notably, CAMM2 modules operate in dual-channel mode while only occupying a single physical slot. Furthermore, they incorporate a Client Clock Driver (CKD), similar to CUDIMM memory, which bolsters signal integrity at high speeds, allowing for more reliable and faster memory performance.

These features make CAMM2 particularly appealing for laptops, which often face limitations with current SO-DIMM speeds or non-upgradeable LPDDR5/5X options.

Via Tom's Hardware

You might also like



from Latest from TechRadar US in News,opinion https://ift.tt/jS413Wy

We may have to wait longer for the OnePlus Open 2 than we thought


  • We might not see the OnePlus Open 2 until later in 2025
  • Previous leaks predicted a Q1 2025 launch
  • Major upgrades have been rumored for the foldable

A quick browse through our OnePlus Open review will tell you why we're very much looking forward to the foldable phone's successor – though if a new leak is to be believed, the wait for the OnePlus Open 2 might be longer than originally thought.

According to tipster Sanju Choudhary (via GSMArena), the handset is going to break cover during the second half of next year – anytime from July onwards. That contradicts an earlier rumor that it would be unveiled in the first three months of 2025.

There's no indication whether or not OnePlus has changed its plans, or if the launch date was originally set for the first quarter of next year and has since been pushed back (engineering foldable phones is a tricky challenge, after all).

It's also fair to say that none of these rumors can be confirmed until OnePlus actually makes its announcement. The original OnePlus Open was launched in October 2023, which doesn't really tell us much about a schedule for its successor.

Upgrades on the way

Whenever the next OnePlus folding phone shows up, it sounds like it's going to be worth the wait – which has lasted 14 months and counting. Rumors have pointed to major upgrades in terms of the rear camera and the internal components.

We've also heard that the OnePlus Open 2 will have the biggest battery ever seen in a foldable, as well as being thinner and more waterproof than the handset it's replacing. That's a significant number of improvements to look forward to.

In our OnePlus Open review, we described the phone as "the only foldable phone that doesn't compromise", and there was particular praise for the design and the camera setup – so the upcoming upgrade has a lot to live up to.

Before we see another foldable from OnePlus, we'll see the OnePlus 13 and the OnePlus 13R made available worldwide: OnePlus has confirmed this is happening on January 7, so we could also get a teaser for the OnePlus Open 2 at the same time.

You might also like



from Latest from TechRadar US in News,opinion https://ift.tt/2P0qQkI

Friday, December 20, 2024

Only 15% of Steam users have played games released in 2024, but why?

  • 15% of Steam users' playtime dedicated to 2024 games
  • 47% of playtime on games up to eight years old
  • Many reasons for this, including more older games to play

Steam’s end-of-the-year review has always revealed some fascinating PC gaming trends, and this year’s is no exception. According to 2024’s stats, only 15% of total playtime on Steam went to games that launched in 2024.

Looking further at the data that PC Gamer reports on, 47% of total playtime on Steam was spent on games released in the last seven years, while 37% was spent on games that launched eight or more years ago. Now the question is: why, and what does this mean?

One possible explanation is that gamers could be focusing more on their backlogs rather than new releases. We do know that playtime for current releases is higher this year than in 2023, as there was an increase from 9% to 15%, which means players are buying new titles at least. There are other possibilities for this trend as well.

Other possibilities for this statistic

One reason could be that older games are easier to access due to their cheaper prices, especially due to the many Steam sales. There’s also the influence of the Steam Deck and what’s considered ‘Steam Deck playable,’ since many recent AAA games may be too demanding for a portable PC.

There’s also the fact that older live service games like Counter-Strike, Dota 2, and PUBG have made up Steam's Most Played charts, while newer titles have an incredibly difficult time breaking through and building a player base.

Another reason is that Steam has over 200,000 titles released over the course of decades, compared to the relatively paltry 18,000 games released in 2024 according to SteamDB. So naturally, more users will spend more time playing older games versus recent ones.

Regardless, 15% of playtime going to new games is respectable - not far off 2022’s 17% - and it shows the numbers are recovering after the sharp dip in 2023. Hopefully next year we’ll see another increase, as gamers delve into more new titles.

You might also like...



from Latest from TechRadar US in News,opinion https://ift.tt/gnqc8NF

12 Days of OpenAI ends with a new model for the new year


  • OpenAI announced upcoming o3 and o3-mini AI models.
  • The new models are enhanced "reasoning" AI models that build on the o1 and o1-mini models released this year.
  • Both models handily outperform existing AI models and will roll out in the next few months.

The final day of the 12 Days of OpenAI brought back OpenAI CEO Sam Altman to show off a brand new set of AI models coming in the new year. The o3 and o3-mini models are enhanced versions of the relatively new o1 and o1-mini models. They're designed to think before they speak, reasoning out their answers. The mini version is smaller and aimed more at carrying out a limited set of specific tasks, but with the same approach.

OpenAI is calling it a big step toward artificial general intelligence (AGI), which is a pretty bold claim for what is, in some ways, a mild improvement to an already powerful model. You might have noticed there's a number missing between the current o1 and the upcoming o3 model. According to Altman, that's because OpenAI wants to avoid any confusion with British telecom company O2.

So, what makes o3 special? Unlike regular AI models that spit out answers quickly, o3 takes a beat to reason things out. This “private chain of thought” lets the model fact-check itself before responding, which helps it avoid some of the classic AI pitfalls, like confidently spewing out wrong answers. This extra thinking time can make o3 slower, even if only a little bit, but the payoff is better accuracy, especially in areas like math, science, and coding.

One great aspect of the new models is that you can adjust that extra thinking time manually. If you’re in a hurry, you can set it to “low compute” for quick responses. But if you want top-notch reasoning, crank it up to “high compute” and give it a little more time to mull things over. In tests, o3 has easily outstripped its predecessor.

This is not quite AGI; o3 can't take over for humans in every way. It also does not reach OpenAI's definition of AGI, which describes models that outperform humans at most economically valuable work. Still, should OpenAI reach that goal, things get interesting for its partnership with Microsoft, since that would end OpenAI's obligation to give Microsoft exclusive access to its most advanced AI models.

New year, new models

Right now, o3 and its mini counterpart aren’t available to everyone. OpenAI is giving safety researchers a sneak peek via Copilot Labs, and the rest of us can expect the o3-mini model to drop in late January, with the full o3 following soon after. It’s a careful, measured rollout, which makes sense given the kind of power and complexity we’re talking about here.

Still, o3 gives us a glimpse of where things are headed: AI that doesn’t just generate content but actually thinks through problems. Whether it gets us to AGI or not, it’s clear that smarter, reasoning-driven AI is the next frontier. For now, we’ll just have to wait and see if o3 lives up to the hype or if this last gift from OpenAI is just a disguised lump of coal.

You might also like



from Latest from TechRadar US in News,opinion https://ift.tt/6TFN98d

Google Chrome is testing a new AI tool that scans for scams to help save you from online trickery


  • Google Chrome is testing a new AI-powered scam detection feature
  • It seemingly uses an on-device Large Language Model (LLM) to maintain user privacy
  • AI-driven safety tools, including scam detection, help to fight the rise of AI-powered threats online

The world’s most popular browser, Google Chrome, is experimenting with a new AI-powered tool designed to help you avoid online scams.

The feature is currently being tested and apparently uses AI tech, specifically a Large Language Model (LLM) on the device, to analyze web pages and determine if they seem suspicious or scam-related.

This development was spotted by Leopeva64 on X, who regularly highlights web browser features which are being tested. What they actually discovered was a flag that can be enabled called ‘Client Side Detection Brand and Intent for Scam Detection,’ which is present in the latest version of Chrome’s experimental browser, Canary.

The new flag leverages an on-device LLM to investigate the content of any given web page and figure out what it's trying to do, and whether that content falls in line with the website’s supposed purpose or brand.

This is explained in the flag’s description, which reads: “Enables on-device LLM (large language model) output on pages to inquire for brand and intent of the page.”

A scammer working on a laptop

(Image credit: Robinraj Premchand from Pixabay)

On device is key to privacy

One key detail about this process is that it uses an on-device LLM, which means that the analysis of web pages happens on your device (as opposed to in the cloud somewhere, which would involve sending your browsing data to a third-party). In short, this means your data will stay private.
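Google hasn't published how the Chrome feature is implemented, but conceptually the flag describes feeding page text to a local model and asking what brand the page claims to represent and what it wants the visitor to do, then comparing that against the site itself. A rough sketch of that idea using a locally hosted model via llama-cpp-python (the model file, prompt wording, and mismatch check are illustrative assumptions, not Chrome's actual code):

```python
# Illustrative only: this is NOT Chrome's code, just a sketch of on-device
# "brand and intent" analysis using a local LLM via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="local-model.gguf", n_ctx=2048)  # hypothetical local model file

def analyze_page(page_text: str, page_domain: str) -> str:
    prompt = (
        "Read the following web page text and answer in two short lines:\n"
        "Brand: <which brand the page claims to represent>\n"
        "Intent: <what the page asks the visitor to do>\n\n"
        f"Page text:\n{page_text[:1500]}\n"
    )
    result = llm(prompt, max_tokens=64)
    answer = result["choices"][0]["text"]
    # A real system would compare the claimed brand against the actual domain
    # (page_domain) and flag mismatches, e.g. a "bank" page on an odd domain.
    return answer

print(analyze_page(
    "Your PayPal account is locked. Enter your password to unlock.",
    "paypa1-security.example",
))
```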

To try this feature out, you would have to install the latest Google Chrome Canary release, which is not something I’d generally recommend, unless you’re really keen (if so, you can follow Neowin’s advice on how to enable the new flag).

This is the latest in a series of AI-powered tools coming to Chrome, which also includes a ‘Store reviews’ feature that’s currently in testing. This capability uses AI to summarize reviews from platforms like Trustpilot or ScamAdvisor, helping users quickly check if an online store is reliable.

As ever, we don’t know if features in testing will make it through to release, but it’s likely these will - Google is keen on building out AI powers for its browser, and I expect we’ll see this scam warning system rolled out before too long. Unless the Google Chrome team finds some good reasons to go back to the drawing board.

Even though Chrome is the dominant web browser by a long way, Google shouldn’t rest on its laurels, and I think it’s very savvy of the company to keep improving its browser to stay in pole position. And with scammers and hostile actors now having AI-powered tools at their disposal, it’s good to see Google (hopefully) bringing LLMs in to help defend Chrome users from the unwanted attention of these nefarious types.

YOU MIGHT ALSO LIKE...



from Latest from TechRadar US in News,opinion https://ift.tt/l5U2H0Q

Thursday, December 19, 2024

Microsoft Teams and AnyDesk abused to deploy dangerous malware, so be on your guard


  • Criminals are reaching out to victims, offering to help with a "problem"
  • To fix the issue, they request AnyDesk access
  • If they get it, they drop the DarkGate malware and steal sensitive data

Cybercriminals are combining Microsoft Teams and AnyDesk to try and install a dangerous piece of malware on their target’s devices, experts have warned.

A report from Trend Micro, which claims to have recently observed one such attack in the wild, notes how the attackers would first send thousands of spam emails to their targets, and then reach out via Microsoft Teams, impersonating an employee of an external supplier.

Offering help with the problem, the attackers would instruct the victim to install a Microsoft Remote Support application. If that failed, they would try the same with AnyDesk. If successful, the attackers would use the access to deliver multiple payloads, including a piece of malware called DarkGate.

Abusing legitimate tools

DarkGate is a highly versatile malware that can act as a backdoor on infected systems, allowing attackers to execute commands remotely. It can install additional payloads, and exfiltrate sensitive data without being detected. Data of high value includes login credentials, personally identifiable information, or data on clients, customers, and business partners.

One of its notable features is its modular design, allowing attackers to modify the malware’s functionality. So, in one scenario it can act as an infostealer, and in another, as a dropper.

The attack was blocked before doing any meaningful damage, but the researchers used it as an opportunity to warn businesses of the constant threat that lurks on the internet.

Organizations need to train their employees to spot phishing and social engineering attacks, deploy multi-factor authentication (MFA) wherever possible, and put as much of their infrastructure behind a VPN as possible. Furthermore, they should keep both software and hardware up to date, and keep in mind end-of-life dates for critical equipment.

Ultimately, they should use common sense and not fall for obvious scam attempts that are running rampant on the internet.

Via The Hacker News

You might also like



from TechRadar - All the latest technology news https://ift.tt/rBtlPCi

Fake DocuSign and HubSpot phishing emails target 20,000 Microsoft Azure accounts


  • Unit 42 says phishing campaign targeted automotive, chemical, and industrial compound manufacturing industries
  • More than 20,000 victims were successfully targeted
  • The campaign has been disrupted, but users should still be on their guard

Hackers of potentially Russian or Ukrainian origin have been targeting UK and EU organizations in the automotive, chemical, and industrial compound manufacturing industries with advanced phishing threats, experts have warned.

A report from Unit 42, Palo Alto Networks’ cybersecurity arm, claims to have observed a campaign that started in June 2024, and was still active as of September. The goal of the campaign was to grab people’s Microsoft Azure cloud accounts, and steal any sensitive information found there.

The crooks would either send a Docusign-enabled PDF file, or an embedded HTML link, which would redirect the victims to a HubSpot Free Form Builder link. That link would usually invite the reader to “View Document on Microsoft Secured Cloud,” where the victims would be asked to provide their Microsoft Azure login credentials.

Bulletproof hosting

The majority of the victims are located in Europe (mostly Germany), and the UK. Roughly 20,000 users were “successfully targeted”, the researchers said, adding that at least in a few cases, the victims provided the attackers with login credentials: "We verified that the phishing campaign did make several attempts to connect to the victims' Microsoft Azure cloud infrastructure," the researchers said in their writeup.

Besides using custom phishing lures, with organization-specific branding and email formats, the crooks also went for targeted redirections using URLs designed to look like the victim organization’s domain. Furthermore, the miscreants used bulletproof VPS hosts, and reused their phishing infrastructure for multiple operations. Most of the phishing pages were hosted on .buzz domains.

At press time, most of the attack infrastructure was pulled offline - Unit 42 said it worked together with HubSpot to address the abuse of the platform, and engaged with compromised organizations to provide recovery resources. Since most phishing servers are now offline, the researchers said the disruption efforts were effective.

Via The Register

You might also like



from TechRadar - All the latest technology news https://ift.tt/D5ZrmIB

Apple set to build a server chip to service its own AI and may have sacrificed the company's fastest ever chip to achieve this; report suggests a strategic tie-in with $850bn Broadcom


  • Apple developing "Baltra" server chip for AI, targeting 2026 production
  • Israeli silicon team leading project; Mac chip canceled for focus
  • Broadcom collaboration and TSMC’s N3P tech to enhance development

Apple is reportedly developing its first server chip tailored specifically for artificial intelligence.

A paywalled report by Wayne Ma and Qianer Liu in The Information claims the project, codenamed “Baltra,” aims to address the growing computational demands of AI-driven features and is expected to enter mass production by 2026.

Apple’s silicon design team in Israel, which was responsible for designing the processors that replaced Intel chips in Macs in 2020, is now leading the development of the AI processor, according to sources. To support this effort, Apple has reportedly canceled the development of a high-performance Mac chip made up of four smaller chips stitched together.

Central to Apple’s efforts

The report notes this decision, made over the summer, is intended to free up engineers in Israel to focus on Baltra, signaling Apple’s shift in priorities toward AI hardware.

Apple is working with semiconductor giant Broadcom on this project, using the company’s advanced networking technologies needed for AI processing. While Apple usually designs its chips in-house, Broadcom’s role is expected to focus on networking solutions, marking a new direction in their partnership.

To make the AI chip, The Information says Apple plans to use TSMC’s advanced N3P process, an upgrade from the technology behind its latest processors, like the M4. This move highlights Apple’s focus on enhancing performance and efficiency in its chip designs.

The Baltra chip is expected to drive Apple’s efforts to integrate AI more deeply into its ecosystem. By leveraging Broadcom’s networking expertise and TSMC's advanced manufacturing techniques, Apple appears determined to catch up to rivals in the AI space and establish a stronger presence in the industry.

In November 2024, we reported that Apple approached its long-time manufacturing partner Foxconn to build AI servers in Taiwan. These servers, using Apple’s M-series chips, are intended to support Apple Intelligence features in iPhones, iPads, and MacBooks.

You might also like



from TechRadar - All the latest technology news https://ift.tt/QrSlbJF

Wednesday, December 18, 2024

Microsoft’s PC accessories are back, this time with a stylish helping hand from Incase

  • Microsoft revives PC accessories with Incase, rebranding classics like the Modern Mobile Mouse and keyboards.
  • Affordable, ergonomic, and wireless options headline the new lineup, starting with 23 versatile accessories.
  • Surface remains Microsoft's premium gear focus, blending cutting-edge tech with its signature Surface branding.

Some people might not know that until pretty recently, Microsoft made computer accessories - and it looks like the company is dipping its toe in again. Microsoft actually has a considerable history of creating PC accessories, from ergonomic keyboards to high-precision mice, and after discontinuing its own brand of accessories last year, it has partnered with Incase to bring some of them back.

Incase put out a post announcing the partnership starting in 2024, promising to combine both companies’ expertise to bring you 23 computer accessories to start with and possibly more to come. You can get products that some might recognize, such as the Modern Mobile Mouse or Sculpt Ergonomic Keyboard, but now with the Incase logo and branding.

One of the Incase keyboards that was designed in collaboration with Microsoft next to other everyday objects

(Image credit: Incase)

What Incase and Microsoft have to offer

In practical terms, these accessories will work just as well as the originals and they come at great prices that won’t make you jump out of your seat. For example, the $24.99 Mobile Mouse 1850 is a lightweight, reliable wireless mouse that’s perfect for everyday tasks, while the $39.99 Modern Mobile Mouse offers a sleeker design with better performance for on-the-go professionals. This new lineup also includes keyboards that are wireless, ergonomic, and compact, along with headsets and a webcam.

While Microsoft has pretty much entirely left the PC accessory market, its Surface range includes Surface-specific gear, like the Surface Desktop Keyboard with its AI-powered Copilot+ key, which shows off Microsoft’s commitment to its premium Surface lineup. So, whether you’re looking for dependable classics under the new “Incase Designed by Microsoft” label or cutting-edge tech under the Surface brand, Microsoft has something for everyone.

Those who are familiar with Microsoft’s computer accessories will probably welcome this announcement. While some have complaints about products like Microsoft 365, Edge, and, of course, Windows, it is still a highly trusted company, and with Incase’s collaboration efforts, I think these will be pretty decent quality for the price.

YOU MIGHT ALSO LIKE...



from TechRadar - All the latest technology news https://ift.tt/FPWUO28

Huawei looks set to launch a new server chip with HBM technology to challenge Xeon and Epyc; yes, that's the same memory powering AI GPUs from Nvidia and AMD


  • Huawei may be adding HBM support to Kunpeng SoC
  • Clues hint at a replacement for the Kunpeng 920, launched in 2019
  • New SoC with HBM may target HPC, server market rivals

Huawei engineers have reportedly released new Linux patches to enable driver support for High Bandwidth Memory (HBM) management on the company’s ARM-based Kunpeng high-performance SoC.

The Kunpeng 920, which debuted in January 2019 as the company’s first server CPU, is a 7nm processor featuring up to 64 cores based on the Armv8.2 architecture. It supports eight DDR4 memory channels and has a thermal design power (TDP) of up to 180W. While these specifications were competitive when first introduced, things have moved on significantly since.

Introducing a new Kunpeng SoC with integrated HBM would align with industry trends as companies seek to boost memory bandwidth and performance in response to increasingly demanding workloads. It could also signal Huawei’s efforts to maintain competitiveness in the HPC and server markets dominated by Intel Xeon and AMD EPYC.

No official announcement... yet

Phoronix’s Michael Larabel notes that Huawei has not yet formally announced a new Kunpeng SoC (with or without HBM), and references to it are sparse. Kernel patches, however, have previously indicated work on integrating HBM into the platform.

The latest patches specifically address power control for HBM devices on the Kunpeng SoC, introducing the ability to power on or off HBM caches depending on workload requirements.

The patch series includes detailed descriptions of this functionality. Huawei explains that HBM offers higher bandwidth but consumes more power. The proposed drivers will allow users to manage HBM power consumption, optimizing energy use for workloads that do not require high memory bandwidth.

The patches also introduce a driver for HBM cache, enabling user-space control over this feature. By using HBM as a cache, operating systems can leverage its bandwidth benefits without needing direct awareness of the cache’s presence. When workloads are less demanding, the cache can be powered down to save energy.
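The patches describe kernel-side plumbing rather than a finished tool, so no user-space interface has been published yet. Purely as a hypothetical illustration of how such a control is typically exposed on Linux (the sysfs path and values below are invented for illustration, not taken from the Huawei patches), toggling the HBM cache around a bandwidth-light job might look like this:

```python
# Hypothetical illustration only: the sysfs path and accepted values are
# assumptions, not the interface defined by the Huawei kernel patches.
import subprocess

HBM_CACHE_CTRL = "/sys/devices/platform/hbm_cache0/power_state"  # invented path

def set_hbm_cache(enabled: bool) -> None:
    # Writing to sysfs requires root privileges on a real system.
    with open(HBM_CACHE_CTRL, "w") as f:
        f.write("1" if enabled else "0")

set_hbm_cache(False)                        # power the HBM cache down...
subprocess.run(["./bandwidth_light_job"])   # ...while a light workload runs
set_hbm_cache(True)                         # then restore full caching
```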

While we don't have any concrete details on future Kunpeng SoCs, integrating HBM could potentially allow them to compete more effectively against other ARM-based server processors, as well as Intel’s latest Xeon and AMD EPYC offerings.

You might also like



from TechRadar - All the latest technology news https://ift.tt/bl5TkEi

Tuesday, December 17, 2024

Slim-Llama is an LLM ASIC processor that can tackle 3-billion parameters while sipping only 4.69mW - and we'll find out more about this potential AI game changer very soon


  • Slim-Llama reduces power needs using binary/ternary quantization
  • Achieves 4.59x efficiency boost, consuming 4.69–82.07mW at scale
  • Supports 3B-parameter models with 489ms latency, enabling efficiency

Traditional large language models (LLMs) often suffer from excessive power demands due to frequent external memory access - however, researchers at the Korea Advanced Institute of Science and Technology (KAIST) have now developed Slim-Llama, an ASIC designed to address this issue through clever quantization and data management.

Slim-Llama employs binary/ternary quantization which reduces the precision of model weights to just 1 or 2 bits, significantly lowering the computational and memory requirements.

To further improve efficiency, it integrates a Sparsity-aware Look-up Table, improving sparse data handling and reducing unnecessary computations. The design also incorporates an output reuse scheme and index vector reordering, minimizing redundant operations and improving data flow efficiency.
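As a rough illustration of the quantization step (a common ternary-weight scheme, not necessarily the exact method KAIST used), each weight can be mapped to -1, 0, or +1 plus a single per-tensor scale; the zeros this produces are precisely the sparsity that a look-up-table approach like Slim-Llama's can skip over:

```python
import numpy as np

def ternary_quantize(w: np.ndarray, threshold_ratio: float = 0.7):
    """Map weights to {-1, 0, +1} with a per-tensor scale.

    A simple, common scheme: values whose magnitude falls below a threshold
    become 0 (creating sparsity), while the rest keep only their sign.
    This illustrates the idea, not KAIST's exact method.
    """
    delta = threshold_ratio * np.mean(np.abs(w))            # magnitude threshold
    q = np.sign(w) * (np.abs(w) > delta)                    # codes in {-1, 0, 1}
    mask = q != 0
    scale = np.mean(np.abs(w[mask])) if mask.any() else 0.0 # per-tensor scale
    return q.astype(np.int8), scale

w = np.random.randn(4, 8).astype(np.float32)
q, scale = ternary_quantize(w)
w_hat = scale * q                                           # dequantized approximation
print("sparsity:", float((q == 0).mean()))
print("reconstruction error:", float(np.mean((w - w_hat) ** 2)))
```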

Reduced dependency on external memory

According to the team, the technology demonstrates a 4.59x improvement in benchmark energy efficiency compared to previous state-of-the-art solutions.

Slim-Llama achieves system power consumption as low as 4.69mW at 25MHz and scales to 82.07mW at 200MHz, maintaining impressive energy efficiency even at higher frequencies. It is capable of delivering peak performance of up to 4.92 TOPS at 1.31 TOPS/W, further showcasing its efficiency.

The chip features a total die area of 20.25mm², utilizing Samsung’s 28nm CMOS technology. With 500KB of on-chip SRAM, Slim-Llama reduces dependency on external memory, significantly cutting energy costs associated with data movement. The system supports external bandwidth of 1.6GB/s at 200MHz, promising smooth data handling.

Slim-Llama supports models like Llama 1bit and Llama 1.5bit, with up to 3 billion parameters, and KAIST says it delivers benchmark performance that meets the demands of modern AI applications. With a latency of 489ms for the Llama 1bit model, Slim-Llama demonstrates both efficiency and performance, making it the first ASIC to run billion-parameter models with such low power consumption.

Although it's early days, this breakthrough in energy-efficient computing could potentially pave the way for more sustainable and accessible AI hardware solutions, catering to the growing demand for efficient LLM deployment. The KAIST team is set to reveal more about Slim-Llama at the 2025 IEEE International Solid-State Circuits Conference in San Francisco on Wednesday, February 19.

You might also like



from TechRadar - All the latest technology news https://ift.tt/2KARpGY

Monday, December 16, 2024

ChatGPT brings its conversational search engine to everyone

Day eight of the 12 Days of OpenAI was shorter than the previous days by several minutes, but the brevity fits with the ChatGPT Search news OpenAI CPO Kevin Weil and his team unveiled. Unlike the Projects feature unveiled on Friday, most people using the Internet understand the concept of searching for things online.

Still, it wasn't without some exciting news for ChatGPT users, especially those not paying for a subscription. Only ChatGPT Plus subscribers had access to the search feature when it launched as a beta a few months ago, but now it's universally accessible if you log in to your account.

ChatGPT Search

(Image credit: Future)

And it's not just the same ChatGPT Search that subscribers have played with until now. OpenAI claims the search performs better and more accurately than before. And, when you ask a question, the AI will decide if it needs to pull fresh data from the web or answer based on what it already knows. You get results with web previews, images, and even videos that play right in the chat, which might put an end to tab-hopping.

ChatGPT mobile app users will also notice that the search feature integrates more smoothly into Android and iOS. The iOS version even links with Apple Maps to provide directions. Furthermore, ChatGPT Search now works with voice mode on the mobile app, so you can get the AI to search online without typing.

ChatGPT Search Integrated into Mobile App

(Image credit: Future)

Search AI

Say, for instance, you’re in the mood for sushi. You can use ChatGPT as your local guide and ask, “Where’s a good sushi spot nearby?” ChatGPT will give you options, complete with photos, links, and directions, linking to Apple Maps on iOS. Because ChatGPT looks up recent information online, it can even work for seasonal outlets.

Ask, “What time does the Christmas market close?” and ChatGPT will fetch up-to-date hours and details so you’re not left out in the cold. Or if you're wrapped in a blanket on the couch, you can ask, “What’s a good comedy movie on Netflix?” and even watch the trailer directly in the chat.

Thanks to the voice mode connection, the search can be done hands-free. So, if you’re hands-deep in a cooking project and need a quick recipe or measurement conversion, just ask ChatGPT out loud. It’ll give you answers while you stir the pot.

ChatGPT Search may not immediately replace the classic search engines, but its conversational style brings something fresh. Of course, OpenAI isn't the only one pursuing AI-powered search: it's the main draw of tools like Perplexity, and Claude and, naturally, Google Gemini have their own variations as well.

Still, ChatGPT Search is a solid addition to the 12 Days of OpenAI, which promises a more developer-focused announcement tomorrow. We will see if the company can search for anything more exciting to close out the rest of the event this week.

You might also like



from TechRadar - All the latest technology news https://ift.tt/TcsrAny

BADBOX malware hits 30,000 Android devices - make sure you update now


  • BADBOX most likely originates from China
  • The malware can run ad fraud, residential proxies, and more malicious activity
  • The network was recently disrupted by German authorities

German authorities have managed to disrupt a major malware operation that affected thousands of Android devices across the country.

The Federal Office of Information Security (BSI) said BADBOX came preloaded on Android devices with older firmware, which were essentially sold as infected.

Some 30,000 devices across the country were compromised, the agency added, with digital picture frames, media players, and streaming devices being the most common endpoints - however, some smartphones and tablet devices were possibly infected as well.

Outdated Android devices

"What all of these devices have in common is that they have outdated Android versions and were delivered with pre-installed malware," the BSI said in a press release.

The agency outlined how BADBOX was capable of carrying out a number of malicious activities.

Mostly, it was built to silently create new accounts for email and messaging services, which were later used to spread fake news, misinformation, and propaganda. BADBOX was also designed to open websites in the background so they would count as ad views - a practice known as ad fraud.

Furthermore, the malware was able to act as a residential proxy service, lending the victims' internet traffic to malicious third parties for various illegal activities. Finally, BADBOX could also be used as a loader, dropping additional malware on the devices.

The operation was reportedly first documented by HUMAN’s Satori Threat Intelligence team more than a year ago, which said it most likely originates from China. The same threat actors also allegedly operate an ad fraud botnet called PEACHPIT, designed to spoof popular Android and iOS apps and drive its own traffic from the BADBOX network.

"This complete loop of ad fraud means they were making money from the fake ad impressions on their own fraudulent, spoofed apps," HUMAN said at the time. "Anyone can accidentally buy a BADBOX device online without ever knowing it was fake, plugging it in, and unknowingly opening this backdoor malware."

Via The Hacker News

You might also like



from TechRadar - All the latest technology news https://ift.tt/VwY91gj

Uh-oh: Plucky French startup takes on Apple with cheap upgradable storage for the Mac Studio, but installing it will definitely void your warranty


  • Polysoft offers SSD upgrades for Mac Studio at significantly lower prices
  • StudioDrive features overvoltage protection and durable components
  • Offered in 2TB, 4TB, and 8TB capacities, shipping next year

Apple introduced the Mac Studio in 2022 with the M1 chip, followed by the M2 model in 2023, and although these compact powerhouses have been lauded for their performance, buyers have rightly expressed concerns about the limited base SSD configurations and the absence of post-purchase upgrade options.

External USB-C or Thunderbolt SSDs are a common workaround for users seeking additional storage, but they don't match the speed and convenience of internal storage solutions.

Stepping in to address this gap, French company Polysoft has created the first publicly available SSD upgrade solution for Apple Silicon devices. Offered at a fraction of Apple’s prices, these SSD modules are the result of an extensive reverse-engineering process.

Better than Apple

Unlike SSDs used in PCs, Apple’s storage modules are challenging to replicate due to their integration with the M1 and M2 chips, where the storage controller resides.

Polysoft’s efforts included detailed disassembly, component analysis, and redesign, culminating in the StudioDrive SSD which is set to launch next year following a successful Kickstarter campaign.

Polysoft claims its SSDs not only replicate Apple’s modules but also improve on them.

A key difference is the inclusion of "RIROP" (Rossmann Is Right Overvoltage Protection), a safeguard inspired by Louis Rossmann’s work on hardware reliability. This feature reportedly protects against voltage surges, reducing the risk of catastrophic data loss due to hardware failure.

The StudioDrive product line supports both M1 and M2 Mac Studio models. It includes blank boards for enthusiasts and pre-configured options in 2TB, 4TB, and 8TB capacities. Polysoft says that the modules use high-quality Kioxia and Hynix TLC NAND, offering performance and durability comparable to Apple’s original storage solutions. The drives are backed by a five-year warranty and rated for an endurance of up to 14,000 TBW (terabytes written).
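For context, a quick back-of-the-envelope calculation turns that endurance rating into drive writes per day (DWPD) over the five-year warranty. It assumes the 14,000 TBW figure applies to the 8TB module, which Polysoft hasn't broken down per capacity, so treat the result as illustrative.

```python
# Back-of-the-envelope endurance maths for the StudioDrive figures quoted above.
# Assumption: the 14,000 TBW rating applies to the 8TB module (not confirmed per capacity).
TBW = 14_000            # total terabytes written over the drive's rated life
capacity_tb = 8         # assumed module capacity in TB
warranty_years = 5
warranty_days = warranty_years * 365

tb_per_day = TBW / warranty_days        # ~7.7 TB of writes per day
dwpd = tb_per_day / capacity_tb         # ~0.96 drive writes per day

print(f"Allowed writes per day: {tb_per_day:.1f} TB")
print(f"Drive writes per day (DWPD): {dwpd:.2f}")
```

Roughly one full drive write per day for five years is comfortably beyond a typical desktop workload.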

Pricing starts at €399 ($419) for 2TB, €799 ($839) for 4TB, and €1,099 ($1,155) for 8TB. While these upgrades will no doubt be viewed as an affordable and welcome solution by many Mac Studio owners, users should be aware that installing third-party storage will void Apple’s warranty.

You might also like



from TechRadar - All the latest technology news https://ift.tt/3MSazOl

Sunday, December 15, 2024

Ransomware defenses are being weakened by outdated backup technology, limited backup data encryption, and failed data backups


  • Ransomware attacks often now target backup data directly, experts warn
  • Zero Trust principles are key to data protection
  • 59% of organizations experienced ransomware attacks in 2023

Ransomware attacks have increasingly become a top concern for businesses worldwide, targeting organizations of all sizes and industries.

Recent research by Object First has highlighted key vulnerabilities and the growing importance of modern backup technologies in combating ransomware threats.

The survey revealed many businesses are still using outdated technologies that leave their backup data vulnerable to attack, suggesting they are not yet adequately prepared to fend off modern ransomware attacks.

The state of backup security

Backup data is becoming a prime target for cybercriminals, so organizations need to rethink their backup security practices and adopt more resilient, ransomware-proof solutions.

The report revealed that over a third (34%) of respondents pointed to outdated backup systems as a major weakness, making them easier targets for ransomware attackers, while 31% cited a lack of backup data encryption, which leaves sensitive data unable to be stored and transferred securely.

In addition, failed data backups were identified by 28% of respondents as another key vulnerability. These failures leave organizations unable to restore their systems after an attack, often resulting in lengthy downtimes and expensive recovery processes.

More troubling is the finding that ransomware attacks are increasingly targeting backup data directly. Normally, backups are considered a last line of defense in the event of an attack. However, with attackers now focusing on compromising this data, simply having backups is no longer enough. This shift has led to a growing need for immutable storage backup systems designed to ensure data cannot be altered or deleted by ransomware once it is stored.
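As one concrete illustration of what immutability means in practice, the sketch below enables S3 Object Lock with a compliance-mode retention window using boto3, so that stored backup objects cannot be changed or deleted until the window expires. This is a generic AWS example of the concept rather than Object First's product or the setups the survey covered; the bucket name, region, and 30-day window are assumptions.

```python
# Minimal sketch: an S3 bucket with Object Lock (WORM) retention, so backup
# objects can't be modified or deleted during the retention window.
# Bucket name, region, and the 30-day window are assumptions for illustration.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")
bucket = "example-immutable-backups"  # hypothetical name; must be globally unique

# Object Lock can only be enabled when the bucket is created.
s3.create_bucket(Bucket=bucket, ObjectLockEnabledForBucket=True)

# Default retention in compliance mode: even the account root user
# cannot shorten or remove the lock until the 30 days are up.
s3.put_object_lock_configuration(
    Bucket=bucket,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)

# Any backup object uploaded now inherits the default retention.
s3.put_object(Bucket=bucket, Key="backups/db-2024-12-15.dump", Body=b"...")
```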

An overwhelming 93% of survey respondents agreed that immutable storage is essential for protecting against ransomware attacks, while 84% of IT workers highlighted that they need better backup security to meet regulatory compliance. This need for enhanced security is further evidenced by the fact that 97% of respondents plan to invest in immutable storage solutions as part of their cybersecurity strategy.

Immutable storage is built on Zero Trust principles, a security model that assumes no user or system is inherently trustworthy. This approach focuses on continuously validating every access request and limiting permissions to minimize the risk of unauthorized access.

The Object First survey found that 93% of IT professionals believe aligning their backup systems with Zero Trust principles is key to safeguarding their data from ransomware. Zero Trust architecture ensures that even if cybercriminals gain access to a system, they are limited in their ability to manipulate or delete critical data.

While the need for enhanced security is clear, the survey also revealed that managing backup storage systems remains a challenge for many organizations. Nearly 41% of IT professionals stated that they lack the skills necessary to manage complex backup solutions, and 69% reported that budget constraints prevent them from hiring additional security experts.

“Our research shows that almost half of organizations suffered attacks that targeted their backup data, highlighting the criticality of adopting backup storage solutions that are ransomware-proof,” said Andrew Wittman, Chief Marketing Officer at Object First.

You might also like



from TechRadar - All the latest technology news https://ift.tt/MZUi4KD

AWS, Azure and Google Cloud credentials from old accounts are putting businesses at risk

  • Report warns long-lived credentials remain a significant security risk
  • Outdated access keys increase vulnerability across cloud platforms
  • Automated credential management is crucial for cloud security

As cloud computing adoption continues to rise, organizations increasingly rely on platforms such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud for their infrastructure and services. However, this also means their security risks grow more complex.

The recent Datadog State of Cloud Security 2024 report reveals one particularly concerning issue - the use of long-lived credentials, which pose significant security threats across all major cloud providers.

Despite advancements in cloud security tools and practices, many organizations still use long-lived credentials, which do not expire automatically.

The prevalence of long-lived credentials

Long-lived credentials, particularly those that are no longer actively managed, can serve as an easy target for attackers. If leaked or compromised, they could provide unauthorized access to sensitive data or systems. The longer these credentials remain in place without rotation or monitoring, the greater the risk of a security breach.

Datadog's report reveals nearly half (46%) of organizations still have unmanaged users with long-lived credentials. These credentials are particularly problematic because they are often embedded in various assets such as source code, container images, and build logs. If these credentials are not properly managed, they can easily be leaked or exposed, providing an entry point for attackers to access critical systems and data.

Almost two-thirds (62%) of Google Cloud service accounts, 60% of AWS Identity and Access Management (IAM) users, and 46% of Microsoft Entra ID applications have access keys that are more than a year old.
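Teams who want to know whether they fall into that group can audit key age directly. The sketch below uses boto3's IAM API to flag active access keys older than a year; the 365-day cut-off simply mirrors the report's "more than a year old" framing.

```python
# Sketch: list AWS IAM users whose active access keys are more than a year old.
# Requires credentials with iam:ListUsers and iam:ListAccessKeys permissions;
# the 365-day threshold mirrors the report's "more than a year" framing.
from datetime import datetime, timezone
import boto3

iam = boto3.client("iam")
cutoff_days = 365
now = datetime.now(timezone.utc)

paginator = iam.get_paginator("list_users")
for page in paginator.paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
        for key in keys:
            age = (now - key["CreateDate"]).days
            if key["Status"] == "Active" and age > cutoff_days:
                print(f"{user['UserName']}: key {key['AccessKeyId']} is {age} days old")
```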

In response to these risks, cloud providers have been making strides toward improving security. Datadog's report notes that the adoption of cloud guardrails is on the rise. These guardrails are automated rules or configurations designed to enforce security best practices and prevent human error.

For instance, 79% of Amazon S3 buckets now have either account-wide or bucket-specific public access blocks enabled, up from 73% the previous year. However, while these proactive measures are a step in the right direction, long-lived credentials remain a major blind spot in cloud security efforts.
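That bucket-level guardrail is easy to audit as well. The sketch below lists which S3 buckets in an account have no public access block configured; it is a minimal illustration assuming read access to the account's buckets.

```python
# Sketch: report which S3 buckets have no bucket-level public access block.
# Buckets without the guardrail raise NoSuchPublicAccessBlockConfiguration.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(cfg.values())
        print(f"{name}: public access block present (all four settings on: {fully_blocked})")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"{name}: no bucket-level public access block configured")
        else:
            raise
```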

Furthermore, the report added there is a conspicuously high number of cloud resources with overly permissive configurations.

About 18% of AWS EC2 instances and 33% of Google Cloud VMs were found to have sensitive permissions that could potentially allow an attacker to compromise the environment. In cases where a cloud workload is breached, these sensitive permissions can be exploited to steal associated credentials, enabling attackers to access the broader cloud environment.

In addition, there is the risk of third-party integrations, which are common in modern cloud environments. More than 10% of third-party integrations examined in the report were found to have risky cloud permissions, potentially allowing the vendor to access sensitive data or take control of the entire AWS account.

What's more, 2% of these third-party roles do not enforce the use of External IDs, leaving them susceptible to a "confused deputy" attack, a scenario where an attacker tricks a service into using its privileges to perform unintended actions.
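The External ID mitigation mentioned here is simply a condition in the role's trust policy: the third party has to present a shared identifier when assuming the role, so an attacker who only knows the role's ARN cannot trick the vendor's service into using it. The sketch below shows the standard AWS pattern; the account ID, role name, and ID value are placeholders.

```python
# Sketch: a cross-account role whose trust policy requires an External ID,
# the standard mitigation against the "confused deputy" problem.
# The vendor account ID, role name, and External ID value are placeholders.
import json
import boto3

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # the vendor's account
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "example-external-id"}},
    }],
}

iam = boto3.client("iam")
iam.create_role(
    RoleName="example-vendor-integration",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Third-party integration role gated by an External ID",
)
```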

“The findings from the State of Cloud Security 2024 suggest it is unrealistic to expect that long-lived credentials can be securely managed,” said Andrew Krug, Head of Security Advocacy at Datadog.

“In addition to long-lived credentials being a major risk, the report found that most cloud security incidents are caused by compromised credentials. To protect themselves, companies need to secure identities with modern authentication mechanisms, leverage short-lived credentials and actively monitor changes to APIs that attackers commonly use,” Krug added.

You might also like



from TechRadar - All the latest technology news https://ift.tt/YKEhbPD
