The Dell Pro Max Plus laptop is the first to feature an enterprise-grade discrete NPU
It features a Qualcomm AI 100 PC Inference Card with 32 AI cores and 64GB of memory
This offers the opportunity to carry out high-intensity AI tasks, even on the move
Dell has unveiled an AI PC with a never-before-seen feature it hopes will spur on the next levels of productivity.
Revealed at Dell Technologies World 2025, the new Dell Pro Max Plus laptop is the first to feature an enterprise-grade discrete NPU, offering the opportunity to carry out high-intensity AI tasks even on the move.
The mobile workstation features a Qualcomm AI 100 PC Inference Card with 32 AI cores and 64GB of memory, which Dell says should be more than enough to handle the needs of AI engineers and data scientists deploying large models for edge inferencing.
“Personal productivity is being reinvented by AI,” Dell said. “The install base of a billion and a half PCs is ageing, and it’s being replaced with AI innovation.”
“The Windows 10 end of life is coming, and we are ready - Dell is the leader in commercial AI PCs, and we’re further distancing ourselves from the competition.”
The CEO highlighted the new Dell Pro Max device during his keynote address, noting it would be ideal for developers and scientists, offering up to 20 petaflops of performance due to embedded Nvidia GB300 hardware, and up to 800GB of memory - enough to run and train models with a trillion parameters.
“Today’s PCs are becoming AI workstations - blazing fast, all-day battery life powered by NPU and GPU innovation,” Dell declared.
Intel’s four-year-old Optane P5800X still outpaces the SN8100 in real-world speed tests
SanDisk’s new WD Black SN8100 PCIe Gen5 SSD is fast, efficient, and engineered to meet the demands of gamers and power users alike.
The drive uses a PCIe Gen5 x4 interface and is available in 1TB, 2TB, and 4TB capacities. Built around SanDisk's in-house 8-channel controller and BiCS 3D TLC NAND, it supports read speeds of up to 14.5 GB/s and write speeds up to 12.7 GB/s, placing it among the fastest Gen5 drives currently available.
However, despite the SN8100’s cutting-edge design and impressive benchmarks, Intel’s now-defunct, four-year-old Optane P5800X still holds the crown as the fastest SSD in real-world use.
Benchmarks suggest top speeds - but not across the board
In synthetic benchmarks like CrystalDiskMark and ATTO, the SN8100 breaks lab records for sequential throughput and random reads, reaching up to 2.3 million IOPS.
According to TweakTown, “this SSD is like none other; it’s at least 20% more powerful than any flash-based SSD we’ve ever encountered.”
It also demonstrates notable efficiency, consuming just 7 watts under load and requiring no active cooling, making it a serious contender for best SSD or the best portable SSD for enthusiast builds.
Still, synthetic benchmarks don’t always reflect real-world performance. In practical transfer tests, the SN8100 ranked ninth overall, indicating that while it's extremely fast, it's not without limitations, and it doesn't dethrone the Intel Optane P5800X.
Launched in 2021, the P5800X remains unmatched in real-world responsiveness and latency. While its sequential read speeds top out at 7.2 GB/s - slower than the SN8100 - its random read/write IOPS exceed 4.5 million, and latency frequently drops below 10 microseconds. That’s where it truly shines.
Flash-based SSDs like the SN8100 still rely on garbage collection and page-level management, leading to occasional latency spikes during small, random workloads. In contrast, the P5800X maintains consistent performance under heavy load, with no significant dips, a key reason why it’s still regarded as the fastest SSD ever made.
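To make the synthetic-versus-real-world distinction concrete, here's a minimal Python sketch of the two workload shapes being discussed: bulk sequential reads versus small random reads with tail-latency tracking. It's a toy, not a substitute for CrystalDiskMark or ATTO - it reads through the OS page cache rather than doing raw device I/O, so the absolute numbers won't match a real benchmark, but the shape of the difference will show:

    import os, random, time

    # Toy illustration of sequential throughput vs random-4K tail latency.
    # Goes through the page cache (no O_DIRECT), so treat numbers as relative.
    PATH, FILE_MB, BLOCK = "testfile.bin", 256, 4096
    SIZE = FILE_MB * 1024 * 1024

    with open(PATH, "wb") as f:              # create a 256MB test file
        f.write(os.urandom(SIZE))

    t0 = time.perf_counter()                 # sequential read, 1MB at a time
    with open(PATH, "rb") as f:
        while f.read(1024 * 1024):
            pass
    print(f"sequential: {FILE_MB / (time.perf_counter() - t0):.0f} MB/s")

    latencies = []
    with open(PATH, "rb") as f:              # 2,000 random 4K reads
        for _ in range(2000):
            f.seek(random.randrange(0, SIZE - BLOCK))
            t0 = time.perf_counter()
            f.read(BLOCK)
            latencies.append((time.perf_counter() - t0) * 1e6)
    latencies.sort()
    print(f"random 4K p99 latency: {latencies[int(len(latencies) * 0.99)]:.1f} us")
    os.remove(PATH)

The p99 figure is the one that separates Optane from flash: a drive can post huge sequential numbers while still showing occasional slow outliers on small random reads.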
That said, the SN8100 is an impressive drive in its own right. It uses a customized version of Silicon Motion’s SM2508 controller, enhanced with proprietary technologies like nCache 4.0 and WD Black Gaming Mode.
It also fits into the Sony PlayStation 5’s expansion slot, achieving read speeds of 6,550 MB/s in that setup, well above the console’s minimum requirement. However, with a price tag of $280 for the 2TB model, it clearly belongs in the premium tier.
If you’ve ever wondered how much one of these giant-capacity SSDs - in this case, Solidigm’s 122.88TB D5-P5336 - might set you back, the answer is: maybe not quite as much as you’d expect. Although early estimates placed its price close to $14,000, you can actually pick up the drive from Tech-America for a much more affordable $12,399.
Obviously, this isn’t a drive for your typical PC rig - it uses a PCIe 4.0 interface and comes in U.2 (available now) and E1.L (expected later this year) form factors. It’s aimed at enterprise storage environments handling large-scale AI, machine learning, and data-intensive workloads.
Longer lasting QLC
The drive is built with 192-layer QLC NAND. With endurance rated at 0.60 drive writes per day and a total of 134.3 petabytes written over five years, the 122.88TB model is designed to last longer than earlier QLC offerings.
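As a quick sanity check, those two endurance figures are consistent with each other; assuming the usual definition of drive writes per day (DWPD), the rated petabytes written falls out directly:

    # DWPD x capacity x days in the warranty period ~ total petabytes written
    capacity_tb, dwpd, years = 122.88, 0.60, 5
    pbw = capacity_tb * dwpd * 365 * years / 1000
    print(f"~{pbw:.1f} PB")  # ~134.6 PB, in line with the rated 134.3 PBW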
Solidigm, a US-based subsidiary of SK Hynix, reportedly tested the drive under extreme conditions. Running 32KB random writes at full load, the drive operated continuously for five years and retained around 5 percent of its life.
Performance claims include up to 930,000 IOPS for 4K random reads and 7.4GB/s for sequential reads.
Solidigm markets its large SSD as a solution to space and power constraints in data centers, claiming that replacing traditional hybrid systems with its all-QLC drives could reduce rack usage from nine to one and cut power consumption by around 90 percent.
The drive joins other high-capacity SSDs announced in 2024, including models from Phison, Samsung, and Western Digital. Phison’s SSD supports PCIe Gen5 and offers faster peak throughput, though the D5-P5336 delivers a higher endurance rating and greater storage density.
Kioxia CM9 Series SSDs use 8th gen BiCS FLASH for enterprise performance
Faster NAND speeds and power efficiency support AI and data centers
Offers 61.44TB max, dual-port design, and massive write improvements
Kioxia has announced its CM9 Series PCIe 5.0 NVMe SSDs, marking the first enterprise drives built using its 8th generation BiCS FLASH 3D TLC memory.
With PCIe 5.0 and NVMe 2.0 support, the CM9 SSDs are designed to meet modern standards for data center storage by offering high-efficiency storage capable of supporting AI, machine learning, and high-performance computing.
These new SSDs feature CMOS directly bonded to array (CBA) architecture, an update designed to improve performance, power efficiency, and memory density. Kioxia’s use of CBA-based flash architecture promises faster NAND interface speeds and lower latency, which helps the drives deliver quicker data access and improved power efficiency.
Top-tier bit density
Compared to the previous CM7 series, the CM9 line shows increases of about 65% in random write speeds, 55% in random read, and 95% in sequential write speeds.
The CM9 SSDs, currently sampling to select customers, are built to handle read-intensive and mixed-use workloads in enterprise data centers and offer capacities of up to 61.44TB in 2.5-inch form and 30.72TB in E3.S configurations.
The drives are compatible with both the NVMe-MI 1.2c and OCP Datacenter NVMe SSD 2.5 specifications, and support dual-port configurations, making them suitable for enterprise environments where reliability and continuous access are critical.
Kioxia, which recently helped Linus Tech Tips smash the Pi calculation world record, says gains in power efficiency include roughly 55% better sequential read and 75% better sequential write performance per watt.
Although it’s early in the lifecycle of the CM9 Series, the specs and performance numbers suggest the company is aiming to strengthen its position in high-performance enterprise storage.
Axel Stoermann, Vice President and CTO for Embedded Memory and SSD, Kioxia Europe GmbH, said, “Alongside processing power and energy efficiency, memory is fundamental to enable AI, machine learning, and high-performance computing applications. The CM9 Series powered by our BiCS FLASH generation 8, is designed to address these storage demands, providing top-tier bit density, rapid data transfer, and outstanding power efficiency, all of which contribute to the superior performance of our SSDs.”
A dual-GPU design returns, but it’s not meant for gamers this time around
48GB of memory sounds impressive, but will it actually deliver meaningful AI performance?
With no benchmarks or detailed specs, this card is more rumor than revolution right now
Intel may be preparing to launch an unusual graphics card featuring two Arc B580 GPU chips and 48GB of memory, reports have claimed.
While this isn’t an official Intel product, it appears to be a custom design developed by one of Intel’s board partners, who remains unnamed due to non-disclosure agreements.
What makes this card notable is the return of a dual-GPU layout using consumer-class chips, something the industry hasn’t seen in several years.
48GB of memory hints at AI potential
This particular model reportedly combines two B580 GPUs, each paired with 24GB of memory, for a total of 48GB on a single card.
The intent doesn't appear to be gaming, which raises questions about the target audience. Given the high memory and compute potential, one possibility is that it’s intended for AI development or other high-throughput workloads.
Although 48GB still falls short of the memory capacity in top-tier professional accelerators, using consumer-grade GPUs could offer a cost-effective alternative for some training scenarios.
Still, without performance benchmarks or detailed architectural information, it’s difficult to determine whether this configuration could compete with even midrange professional GPUs.
For users comparing it against the best GPUs currently available, skepticism is warranted. No other board partners have been linked to similar designs, and it remains unclear whether this is a one-off experiment or part of a broader strategy.
This development may also interest content creators. With such a high memory ceiling, it could appeal to users seeking the best laptops for video editing or for Photoshop, assuming future mobile variants emerge.
But until more technical data is released, this card is best regarded as a curiosity rather than a sure bet.
Google I/O events are an often frustrating glimpse of the near future, with a lot of shiny software toys scheduled to land sometime "in the coming months". That often means a long wait of up to a year, so for Google I/O 2025 we've rounded up every new announcement that you can actually try today.
Naturally, some of the features below come with restrictions – a few are only available to try now in the US, while some are restricted to subscribers of Google's AI Pro or AI Ultra tiers. But many have also rolled out worldwide, so there are new features to take for a spin even if you don't currently pay Google a cent.
What's missing from the list below and coming at a later date? Quite a bit actually, including some of the more futuristic ideas like Google Beam and Android XR, and it also isn't clear how long we'll have to wait for a worldwide rollout of AI Mode for Search, Veo 3, Flow, Virtual Try On in the Shopping app, and Google's top-tier AI Ultra plan.
Still, there are quite a few things from Google I/O 2025 to keep us amused in the meantime, so here's a list of the ones that are available to try today...
Google completely upended its golden goose, Search, at I/O 2025 this week, announcing several new features to stave off the threat of ChatGPT – and the biggest was arguably the US rollout of AI Mode.
If you're in the US and aren't seeing the new tab in Search (or in the search bar of the Google app), it's likely because Google said it'd be a gradual roll-out "over the coming weeks".
We've been using it for a while, though, and have put together a guide on how to master the new AI mode. It shouldn't be your go-to for everything, but we've concluded that "if you’re researching, planning, comparing, or learning, AI Mode can be a real comfort". Google hasn't yet commented on when it'll get a worldwide launch, but we'd imagine it'll be sometime this year.
Arguably the biggest breakthrough moment at Google I/O 2025, Veo 3 is the first AI video generator that can deliver synchronized audio (including speech) alongside its video creations. And it's available to try now for a lucky few, if you're in the US and on the new Gemini Ultra plan.
Granted, that is a pretty small group of people, but we had to include it in this list because it is actually available today for those lucky peeps, and US enterprise users on the Vertex AI platform.
The amount of processing power required for Veo 3 could mean a relatively slow rollout elsewhere, and Google has hinted as much by also releasing new features for Veo 2 like the ability to give it reference scenes.
Not sure how to weave all of your AI videos together into a cohesive whole? Google also addressed that issue with a new AI video editor called Flow – and like Veo 3, it's out now for AI Pro and Ultra subscribers in the US.
It's a bit like a Premiere Pro that you can operate entirely with natural language, to avoid learning keyboard shortcuts or complex menus. To get an idea of how it works, check out Google's short tutorial.
Impressively, it goes as far as giving you menus of camera moves like 'dolly out' and 'pan right', so you don't even have to describe them. Google has also at least promised that it's "coming soon" to more countries, so we're hopeful of a wider rollout in 2025.
The big smartphone story of Google I/O 2025 was the full rollout of one of the best AI tools around on Android and iOS – Gemini Live.
Like ChatGPT's Advanced Voice Mode, Gemini Live is an AI assistant that you can chat to using your voice. The most useful part, though, is that you can also give it eyes using your phone's camera to get help with whatever's in front of you or on your screen.
To conjure the assistant, open the Gemini app on iOS or Android, tap the Gemini Live icon (on the far right of the text input box), and start chatting away.
available worldwide in the Gemini app, Whisk, and Vertex AI
Google didn't just level up its AI-generated video at I/O 2025 – we also got a new Imagen 4 model for whipping up still images at higher resolution (now up to 2K) than before.
The latest Imagen (which is available now in the Gemini app, Whisk, Vertex AI and across Google Workspace) also showed that it's been working hard on one of its main weaknesses – handling text.
This means that scenes involving typography should no longer be a jumbled mess of weird characters and look more realistic. While Imagen 4 is available to use for free, it does come with usage limits – you can expect 10-20 image generations on a free plan, while Gemini subscribers get a more generous 100-150 generations a day.
Okay, Gemini 2.5 Flash isn't brand new, but it was given a big upgrade at Google I/O 2025 – and it's now available to everyone to dabble with in the Gemini app.
In fact, Gemini 2.5 Flash is now the default model in Google's Gemini chatbot, because it's apparently the fastest and most cost-efficient one for daily use. Some of the specific improvements over its 2.0 Flash predecessor include a greater ability to understand images and text.
Wondering how it compares to ChatGPT 4o? We've already compared the two to help you see which might be the best for you. Spoiler: it's a close call, but Gemini 2.5 Flash is particularly appealing if you live in Google's world of apps and services.
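If you'd rather poke at Gemini 2.5 Flash from code than from the app, a minimal sketch using Google's google-generativeai Python SDK looks something like the below. Note the exact model identifier is an assumption on our part and may differ from what the API currently exposes, so check Google's model list first:

    import google.generativeai as genai  # pip install google-generativeai

    genai.configure(api_key="YOUR_API_KEY")  # key from Google AI Studio

    # Model id assumed for illustration -- confirm the current name in the docs.
    model = genai.GenerativeModel("gemini-2.5-flash")
    response = model.generate_content("Explain what's new in Gemini 2.5 Flash.")
    print(response.text)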
Need a coding assistant to speed up your workflow? Google has just given Jules (first introduced as a Labs experiment last December) a wider public beta rollout, with no waiting lists.
Jules is a bit more than a coding copilot – it can autonomously beaver away on fixing bugs, writing tests and building new features without any input from you. It works 'asynchronously', meaning you can hand it several tasks at once and carry on with your own work while it chips away at them.
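To picture what 'asynchronous' buys you, here's a toy Python sketch of the pattern: several tasks progress at once, and the total wait is set by the slowest task rather than the sum of all of them. This illustrates the concept only – it is not how Jules itself is built:

    import asyncio

    async def job(name: str, seconds: float) -> str:
        await asyncio.sleep(seconds)     # stand-in for real work (bug fix, tests)
        return f"{name}: done"

    async def main() -> None:
        # All three start immediately; nothing blocks while they run.
        results = await asyncio.gather(
            job("fix-bug", 2), job("write-tests", 1), job("build-feature", 3),
        )
        print(results)                   # finishes in ~3s total, not 6s

    asyncio.run(main())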
Google says Jules isn't trained on your private code and that your data stays within its private environment. With autonomous agents on the rise, it certainly looks worth dabbling with if you could do with some coding assistance.
Google Shopping has had a 'Try On' feature for clothes since 2023, but it got a big upgrade at Google I/O 2025. Rather than using virtual models to show you how your chosen clothes might fit, it now lets you upload a photo of yourself – and uses AI to help you avoid the hassle of changing rooms.
Once you've uploaded a full-length photo of yourself, you'll start to see little "try it on" buttons when you click on outfits that are served up in the Shopping tab's search results. We've taken it for a spin and, while it isn't flawless, it does give you a solid idea of what some clothes will look like on you. And anything that helps us avoid real-world shopping is fine by us.
Google brought its 'Deep Research' feature to Gemini Advanced subscribers (now Gemini Pro) in late 2024. And now the handy reports tool has been given a particularly useful upgrade – the ability to combine its research of public data from the web with any private PDFs or images that you upload.
Google provided the example of a market researcher uploading their own internal sales figures so they could cross reference them with public trends. Unfortunately, you can't yet pull in docs or data from Google Drive and Gmail, but Google says this is coming "soon".
10. Gemini quizzes
available worldwide on Gemini desktop and mobile
college students in the US and UK can also get a free Gemini AI Pro upgrade for the whole school year
Google is particularly keen to get students using its Gemini app – not only did it extend its free access to Google AI Pro for school and university students to new countries including the UK, it also added a new quiz feature to help with revision.
To start a quiz, you can ask Gemini to "create a practice quiz" on your chosen subject. The most useful part is that it'll then make a follow-up quiz based on your weaknesses in the previous test. Not that you have to be studying to make use of this feature – it could also be a handy way to sharpen your pub quiz skills.
If you're a student in the US, Brazil, Indonesia, Japan or the UK, you can get your free year of Gemini AI Pro by signing up on Gemini's students page – the deadline is June 30, 2025 and you will need a valid student email address.
11. Google Meet speech translation
available to Google AI Pro and AI Ultra subscribers
initially only in English and Spanish, more languages coming soon
We're particularly looking forward to trying out Google Beam this year, with the glasses-free 3D video calls (formerly known as Project Starline) heading to businesses courtesy of HP's new hardware. But a new video calling feature you can try now is Google Meet's near real-time translations.
Available now for AI Pro and Ultra subscribers in beta, the feature will provide an audible translation of your speech (currently in English to Spanish, or vice versa) with a relatively short delay. It isn't seamless, but we imagine the delay will only reduce from here – and Google says more languages are coming "in the next few weeks".
Google switched up its AI subscription plans at Google I/O 2025, with 'Gemini Advanced' disappearing and being replaced by AI Pro and a new 'VIP' tier called AI Ultra.
The latter is currently US-only (more countries are "coming soon") and costs a staggering $250 a month. Still, that figure does give you "the best of Google AI", according to the tech giant, with AI Ultra including access to Veo 3 with native audio generation, Project Mariner, and the highest usage limits across its other AI products. You also get YouTube Premium and 30TB of storage thrown in.
The AI Pro tier ($20 a month) still gets you access to Gemini, Flow, Whisk, NotebookLM and Gemini in Chrome, but with lower usage limits and cloud storage of a mere 2TB.
If you're an AI power user and like the sound of AI Ultra, Google is currently offering it at 50% off for your first three months. Don't tempt us, Google...
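For anyone weighing it up, the intro deal works out like this (assuming the 50% discount simply halves the $250 monthly rate for the first three months):

    # First-year cost with the intro offer vs paying full freight
    full = 250
    first_year = 3 * full * 0.5 + 9 * full
    print(f"${first_year:,.0f} vs ${12 * full:,}")  # $2,625 vs $3,000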
Twelve South's PowerCord is refreshingly simple, with a wall plug on one end and a USB-C connector on the other
It's a one-stop solution for charging small to medium-sized devices
With the cord, you can effectively ditch the power brick as it's integrated
When it comes to charging our devices right now, you generally need a wall plug that goes into an outlet and a cable. For phones – iPhone or Android – that means, say, at least a 20-watt wall plug and then a USB-C to USB-C cable. It doesn’t need to be like this, especially for those who travel.
Twelve South, known for excellent accessories that especially complement Apple devices, just dropped the ‘PowerCord.’ Yes, that’s a product name, not something that comes in the box with another product. It’s a USB-C cable that ends not with a second USB-C connector but with a built-in wall plug.
Thus, it eliminates the need for a wall brick, and if you’re charging a Pixel 9, an iPhone 16 Pro, a Nintendo Switch, an iPad or Galaxy Tab, or even a MacBook Air, you just plug it in to get the charge going.
It’s fairly genius, right? The 30-watt power supply is integrated into the wall plug, and it comes in two lengths – 4-foot or 10-foot. The cable itself is braided, looks fairly heavy-duty in the shared images, and comes in slate black or dune white.
The wall plug is also non-removable. In fact, the whole design is a closed circle on purpose. That way, you can’t leave one part of the equation at home or behind, so when you need to recharge something, it’s all there, whenever you need it.
As of right now, it’s priced at $39.99 for the 4-foot model and $49.99 for the 10-foot model in either color. However, it can only be purchased with a Type-A wall plug, which means it works best in North America, specifically the United States or Canada. It's up for order now at Amazon or directly from Twelve South.
Twelve South has said that an EU and UK version is on the horizon and will likely drop in mid to late June. That's excellent news, since for frequent travelers this is a really nice charger, and I like that you can’t leave any part of it at home.
If it proves to be a success, Twelve South may need to figure out how to put in a larger power supply so it can also handle recharging more power-hungry devices.
Even so, as it stands, PowerCord can charge phones, tablets, a DJI Osmo Pocket 3, earbuds and headphones, Bluetooth speakers, smart glasses, headsets, and countless other devices. The product page notes that it’s best for small to medium-sized devices but can trickle-charge other products like laptops.
Jony Ive, who famously designed the iPhone (among other iconic Apple devices), is about to become the design lead for OpenAI, the ChatGPT AI giant that, for now, does not make a single hardware device.
The Wall Street Journal on Wednesday reported the impending deal, which sees OpenAI acquire Ive's io company in a deal valued at $6.5 billion. As part of that, Ive becomes the design lead for OpenAI, a role he's been slowly stepping into for some time.
Ive, who famously led Apple's design for decades, left the company in 2019 and, in recent months, has expressed some misgivings about the possible negative impact of the previous products he's worked on (which might include the iPhone).
"I think when you’re innovating, of course, there will be unintended consequences, You hope that the majority will be pleasant surprises. Certain products that I’ve been very, very involved with, I think there were some unintended consequences that were far from pleasant,” said Ive earlier this month, according to the Verge.
While reports indicate that Ive and OpenAI CEO Sam Altman are interested in building AI-capable consumer hardware, a smartphone is probably not on that menu.
Instead, most expect the duo to focus on wearables like earbuds and smartwatches that could be enhanced with, for instance, cameras that could see your surroundings and use onboard AI to help you act on and react to them.
A soft approach
Ive's focus will also apparently be on upgrading OpenAI software's visual appeal. So expect an infusion of Ive-ness on ChatGPT on mobile and the desktop (where it has a particularly techy or dev-friendly look), as well as on Sora and Dall-E interfaces.
In the latter part of his career at Apple, Ive was most responsible for stripping away skeuomorphism – making digital icons look like their real-world counterparts – across Apple's platforms. OpenAI's software doesn't suffer from the skeuomorphic scourge, but some could argue its overall look is less than elegant.
If you're curious whether Ive's design skills are still up to snuff, just take a look at the updated Airbnb, which Ive's LoveFrom firm redesigned. LoveFrom, by the way, is set to remain a stand-alone company and will, according to The Wall Street Journal, work with OpenAI as a client.
The news must sting Apple a little bit. The company, which partnered with OpenAI to include ChatGPT access in Apple Intelligence, has not only failed to deliver its own generative AI, but is falling behind the industry in delivering a true, combined hardware/software AI experience.
OpenAI CEO Sam Altman (Image credit: Getty Images / Tomohiro Ohsumi / Stringer)
Hints of hardware to come
It'll be fascinating to see what Altman and Ive cook up, and we already have some hints.
Altman announced the deal by tweeting that he's "excited to try to create a new generation of AI-powered computers." Taken literally, we might expect an AI PC from the team, but I think here Altman means "computers writ large" in that most intelligent consumer electronics could be considered computing devices.
The tweet was accompanied by a video featuring a conversation between Ive and Altman, in which Altman described developing "a family of devices that would let people use AI to create all sorts of different things."
Without disclosing the product, Ive revealed that "the first one we've been working on has almost completely captured our imagination." Further, Altman added that Ive handed him the device to take home. "I've been able to live with it and I think it's the coolest piece of technology that the world will have ever seen."
No matter what they're building, it's worth remembering that the road to AI hardware success is already littered with the rotting carcasses of failed ventures like Humane's AI Pin. Regular people have not shown great interest in wearing AI hardware that doesn't align with their current fashion choices.
"thrilled to be partnering with jony, imo the greatest designer in the world. excited to try to create a new generation of AI-powered computers." – Sam Altman, May 21, 2025 (pic.twitter.com/IPZBNrz1jQ)
That said, there may be an opportunity for OpenAI, Ive, and Altman in the smart glasses space. It's the one AI-connected device area that appears to be showing some real signs of life. That's mostly down to Meta's efforts with its Ray-Ban Meta smart glasses, but also evidenced by the upcoming influx of Android XR competitors from Google partners Samsung, Warby Parker, and others. Some were announced this week at Google I/O 2025, and all of them will feature Gemini at their core.
OpenAI and ChatGPT may be leading in the generative AI space, but Google Gemini is close behind. And if Android XR partners can deliver stylish Gemini Smart Glasses this year, it could quickly vault Gemini into the lead. At the very least, this puts pressure on OpenAI to deliver something.
Is Jony Ive the secret sauce that will make ChatGPT AI glasses, earbuds, smart watches, and other consumer hardware possible and desirable? Maybe. OpenAI says we'll see their work next year. Just don't expect a ChatGPT Phone.
As we covered last week, Google held its Android Show event which showed off Wear OS 6. It's just as well it did, as the company's larger I/O event went by with barely a mention of the new version of the Pixel Watch OS.
Still, given Google's focus on AI, it's not unexpected that Gemini is coming to Wear OS this year. However, that's not all that's changing.
We've covered everything planned for the tech giant's new wearable OS version below, including a fresh look, easy-to-read notifications and information, and even Gemini jumping the fence.
Here's everything we know that's coming to the platform soon.
Cut to the chase
What is it? The latest version of Google's wearable platform
When is it out? July 2025 is our best guess
What will it do? Bring Gemini to more devices, revamp the look of the OS, and improve battery life
Release date prediction
Major Wear OS releases often arrive in July, and we're expecting the same this time around, too.
Wear OS 6 has already been released to developers as part of Google's Developer Preview program, so it's only a matter of weeks.
Below, you can find the headline features Wear OS 6 will bring to Pixel Watches first, and other watches from competitor brands such as Samsung later in the year.
1. Gemini on your wrist

From retaining small pieces of information like which locker you're using at the gym, to creating a bespoke playlist with a quick request or tapping into personal context, Gemini on your wrist could be super helpful in a bunch of small ways.
Better still, it'll run on your current device as long as your wearable supports Google Assistant, which means you won't need to splurge on a new model unless you really want to.
2. A visual revamp
Android 16's new 'Material 3 Expressive' look is expected to modernize Google's OS on phones, and that's extending to Wear OS, too.
Users can expect a change to more rounded UI elements, reducing the boxiness of the interface and updating animations to make better use of the space available.
Google's examples have shown the UI shrinking as it leaves the user's view, focusing more closely on what's in the center of the screen.
3. Information at a glance
That updated UI ties into a new set of buttons that can display key information.
These are intended to be glanceable, so they'll grow to fill the available space on display to allow users to read things like calendar appointments and messages more clearly in a split second.
With all these changes, it certainly feels like Google is homing in on its circular display, and it's definitely something that helps it offer something a little different to the squircle offered by the best Apple watches.
4. Better battery life
One of our biggest concerns with all these slick new animations and AI features was having Wear OS 6 eat into the battery life of our devices, particularly since we're not necessarily having to buy a new one.
Thankfully, it sounds as though Google heard our prayers.
"With Wear OS 6, we’re continuing to improve performance and optimize power — in fact, this update delivers up to 10% more battery life," it said.
It might sound like a small margin, but in practice, that's an extra 2.4 hours of wear for a device like the Google Pixel Watch 3, which has a 24-hour battery life.
Google just launched a new ultra-premium AI subscription service
Titled Google AI Ultra, this new subscription is available in the US and costs $250 a month
Coming to more countries soon, you can sign up today in the US and receive 50% off your first three months
Google has just announced a new premium AI subscription plan called Google AI Ultra in the US, and it costs a staggering $250 a month.
Announced at Google I/O 2025, Google says AI Ultra is a subscription plan with "the highest usage limits and access to our most capable models and premium features."
"If you're a filmmaker, developer, creative professional or simply demand the absolute best of Google Al with the highest level of access, the Google Al Ultra plan is built for you - think of it as your VIP pass to Google Al."
Google is launching the new subscription with a special introductory offer that gets you a 50% discount for the first three months. AI Ultra will be available in more countries soon.
Google has also renamed its existing premium plan to Google AI Pro.
What do you get for $250?
Google AI Ultra is expensive, there's no denying it. That said, it's a premium plan for those who need access to some of the best AI tools on the planet.
Here's a list of everything Google says you'll get with Google AI Ultra and that incredible price tag.
'Gemini: Experience the absolute best version of our Gemini app. This plan offers the highest usage limits across Deep Research, cutting-edge video generation with Veo 2 and early access to our groundbreaking Veo 3 model. It's designed for coding, academic research and complex creative endeavors. In the coming weeks, Ultra subscribers will also get access to Deep Think in 2.5 Pro, our new enhanced reasoning mode.'
'Flow: This new AI filmmaking tool is custom-designed for Google DeepMind's most advanced models (Veo, Imagen and Gemini). It enables the crafting of cinematic clips, scenes and cohesive narratives with intuitive prompting. Google AI Ultra unlocks the highest limits in Flow with 1080p video generation, advanced camera controls and early access to Veo 3.'
'Whisk: Whisk helps you quickly explore and visualize new ideas using both text and image prompts. With Google AI Ultra, get the highest limits for Whisk Animate, which turns your images into vivid eight-second videos with Veo 2.'
'NotebookLM: Get access to the highest usage limits and enhanced model capabilities later this year, whether you're using NotebookLM for studying, teaching or working on your projects.'
'Gemini in Gmail, Docs, Vids and more: Make everyday tasks easier with access to Gemini directly in your favorite Google apps like Gmail, Docs, Vids and more.'
'Gemini in Chrome: Starting tomorrow, get early access to Gemini directly within the Chrome browser. This feature allows you to effortlessly understand complex information and complete tasks on the web by using the context of the current page.'
'Project Mariner: This agentic research prototype can assist you in managing up to 10 tasks simultaneously - from research to bookings and purchases - all from a single dashboard.'
'YouTube Premium: An individual YouTube Premium plan lets you watch YouTube and listen to YouTube Music ad-free, offline and in the background.'
'30 TB of storage: Offers massive storage capacity across Google Photos, Drive and Gmail to keep your creations and important files secure.'
As you can see, Google AI Ultra is for a niche audience, but arguably could be worth its high price tag if you're a heavy AI user.
Stay tuned to TechRadar for more Google Gemini news and watch out for our thorough testing of all the new AI features announced during Google I/O 2025.
MSI showed off its latest gaming hardware at Computex 2025, and the MPG 272QR QD-OLED X50 gaming monitor and the MEG Vision X AI gaming desktop stole much of the show among attendees, thanks to their artificial intelligence integration that promises a more responsive, user-aware gaming experience.
Starting with the MSI MPG 272QR QD-OLED X50, MSI’s latest 1440p gaming monitor combines a 500Hz refresh rate, 0.03ms pixel response time, 99% DCI-P3 color gamut coverage, VESA ClearMR 21000 and DisplayHDR True Black certification, and new AI-driven enhancements.
At the center is the display’s AI Care Sensor, a system that uses a dedicated neural processing unit (NPU) and CMOS sensor to detect human presence every 0.2 seconds.
This hardware allows the monitor to dynamically adjust brightness and activate OLED protection features based on user activity, rather than interrupting the user experience with necessary OLED panel maintenance.
In practice, this means that when no human presence is detected, the system powers down the display or enables OLED Care when necessary, helping to conserve energy and prevent burn-in.
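The control loop involved is conceptually simple. Here's a hypothetical Python sketch of the pattern – not MSI's actual firmware – with the 0.2-second poll taken from the sensor's stated cadence and the 30-second idle threshold assumed purely for illustration:

    import random
    import time

    POLL_S = 0.2   # matches the sensor's stated 0.2-second detection interval
    IDLE_S = 30    # assumed: how long absence must last before the panel reacts

    def human_present() -> bool:
        # Stub standing in for the NPU/CMOS presence sensor
        return random.random() > 0.3

    absent_for = 0.0
    for _ in range(500):             # bounded here; firmware would loop forever
        if human_present():
            absent_for = 0.0
        else:
            absent_for += POLL_S
            if absent_for >= IDLE_S:
                print("no user detected: dim panel / run OLED care cycle")
                absent_for = 0.0
        time.sleep(POLL_S)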
Complementing this is MSI’s AI Navigator, a unified interface that centralizes control of all AI-driven settings. It streamlines adjustments for optimal performance, ensuring users get the most out of their hardware with minimal manual tuning.
Alongside the monitor, MSI also introduced the second generation MSI MEG Vision X AI, a powerful gaming desktop first introduced at Computex 2024 that is engineered with AI-enhanced performance tuning and a full human-machine interface (HMI) touch display.
Among the AI features incorporated into the PC is a system that automatically manages fan speeds, power settings, and cooling based on real-time usage data, with the goal of delivering both responsiveness and efficiency.
The HMI has been given a new, simplified interface that MSI calls EZ Mode, which makes the HMI system more intuitive for users, especially those who aren't the kind of power users accustomed to outfitting their PCs with various hardware monitoring tools and utilities.
In addition, the MEG Vision X AI has the ability to connect to select MSI monitors and control a monitor's OSD menu, letting you adjust things like brightness, color presets, and more, right from the 13-inch display on the front of the PC.
The Vision X AI's HMI can also detect when you are using certain kinds of apps and adjust settings, backgrounds, and HMI widgets accordingly, such as turning up performance mode when you are playing a game and displaying performance metrics like FPS and GPU load. While the mode switching is done automatically by default, you can also switch between modes manually if you want, and even assign certain apps to the different modes.
817 Microsoft software engineers lost their jobs in Washington state alone
The redundancies were believed to be targeting inefficient management layers
Around a third of Microsoft's code is AI-written; Google and Meta are in a similar place
Microsoft recently confirmed around 6,000 to 7,000 job cuts globally, including an estimated 2,000 redundancies in its home state of Washington.
It's now come to light that over 40% of the Washington layoffs were related to software engineering (817 roles) (via Bloomberg), with the company previously stating that the layoffs were part of a broader cost cutting effort and a shift in investments into AI.
Together with software engineers, the hardest-hit roles in Washington were product management (373 roles) and technical program management (218 roles), with business program management (55 roles), customer experience program management (44 roles) and product design (31 roles) also on the list.
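Those role counts line up with the "over 40%" figure; a quick check against the roughly 2,000 Washington cuts:

    # Share of the ~2,000 Washington layoffs by role, per the reported figures
    wa_total = 2000
    roles = {
        "software engineering": 817,
        "product management": 373,
        "technical program management": 218,
        "business program management": 55,
        "customer experience program management": 44,
        "product design": 31,
    }
    for role, count in roles.items():
        print(f"{role}: {count / wa_total:.0%}")  # software engineering: 41%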
Over 800 Microsoft software engineers laid off in Washington state
Despite the clear and ongoing need for software engineers in an increasingly software-defined world, it has become apparent that Microsoft deems it appropriate to replace human workers with artificial intelligence. CEO Satya Nadella recently confirmed that AI now writes around one third of some projects' code, with the recent layoffs raising concerns about AI's effects on human workers and software developers.
More broadly, this is a trend that we are seeing from other tech companies including Salesforce and Workday. Google's CEO Sundar Pichai and Meta's CEO Mark Zuckerberg have also noted how much of their code is now written by AI.
However, Microsoft has been criticized for mixed messaging. The company stated that the recent layoffs were primarily designed to reduce inefficiencies in middle management by removing unnecessary layers, and while 17% of the Washington redundancies did relate to managers, the loss of hundreds of software engineers raises alarm bells.
Microsoft Principal Software Engineering Manager Mike Droettboom suggested in a LinkedIn post that Python and open-source work remain important even as companies enact major shifts: "Looking around the room, I saw so many faces – some I have known for almost 25 years – coming together again with the same shared purpose, even as the company names on our badges change."
"My heart goes out to the majority of the team that was laid off," Droettboom added.
TechRadar Pro has asked Microsoft for further transparency into the roles affected by its redundancies.
Amazon's rollout of Alexa+ lacks much public evidence
Technical issues may be delaying a wider release
Amazon claims Alexa+ is in use by hundreds of thousands of homes
Amazon unveiled Alexa+ with great fanfare more than six weeks ago, but there hasn't been much of a conversation among AI and voice assistant users about it since. My informal check of more than a dozen heavy Alexa users around the U.S. found none with access to it, and a report from Reuters suggests it's far from the explosive event Amazon hyped it up to be at the debut presentation.
Alexa+ is supposed to be Amazon's infusion of AI into the eleven-year-old voice assistant. Using generative AI as a glow-up tool makes Alexa smarter, more useful, better at conversation, and just more intuitive as an assistant. Alexa+ is supposed to give the voice assistant many new and enhanced abilities to carry out your requests, such as processing multiple prompts at once and adapting to personalize its services. For instance, it should remember your dietary preferences while helping you order food.
Invites for early access were meant to start going out in late March. Anecdotally, none have arrived, and a look around social media doesn't reveal any buzz either. Here at TechRadar, Alexa has, for weeks, been telling Editor at Large Lance Ulanoff that he's "on the early access list," but there's still no sign of Alexa+.
Even a Reddit post covered by TechRadar has since been removed from the site. Amazon, however, begs to differ with that conclusion: the company is expressing confidence in both the current and future release of Alexa+.
"Early Access to Alexa+ is ramping up. It’s already open to hundreds of thousands of customers, and we expect it to roll out to millions over the coming month," an Amazon spokesperson told TechRadar. "This is no different than other invite programs we’ve run – we scale as we learn."
Alexa+ plans
As Amazon insists there is no slow-walking of Alexa+, the reasons behind an apparent delay aren't official either. That said, the Reuters report cited possible technical issues around the speed and accuracy of the revamped Alexa, as well as higher-than-preferred costs to run the new models. There's a bit of déjà vu here since Amazon made a lot of noise around an AI-enhanced Alexa in the fall of 2023, with an early preview promised in the weeks ahead that never actually happened.
It's a far cry from the 2014 reveal of the original Amazon Echo, which started shipping just a few weeks after it appeared on a stage. Amazon might feel the stakes are too high to prioritize timing over performance this time. If Alexa+ fumbles at launch, it could undercut Amazon’s entire smart home strategy. Worse, it might reinforce the idea that Alexa is more of a talking timer than a true digital assistant.
Amazon also recently made it so Alexa interactions are processed only in the cloud, removing the option for local processing. This change may boost Alexa+’s brainpower, but it also raises privacy flags that may need to be dealt with before a wide release.
So, Alexa+ technically exists, and Amazon swears it’s being used. But you'll have to wait for a review of Alexa+ from someone's home. Until then, Alexa+ is more ghost than AI in the machine.
Google has been rumoured to be developing a new dedicated first-party desktop mode for Android phones and tablets for years now, and it may be closer to launch than ever before. As per a new leak, the feature, dubbed Android Desktop Mode, was previously expected to arrive with Android 16 this year but may now see its release with Android 17. It is speculated to offer ...
MSI EdgeXpert sounds impressive, but calling it a supercomputer might be stretching reality
Desktop AI supercomputers are a trend, but their usefulness still lacks real-world validation
MSI’s EdgeXpert could be ideal for developers needing local AI power without relying on the cloud
MSI is the latest entrant in the race to miniaturize AI infrastructure with its upcoming EdgeXpert MS-C931, a compact desktop system positioned as an AI supercomputer.
Following the launches of the Dell Pro Max with GB10 and the Asus Ascent GX10, MSI’s new machine is built on Nvidia’s DGX Spark platform and will be showcased at COMPUTEX 2025.
While the hardware sounds formidable, questions remain about whether this device truly lives up to the lofty label of a "desktop AI supercomputer", or if it’s simply a case of marketing overreach.
A powerful machine built on familiar ground
The EdgeXpert MS-C931 is powered by Nvidia’s GB10 Grace Blackwell Superchip, delivering up to 1,000 TOPS of AI performance (FP4), 128 GB of unified memory, and ConnectX-7 high-speed networking.
MSI says the system targets sectors like education, finance, and healthcare, where data privacy and low latency could justify on-premise hardware over cloud-based services.
Given its specs, the MS-C931 could rank among the most capable workstation PCs currently in development. Its high memory bandwidth and AI-focused compute also suggest it could be a top-tier PC for coding, especially for machine learning or large-scale simulation tasks.
However, the real value of this product depends less on its raw specs and more on how grounded MSI’s claims about its purpose truly are.
The phrase “desktop AI supercomputer” continues to be used liberally, and MSI’s adoption of it raises similar concerns to those previously leveled at Asus and Dell.
A supercomputer, by definition, implies massive parallel processing power, usually deployed across large-scale server racks. Shrinking that concept down to a single desktop machine, even with cutting-edge components, feels more like branding than technical accuracy.
MSI isn’t alone in this; Nvidia’s DGX Spark framework itself seems at least partially designed to enable this kind of positioning.
For all the talk of supporting top-tier AI tools and delivering enterprise-grade performance at the edge, there’s currently little evidence that these systems approach the breadth or scalability of true supercomputing infrastructure.
Even 1,000 TOPS, while impressive, must be understood in the context of what modern AI teams actually require to train or run LLMs.
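A rough sizing sketch shows both sides of that point: 128GB of unified memory is ample for running a large quantized model locally, but it is nowhere near training scale. The rule of thumb below (weights ≈ parameters × bits ÷ 8, plus headroom for KV cache and activations) is an approximation, and the model size is hypothetical:

    # What fits in 128GB of unified memory at FP4 (4-bit) weights?
    params_billion = 120                     # hypothetical 120B-parameter model
    bits = 4
    weights_gb = params_billion * bits / 8   # 60GB of weights
    total_gb = weights_gb * 1.2              # ~72GB with ~20% headroom assumed
    print(f"~{total_gb:.0f}GB needed -> fits in 128GB for inference")

Training the same model would require gradients, optimizer state, and much higher precision, typically an order of magnitude more memory spread across many accelerators - which is exactly why the "supercomputer" label invites scrutiny.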
While MSI may succeed in delivering a dense, high-performance system for localized inferencing and AI prototyping, the real-world utility of the MS-C931 is likely narrower than the “supercomputer” label implies.
Until these machines prove their value in practice, calling them desktop supercomputers feels more like aspirational branding than a reflection of what they truly deliver.
Hello and welcome to our live coverage of Dell Technologies World 2025.
We're on the ground in Las Vegas and all set for what's sure to be an event packed full of news and announcements.
The event starts tomorrow with a star-studded keynote from company founder and CEO Michael Dell, so check back then for all the updates as they happen.
Good morning from sunny Las Vegas!
TechRadar Pro is here and all set for Dell Technologies World 2025, which is set to kick off tomorrow, so check back then!
Transparent Micro LED screen displays different content on either side simultaneously
Ultra-thin 17.3-inch design blends futuristic aesthetics with real-world functionality
Maker AUO hints at aviation, retail, and interior uses for dual display
Transparent screens on devices like smartphones and tablets have long been a staple of sci-fi films and TV shows because they look good, even if they aren’t always practical. Now, though, they’re starting to become a reality.
Taiwanese display manufacturer AUO (AU Optronics Corporation), which was formed in 2001 through the merger of Acer Display Technology and Unipac Optoelectronics Corporation, has demonstrated a dual-sided transparent Micro LED display at Touch Taiwan 2025.
This first-of-its-kind display is a thin 17.3-inch screen that offers a transparent experience on both sides and can present different content depending on the viewing angle.
For use on planes and in homes and stores
The screen can show separate images or data on each side, and AUO suggests one possible use case would be in first-class airline cabins, where passengers and flight attendants can each see their own interfaces.
AUO’s demo included a translation interface, presenting seamless multilingual communication through the display itself. Commercial scenarios such as store windows, museum exhibits, and digital signage are also seen as natural fits for the technology.
The ultra-thin design, combined with transparent Micro LED technology, represents a shift from traditional display use toward something closer to ambient computing.
Unlike single-sided transparent OLEDs, which often struggle with brightness and image clarity in direct light, AUO’s Micro LED tech offers higher brightness and color performance - potentially overcoming many of those limitations.
AUO has not revealed when it expects the display to go into production, nor has it given any hint at pricing, although it’s fair to say the screens won’t be cheap.
A video posted on YouTube shows the screen in use at the 50-second mark.