Especially for exploratory work, 1/10th the performance is fine. Intel isn't able to compete head-to-head with Nvidia (yet), but VRAM is capability while speed is capacity. There will be plenty of use cases where the value proposition here makes sense.
The product would have been excellent in 2024, but now it's landfill filler. You can run some small models at pedestrian speed, the novelty wears off, and that's it.
Intel is not looking to the future. If they released an Arc Pro B70 with 512GB of base RAM, now that could be interesting.
I think this shows a shift in model architecture. MoE and similar architectures need more memory relative to the available compute than one big dense model with a lot of layers and weights. I think this is a trend that is likely to accelerate: building in that trade-off encourages even more experts, which deepens the trade-off, which encourages more experts, and so on.
Most people doing local inference run the MoE layers on CPU anyway, because decode is not compute-constrained and wasting high-bandwidth VRAM on unused expert weights is silly; it's better spent on longer context. Recent setups even offload the MoE experts to fast NVMe (PCIe 5.0 x4 or similar performance): it's slow, but it opens up running even SOTA local MoE models on ordinary hardware.
I think you're making my point. Having a little slower, but a lot more, memory on the card would speed this use case up a lot: it removes the need to go out to system memory, or makes room for very rarely used experts, allowing even larger MoE models to run with good performance.
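The trade-off the comments above describe can be sketched numerically. This is a back-of-the-envelope calculation with entirely hypothetical model dimensions (layer count, expert count, expert size are made up, not taken from any real checkpoint): a MoE model must *store* every expert's weights, but each token only *reads* the few experts that get routed to, so capacity needs grow with total experts while bandwidth/compute needs track only the active ones.

```python
def moe_footprint_gb(layers, experts, active, expert_params_m, bytes_per_param=1):
    """Return (total expert memory, per-token active expert memory) in GB."""
    per_expert_bytes = expert_params_m * 1e6 * bytes_per_param
    total = layers * experts * per_expert_bytes     # everything must be resident somewhere
    touched = layers * active * per_expert_bytes    # what one decode step actually reads
    return total / 1e9, touched / 1e9

# Hypothetical: 48 layers, 64 experts per layer, 4 active, 20M params/expert, ~1 byte/param
total_gb, active_gb = moe_footprint_gb(layers=48, experts=64, active=4, expert_params_m=20)
print(f"{total_gb:.1f} GB stored, {active_gb:.1f} GB read per token")
```

Under these made-up numbers, ~61 GB of experts must live somewhere, but each token only touches ~4 GB of them, which is why a big pool of slightly slower memory is a good fit.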
Intel support has been mild to non-existent in the VR space, unfortunately. Given the very finicky latency and engine-support requirements, I wouldn't bet on a great experience, but I hope for the best for more competition in this market. (Even AMD has a lot of caveats compared to Nvidia.)
Footnotes:
* Critical "as low as it can be" latency support on Intel Xe is still not as mature as Nvidia's; AMD was lagging behind until recently.
* Not sure about multi-projection rendering support on Intel; lacking it can kill VR performance or break compatibility. (Optimized VR games often rely on it.)
When Intel jumped into this space, it looked like they tried to do everything at once. It didn't work well; they were playing catch-up to some very mature ecosystems. They are now being much more selective and restrained. The downside is that things like VR support get put on the back burner for years.
Good for most people, but if you need that functionality and they don't have it, go somewhere else.
Running dual Pro B60 on Debian stable mostly for AI coding.
I was initially confused about which packages were needed (backports kernel + the Ubuntu kobuk team PPA works for me). After getting that right, I'm now running vLLM mostly without issues (though I don't run it 24/7).
At first I had major issues with model quality, but the vLLM XPU folks fixed it fast.
Software capability isn't as good as Nvidia's yet (e.g. no fp8 KV-cache support, last I checked), but with this price difference I don't care. I can basically run a small fp8 local model with almost 100k tokens of context, and that's what I wanted.
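To see why the missing fp8 KV-cache support matters at long context, here is a rough sizing sketch. The model dimensions below (layer count, KV heads, head dim) are hypothetical stand-ins for a small model, not any specific checkpoint: per token, each layer stores one key and one value vector per KV head.

```python
def kv_cache_gb(layers, kv_heads, head_dim, context, bytes_per_elem):
    # 2x: one key and one value vector per layer, per KV head, per token
    return 2 * layers * kv_heads * head_dim * context * bytes_per_elem / 1e9

# Hypothetical small model with grouped-query attention, at 100k context
fp16_gb = kv_cache_gb(layers=32, kv_heads=8, head_dim=128, context=100_000, bytes_per_elem=2)
fp8_gb = kv_cache_gb(layers=32, kv_heads=8, head_dim=128, context=100_000, bytes_per_elem=1)
print(f"{fp16_gb:.1f} GB at fp16, {fp8_gb:.1f} GB at fp8")
```

An fp8 cache halves the footprint, so on a 32GB card the cache dtype directly decides how much context fits next to the weights.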
I've run Arc on Fedora for years, and for general desktop use it's been perfect. For LLMs/coding it's getting better, but it's rough around the edges. Had a bug where querying VRAM usage through PyTorch would crash the system, etc.
There was a video a little while back where LTT built a computer for Linus Torvalds and put an Intel Arc card inside, so I'd imagine Linux support is, at the very least, acceptable.
Consumer CPUs don't have enough PCIe lanes to do that. Even if they had physical x16 slots, at most two of them would run at x16.
What's cheap to you? You can find Epyc 7002/7003 boards on eBay in the $400 range, and those will do it. That's probably the best deal for 4x PCIe 4.0 x16 and DDR4. Probably the $500 range with a CPU. That's in the ballpark of a mid-to-high-end consumer setup these days.
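The lane arithmetic behind the Epyc suggestion is simple. The consumer lane count below is a rough typical figure (current desktop CPUs expose on the order of 20-28 CPU lanes), while Epyc 7002/7003 exposes 128 PCIe 4.0 lanes:

```python
cards, lanes_per_card = 4, 16
needed = cards * lanes_per_card   # 64 lanes for four cards at full x16
consumer_lanes = 28               # roughly the most a desktop CPU exposes (assumed typical)
epyc_lanes = 128                  # Epyc 7002/7003 series
print(needed, needed <= consumer_lanes, needed <= epyc_lanes)
```

64 lanes simply doesn't fit on a consumer platform, but fits twice over on Epyc, which is why old server boards are the cheap path to multi-GPU.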
I want to spend $1500 on a card that can run a proper large model, even if it can only do 25 tok/s.
Intel is squandering a golden opportunity to kneecap AMD and Nvidia, under the totally delusional pretense that Intel's enterprise cards still have a fighting chance.
Since they fired the entire Arc team (a lot of the senior engineers have already updated their LinkedIn profiles to reflect new positions at AMD, Nvidia, and others) and laid off most of their Linux driver team (GPU and non-GPU), uh...
The news that Celestial is basically canceled already hit the HN front page, as did the news that Druid was canceled before tapeout.
Celestial will only ship in the variant used in budget/industrial embedded Intel platforms with a combined IO+GPU tile; the big-boy performance desktop/laptop parts that have a dedicated graphics tile will ship with an Nvidia-produced tile instead.
There will be no Celestial dGPU variant, nor a dedicated-tile variant. Drivers will cease support for dGPUs of all flavors, and no new bug fixes will land for B-series GPUs (there are no B-series iGPUs; A-series iGPUs will remain unaffected).
They signed the deal to cancel GPUs in favor of Nvidia some 2-3 months ago. The other end of the deal is that future Nvidia SBCs will ship in big-boy variants pairing Rubin (replacing Blackwell) for the GPU, Vera (replacing Grace) as the on-SBC GPU babysitter, and newest-gen Xeons to handle the non-inference tasks that Grace can't.
There is also talk that this deal may lead to Nvidia moving to Intel Foundry, away from TSMC. There is also talk that Nvidia may just buy Intel entirely.
For further information, see Moore's Law Is Dead's coverage off and on over the past year.
You may be a bit too credulous. There has been a "leak" or "rumor" that Intel's GPU initiatives are canceled about once every three months, for over two years. Yet Intel continues to release new SKUs and make new product announcements. Just last month they announced a new data center GPU product (an inference-focused variant of Jaguar Shores).
I can't see the future, but I can see patterns: the media that reports straight from the industry rumor mill LOVES this "Intel has cancelled its GPUs" story, for whatever reason. I have no particular love for Intel (out of my six current systems, my only Intel box is a cheap NUC from 2018), but at this point, these rumors echo the old joke about economists who "accurately predicted nine of the last two recessions".
This is a chip they've had lying around for a while. It's the same architecture as used in the Arc B580 that launched at the end of 2024; this is just a slightly larger sibling. Intel clearly knew that their larger part wouldn't make for a competitive gaming GPU (hence the lack of a consumer counterpart to these cards), but must have decided that a relatively cheap workstation card with 32GB might be able to make some money.
Does it need a huge driver team pushing out big updates in order to be suitable for the kind of Pro use cases it's targeted at? They're explicitly not going after the gaming market so they don't need to be on the treadmill of constant driver updates delivering workarounds and optimizations for the latest game releases.
They're still going to be employing some developers for driver maintenance for the sake of their iGPUs, and that might be enough for these cards.
I didn't know this. Have they officially given up on building discrete GPUs? Is this a last gasp of Arc to offload the decent remaining silicon at a lower price than Nvidia?
It is crazy to me that, in a world newly craving GPUs for AI, with gamers being largely neglected, Intel would abandon an established product line.
> It is crazy to me that, in a world newly craving GPUs for AI, with gamers being largely neglected, Intel would abandon an established product line.
You still need to fab it somewhere. Intel's fabs have been plagued with issues for years, the AI grifters have bought up a lot of TSMC's allotments and what remains got bought up by Apple for their iOS and macOS lineups, and Samsung's fabs are busy doing Samsung SoCs.
And that unfortunately may explain why Intel yanked everything. What use is a product line that can't be sold because you can't get it produced?
Yet another item on my long list of "why I want to see the AI grift industry burn and the major participants rotting in a prison cell".
Not sure why you'd want this over an Apple setup. The M4 Max has 545GB/s of memory bandwidth: $2k for an entire Mac Studio with 48GB of RAM vs. 32GB on the B70.
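Memory bandwidth matters here because token generation is mostly bandwidth-bound: a crude ceiling on decode speed is bandwidth divided by the bytes of weights read per token. The model size and quantization below are hypothetical examples, not a benchmark of either machine:

```python
def decode_ceiling_tok_s(bandwidth_gb_s, active_params_b, bytes_per_param):
    """Crude upper bound on decode tokens/s: every token reads all active weights once."""
    bytes_per_token = active_params_b * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / bytes_per_token

# e.g. a hypothetical 14B dense model quantized to ~1 byte/param on 545GB/s
print(f"{decode_ceiling_tok_s(545, 14, 1):.0f} tok/s ceiling")
```

Real throughput lands below this ceiling (overhead, KV-cache reads), but it explains why bandwidth and capacity, not raw FLOPS, dominate local-inference comparisons.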
How many compatibility issues would macOS realistically cause? The Windows dev experience felt unusable to me without a Linux VM (and later WSL), but on macOS most tooling just seems to work.
It’s not the tooling for me, macOS is just bad as a server OS for many reasons. Weird collisions with desktop security features, aggressive power saving that you have to fight against, root not being allowed to do root stuff, no sane package management, no OOB management, ultra slow OS updates, and generally but most importantly: the UNIX underbelly of macOS has clearly not been a priority for a long time and is rotting with weird inconsistent and undocumented behaviour all over the place.
Linux is not immune to BIOS/UEFI firmware attacks either. Secure Boot, TPM, and LUKS can work well together, but you still depend on proprietary firmware that you do not fully control. LogoFAIL is a good example of that risk, especially in an evil maid scenario involving temporary physical access. I think Apple has tighter control over this layer.
Provisioning, remote management, containers, virtualization, networking, graphics (and compute), storage, all very different on Mac. The real question is what you would expect to be the same.
For server usage? macOS is the least-supported OS in terms of filesystems, hardware and software. It uses multiple gigabytes of memory to load unnecessary user runtime dependencies, wastes hard drive space on statically-linked binaries, and regularly breaks package management on system upgrades.
At a certain point, even WSL becomes a more viable deployment platform.
My thinking is that I'd pick this, because I can't just plug a Mac into a slot in my server and have it easily integrate with all my other hardware across an ultra fast bus.
If they made an M4 on a card that supported all the same standards and was price competitive, though, that might be a good option.
~$1000 for the Pro B70, if Microcenter is to be believed:
https://www.microcenter.com/product/709007/intel-arc-pro-b70...
https://www.microcenter.com/product/708790/asrock-intel-arc-...
https://www.bhphotovideo.com/c/product/1959142-REG/intel_33p...
When 32GB NVIDIA cards seem to start at around $4000 that's a big enough gap to be motivating for a bunch of applications.
32GB? Meh.
Probably 160 GB for $4,000.